uyozTic
Enthusiast

Swapping a 2-port NIC for a 4-port NIC on a host

Host: ESXi 6.5

I want to remove this 2-port NIC (EXPI9402PT) and replace it with the same card in a 4-port version (EXPI9404PT).

There are no other expansion slots available, so I can't add the new card and keep the 2-port one; the 2-port card gets replaced.

I've looked at a couple of KB docs about this, and it doesn't seem to be a big issue. It's a home system.

Anyone know of gotchas to be on the lookout for when I do this?

4 Replies
Dave_the_Wave
Hot Shot
Accepted Solution

The drivers are already built into your install, so just go ahead and swap it.

You may have to set up the IPv4/IPv6 settings again from the host's direct console (DCUI) after it boots up, and then connect with the client to redo your vSwitch0 and vmk0.
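
If you'd rather do it from the ESXi shell than the DCUI, a minimal sketch for 6.5 looks something like this; the vmnic numbers and addresses are placeholders for your own values:

  # Confirm the new quad-port card was detected
  esxcli network nic list

  # Re-add the uplinks to vSwitch0 (names can shift after a card swap)
  esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

  # Re-point the management interface at a static address (example values)
  esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.50 --netmask=255.255.255.0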

The diagrams on this thread may help:

https://communities.vmware.com/thread/583631

uyozTic
Enthusiast

Appreciate the link to your post, "How to use vmnic0 and vmnic1 properly for performance". Very helpful!

Your statement in that thread, "I'd like a standard set up where host1 will run vm's off the datastore of host2. Because running vm's off their own host's datastore is a no no.", caught my attention. My host does have its datastores local to it, each on its own SSD. I'll search the KBs on that and maybe post a different thread about it.

Thanks for helping with the NIC swap issue.

Dave_the_Wave
Hot Shot

That's right.

If you have a small datacenter without any vSAN or High Availability, it is generally a good idea to have one tough, resilient box act as the datastore, used by other hosts installed on less critical boxes; those hosts can boot off a flash drive. For example, HPE ProLiant servers have a spot on the mainboard for an SD card of your choice.

Should a host die (something simple like the RAID card frying, or the power supply smelling like burnt toast), we log in to another working host, re-add the stranded .vmx files into its inventory, and the VMs are up and running again within minutes.
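
On ESXi that re-add is just a couple of shell commands; the datastore path and VM name below are made-up examples:

  # Register the stranded VM from the surviving datastore
  vim-cmd solo/registervm /vmfs/volumes/datastore2/web01/web01.vmx

  # List the inventory to find its new ID, then power it on
  vim-cmd vmsvc/getallvms
  vim-cmd vmsvc/power.on 12   # use the ID that getallvms reports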

But with hosts that each have their own datastore, if host1/datastore1 were to die it can take hours to get back online, and that assumes the best-case scenario of spare parts on hand; otherwise it takes however long a restore from backup takes. Whatever is done, it is surely not going to be minutes.

Most servers have two 1Gb physical adapters, the minimum requirement for NIC teaming; four teamed together is pretty solid, and for anything more you'd have to go 10Gb twinax.
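
As a rough sketch, teaming all four ports on a standard vSwitch would look something like this (uplink names assumed):

  # Add the two extra ports as uplinks to vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

  # Make all four active in the failover order
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3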

uyozTic
Enthusiast

I see; that makes for good sense/practice. At first I had the datastore on my Synology NAS, but Synology updates/reboots became a pain, so I elected to get a few SSDs and put them in the host. For my home/hobby/minimal-budget setup, a host+datastore box fit the bill. Now, reading from your educated experience, maybe a new home project for me will be putting those SSDs in their own box. I do back up the ESXi flash boot sticks, test them, and back up the VMs, though Murphy's Law is never too far away.

I've gotten way off topic, but I'm glad you explained the datastore-separation best practice along with an example. Very helpful!