I've been searching for an answer to this but I don't seem to be getting anywhere. Perhaps I'm asking the wrong question.
Basically I have a single ESXi host with a single guest and a managed gigabit switch.
The host is a Dell R610 with four gigabit NICs.
Guest is Windows Server 2012 R2 terminal server running hospitality apps, property management, office, etc. for remote users.
I'm upgrading my physical infrastructure to gigabit to help with our terrible Visual FoxPro hotel management suite. In the process I want to make use of all of the empty NICs that were never connected in the first place.
My question is should I pass all 4 physical NICs to the guest and team them in the OS?
On the other hand, I could enable all 4 physical NICs and pass only one to the guest, but will this benefit the guest in any way?
I'm not overly concerned about host management. I only take the host down once in a while to back it up.
Thank you for your help and I'm sorry if this has been addressed previously.
I would do both: team the NICs at the ESXi host level, then assign 2 or more vNICs to the VM and team them at the Windows level for more bandwidth.
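As a rough sketch of what that looks like on the command line (the commands are standard esxcli and Windows PowerShell, but the vSwitch name, vmnic numbers, team name, and adapter names are assumptions you'd adjust for your environment):

```shell
# --- On the ESXi host: add the unused physical NICs as uplinks to the vSwitch ---
# Assumes the default standard vSwitch is vSwitch0 and vmnic0 is already attached.
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Make all four uplinks active (load balancing defaults to "route based on
# originating virtual port ID", which needs no switch-side configuration).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3

# --- Inside the Windows Server 2012 R2 guest: team two or more vNICs ---
# "GuestTeam" and the adapter names are placeholders; check Get-NetAdapter first.
# New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
#     -TeamingMode SwitchIndependent
```

Note that a single VM connection still rides one uplink at a time with the default policy; the teaming buys you failover and spreads multiple sessions across links rather than multiplying the speed of any one session.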
I think I misunderstood how the host handles multiple NICs. I was under the impression that I would have to pass each physical NIC through to the guest as a device. That doesn't appear to be the case at all. The way I understand it now is that the guest sees a vNIC connected to a vSwitch, and on the other side of the vSwitch are the physical NICs acting as uplinks.
I'm also under the impression that I don't need to worry about IP addressing on the other three NICs when I bring them online, if all I'm after is load balancing and failover. Is this correct?
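That understanding matches how ESXi works: the physical uplinks (vmnics) carry no IP configuration of their own; IPs live only on VMkernel interfaces on the host side and on the vNIC inside the guest OS. A quick way to verify this from the ESXi shell (these are standard esxcli commands, no environment-specific names assumed):

```shell
# List the physical adapters: name, driver, link state, and speed, but no IPs.
esxcli network nic list

# Show each standard vSwitch and which vmnics back it as uplinks.
esxcli network vswitch standard list

# The only host-side IP addresses belong to VMkernel (vmk) interfaces.
esxcli network ip interface ipv4 get
```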
Hi mprobert, it might help if you clarify what you are trying to achieve and what issues you are facing.
From reading this thread, I understand that you have a single host with 4 physical NICs and want to make use of the additional uplinks.
Is this to resolve performance issues or simply for redundancy?