I think you're right. If you can apply the configuration with only 2 uplink ports when the dVS has a capacity of, for example, 4, that just means you have less capacity from that particular host. Are the 2 (or 4) 10-Gbit NICs used for VM traffic as well as for management/iSCSI/vMotion?
Looking forward to hearing from you,
Thanks for the reply, Rene.
I simplified a bit so I wouldn't bury the lede. The configuration is a bit more complex than described. To answer your specific question, the dVSwitch in question will carry VM traffic and ESXi Host management traffic. vMotion and Fault Tolerance will be handled on other interfaces not mentioned here.
Alright, so when losing one of the two uplinks (out of the capacity of 4), you're at least not pushing VM traffic aside with a vMotion. Great =)
If I had to approach this, I would just test this out (physically or virtually) with a capacity of 4 uplink ports, having 2 hosts with 2 uplink ports and 2 hosts with 4 uplink ports.
Test whether all uplink ports are actually used on the 4-port hosts, and test failover/failback.
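If it helps when planning that lab test, the expected failover/failback behavior can be sketched as a toy model. This is just a sketch of explicit-failover-order teaming on a dVS port group; the uplink names and teaming order are assumptions for illustration, not your actual config:

```python
def select_uplink(team_order, healthy):
    """Return the first healthy uplink in the teaming order (explicit
    failover order on a port group), or None if every uplink is down."""
    for uplink in team_order:
        if uplink in healthy:
            return uplink
    return None

# Hypothetical port group teamed on Uplink 1 (active) then Uplink 2 (standby).
team = ["Uplink 1", "Uplink 2"]

print(select_uplink(team, {"Uplink 1", "Uplink 2"}))  # Uplink 1  (normal)
print(select_uplink(team, {"Uplink 2"}))              # Uplink 2  (failover)
print(select_uplink(team, {"Uplink 1", "Uplink 2"}))  # Uplink 1  (failback)
```

With "Failback: No" set on the port group, the third case would stay on Uplink 2 instead, which is exactly the kind of difference worth observing in the test.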
Thanks, I appreciate the replies Rene. I'll definitely be testing, but the timelines are a bit compressed, so I wanted to see if I could get independent verification from someone who may be familiar with this specific scenario. I'm sure some people are running clusters with mixed hardware and different pNIC counts, so it's logical that a 4-Uplink dVS should be able to support any Hosts with 4 pNICs or fewer. It just isn't intuitive, and I found myself scratching my head when adding the 2-pNIC Host to the dVS.
That being said, if anyone out there does have this configured, I'd be interested in hearing your experiences or configuration recommendations.
I have validated this configuration as previously described. A dVS with 4 Uplinks can support Hosts with "up to" 4 interfaces. Hosts with only 2 interfaces will have 2 of the 4 dVS uplinks associated with pNICs, while the other 2 remain unused. In my case, I decided to use Uplinks 1 and 2; Uplinks 3 and 4 are not used on these Hosts.
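For anyone who finds this thread later, the resulting association can be summed up in a minimal sketch. The `vmnic` names are illustrative, not from the actual hosts:

```python
# Uplink ports defined on the dVS (capacity of 4, as discussed above).
DVS_UPLINKS = ["Uplink 1", "Uplink 2", "Uplink 3", "Uplink 4"]

def associate(pnics):
    """Pair a host's pNICs with the dVS uplinks in order; uplinks beyond
    the host's pNIC count simply remain unused on that host."""
    mapping = dict(zip(DVS_UPLINKS, pnics))
    unused = [u for u in DVS_UPLINKS if u not in mapping]
    return mapping, unused

# A 2-pNIC host: Uplinks 1 and 2 get pNICs, 3 and 4 stay empty.
mapping, unused = associate(["vmnic0", "vmnic1"])
print(mapping)  # {'Uplink 1': 'vmnic0', 'Uplink 2': 'vmnic1'}
print(unused)   # ['Uplink 3', 'Uplink 4']
```

A 4-pNIC host run through the same function leaves `unused` empty, which matches the mixed-cluster behavior described above.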