We recently purchased two Dell R910 servers to replace four IBM x3650s. The new Dell servers have twelve networking ports across three physical four-port NICs, one of which is onboard. Wanting to follow best practices, can anyone offer suggestions for configuring these ports? Here's what I'm thinking...
vSwitch0 - Service Console and vMotion
vSwitch1 - VLAN x VMs
vSwitch2 - VLAN y VMs
pNIC1 (onboard) - vmnic0, vmnic1, vmnic2, vmnic3
pNIC2 - vmnic4, vmnic5, vmnic6, vmnic7
pNIC3 - vmnic8, vmnic9, vmnic10, vmnic11
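For reference, a layout like this could be scripted from the classic ESX service console. This is only a sketch, assuming ESX 4.x's `esxcfg-*` tools; the port-group names, VLAN IDs (100 for VLAN x, 200 for VLAN y), and vMotion IP are placeholders:

```shell
# Sketch only: vSwitch0 already exists by default with the Service Console
# port group, so we just link the remaining onboard uplinks to it.

# vSwitch0: Service Console + vMotion (onboard quad-port NIC)
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 "vMotion"   # placeholder IP

# vSwitch1: VLAN x VMs (first add-on card)
esxcfg-vswitch -a vSwitch1
for nic in vmnic4 vmnic5 vmnic6 vmnic7; do esxcfg-vswitch -L $nic vSwitch1; done
esxcfg-vswitch -A "VLAN-x-VMs" vSwitch1
esxcfg-vswitch -v 100 -p "VLAN-x-VMs" vSwitch1

# vSwitch2: VLAN y VMs (second add-on card)
esxcfg-vswitch -a vSwitch2
for nic in vmnic8 vmnic9 vmnic10 vmnic11; do esxcfg-vswitch -L $nic vSwitch2; done
esxcfg-vswitch -A "VLAN-y-VMs" vSwitch2
esxcfg-vswitch -v 200 -p "VLAN-y-VMs" vSwitch2
```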
A few questions...
1. Are four ports for SC and vMotion overkill?
2. If yes to above, would there be an advantage to reallocate any additional ports to either VM pool?
Thanks in advance for any input/suggestions.
Nah, 4 NICs for SC and vMotion isn't overkill, especially not now that you can do 4 concurrent vMotions in 4.1 (8 with 10GbE).
Are all pNICs connected to trunk ports on the switches?
I don't know why you split vSwitch1 and vSwitch2; I'd rather use one vSwitch for all VMs with 8 pNICs.
If you are not going to use ESX Enterprise Plus (the license required for distributed switches with "Route based on physical NIC load" load balancing) and 10Gb uplinks, I can't see a benefit to using four NICs for SC and VMkernel.
I would use the embedded pNIC1 / vmnic0 (also handy for scripted installs) plus any of the pNICs on the add-on cards for SC and VMkernel.
If you have the same VLANs trunked to all other pNICs, then create one vSwitch; there's no need for two.
If you have different VLANs on different network connections, then group them together across the embedded and add-on NICs:
Production: vmnic1, vmnic4, vmnic8
TEST: vmnic2, vmnic5, vmnic9
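To sketch that grouping (hypothetical vSwitch and port-group names, again assuming the classic ESX 4.x service-console CLI):

```shell
# Uplinks for each workload span the onboard NIC and both add-on cards,
# so losing any single card never isolates a whole group.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1   # onboard
esxcfg-vswitch -L vmnic4 vSwitch1   # add-on card 1
esxcfg-vswitch -L vmnic8 vSwitch1   # add-on card 2
esxcfg-vswitch -A "Production" vSwitch1

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2   # onboard
esxcfg-vswitch -L vmnic5 vSwitch2   # add-on card 1
esxcfg-vswitch -L vmnic9 vSwitch2   # add-on card 2
esxcfg-vswitch -A "Test" vSwitch2
```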
Also make sure the add-on quad-port NICs are on different PCIe buses, so a single bus failure can't take out both cards at once...