HP DL585 with a Broadcom dual port NIC onboard and three Intel dual port NICs in the PCI slots, making a total of 8 physical NIC ports.
Spread the network load four ways: Service Console (VLAN tagged), Virtual Machine (VLAN tagged), Virtual Machine (private switch), and VMotion (private switch).
2 wires pulled from the private switch for the VMs and VMotion (vmnic1 and vmnic2)
6 wires attached to the trunked switch in our rack (vmnic0, vmnic3/4/5/6/7).
vSwitch0 - Service Console port group, port groups for each of the VM VLAN tags.
vSwitch1 - VMkernel port group for VMotion, private switch port group for VMs.
vSwitch0 has good redundancy and failover among its six physical ports; we raised the number of ports to 120 for capacity, although we do not expect to go beyond 60.
vSwitch1 has redundancy and failover across its ports; we also raised its number of ports to 120.
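On ESX 3, the two-vSwitch layout above could be built from the service console with `esxcfg-vswitch`; a minimal sketch (the port group names and VLAN IDs here are placeholders, not the actual production values):

```shell
# vSwitch0: six uplinks to the trunked switch, SC + tagged VM port groups
esxcfg-vswitch -a vSwitch0:120                 # create vSwitch0 with 120 ports
esxcfg-vswitch -L vmnic0 vSwitch0              # link the six trunked uplinks
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -L vmnic4 vSwitch0
esxcfg-vswitch -L vmnic5 vSwitch0
esxcfg-vswitch -L vmnic6 vSwitch0
esxcfg-vswitch -L vmnic7 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0   # VLAN 10: placeholder
esxcfg-vswitch -A "VM VLAN 20" vSwitch0              # one port group per VM VLAN
esxcfg-vswitch -v 20 -p "VM VLAN 20" vSwitch0        # VLAN 20: placeholder

# vSwitch1: two uplinks to the private switch, VMotion + private VM traffic
esxcfg-vswitch -a vSwitch1:120
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -A "VM Private" vSwitch1
```

`esxcfg-vswitch -l` afterwards should list both vSwitches with their port groups and uplinks.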
Does anyone see any issues with this setup? Is mixing VMkernel and VM traffic on the same vSwitch a problem, or does the port group isolation take care of that?
Be aware that running your Service Console on a VLAN trunk can give you problems when the vmkernel is not running, such as in rescue mode.
Also, don't put the 2 ports of one NIC in the same vSwitch. Should the PCI card fail, you would immediately lose 2 NICs!
My setup would be:
- buy a third dual port NIC.
- Service Console on internal NIC 1 and port 1 of PCI NIC 1
- VMotion on internal NIC 2 and port 1 of PCI NIC 2
- Virtual Machines on port 2 of PCI NIC 1, port 2 of PCI NIC 2, and then use ports 1 and 2 of NIC 3.
My real world DL585 config looks like this:
Nr  NIC         Function          VLAN          P-Switch
1   External 1  Virtual Machines  VLAN trunk    P-Switch 1
2   External 1  VMotion           VLAN VMotion  P-Switch 1
3   External 2  Virtual Machines  VLAN trunk    P-Switch 1
4   External 2  Service Console   VLAN MGMT     P-Switch 1
5   External 3  Virtual Machines  VLAN trunk    P-Switch 1
6   External 3  VMotion           VLAN VMotion  P-Switch 2
7   External 4  Virtual Machines  VLAN trunk    P-Switch 2
8   External 4  Service Console   VLAN MGMT     P-Switch 2
9   Internal 1  Not patched
10  Internal 2  Not patched
The VMotion and MGMT VLANs are not trunked. Extra redundancy comes from using 2 physical switches.
Gabrie
There isn't an issue with it, but the recommended setup is for the console to have its own dedicated NIC. The reasoning is that if you shared the console NIC with VMs, and an application on one of those VMs started thrashing the network, it could affect your ability to connect to your console.
Makes sense, I could create another vSwitch that is just the console and would still have good capacity left.
vSwitch0 - SC with two vmnic ports
vSwitch1 - VMs with four vmnic ports
vSwitch2 - VMkernel/VMotion and private VM network with two vmnic ports
Any issues in having VMotion on the same switch as VM traffic?
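Assuming the eight vmnics keep the numbering from the original post (an assumption; any six of them would do), that three-way split could be sketched as:

```shell
esxcfg-vswitch -a vSwitch0        # dedicated Service Console switch, 2 uplinks
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0

esxcfg-vswitch -a vSwitch1        # virtual machine switch, 4 uplinks
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -L vmnic6 vSwitch1
esxcfg-vswitch -L vmnic7 vSwitch1

esxcfg-vswitch -a vSwitch2        # VMkernel/VMotion + private VM switch, 2 uplinks
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
```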
If no one else adds more insights I will award you all the points.
If you have that many NICs I would definitely isolate the VMkernel from the VMs and give VMotion its own NIC...
VMotion Minimum Network Requirements
- Two NICs, with at least one GigE NIC dedicated to VMotion.
- For best security, dedicate the GigE NIC to VMotion and use VLANs to divide the virtual machine and management traffic on the other NIC.
Network Best Practices
- One dedicated NIC for the service console (10/100 or GigE).
- One dedicated NIC for VMotion (GigE).
- One or more NICs for virtual machines (10/100 or GigE).
Here are some good network guides too...
VMware ESX Server 3 802.1Q VLAN Solutions - http://www.vmware.com/pdf/esx3_vlan_wp.pdf
Networking Virtual Machines - http://download3.vmware.com/vmworld/2006/TAC9689-A.pdf
Networking Scenarios & Troubleshooting - http://download3.vmware.com/vmworld/2006/tac9689-b.pdf
ESX3 Networking Internals - http://www.vmware-tsx.com/download.php?asset_id=41
High Performance ESX Networking - http://www.vmware-tsx.com/download.php?asset_id=43
Network Throughput in a Virtual Infrastructure - http://www.vmware.com/pdf/esx_network_planning.pdf
Hello,
VMotion really needs to be on its own NIC(s) and on its own VLAN.
You could look at a config where the service console has a dedicated NIC for normal use plus a standby NIC that is shared with the VMs. The VMs would not use the service console's NIC; they would use the remaining NICs and only share with the service console in the event of a failure on the service console's NIC.
If you do have to combine roles such as the SC, VMkernel, or VMotion onto the same vSwitch with 2 or more pNICs, you can override the NIC teaming properties so that you end up with:
vSwitch0
SC - vmnic0 active; vmnic1 standby
VMotion - vmnic1 active; vmnic0 standby
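In ESX 3 the per-port-group active/standby override itself is set in the VI Client (port group > Edit > NIC Teaming > Failover Order); the shared vSwitch underneath can be sketched from the CLI like this:

```shell
# One vSwitch, two uplinks, two port groups. The active/standby override
# per port group is then configured in the VI Client NIC Teaming tab.
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0   # SC: vmnic0 active, vmnic1 standby
esxcfg-vswitch -A "VMotion" vSwitch0           # VMotion: vmnic1 active, vmnic0 standby
```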
I very much appreciate everyone's input and wish there were more points to spread around. Here is what we finally decided on.
We have two physical switches with VLAN tagging and another private physical switch for a total of three physical switches. The onboard Broadcom NIC ports will not failover to the PCI Intel ports so we decided just to hold the two Broadcom ports in reserve. This leaves us 6 ports on 3 dual NICs to work with.
vmnic0, PCI slot 1, vSwitch0, Physical Switch1 VLAN tag - Service Console
vmnic5, PCI slot 2, vSwitch0, Physical Switch2 VLAN tag - Service Console
vmnic3, PCI slot 1, vSwitch1, Physical Switch1 VLAN tag - VM public network
vmnic6, PCI slot 3, vSwitch1, Physical Switch2 VLAN tag - VM public network
vmnic4, PCI slot 2, vSwitch3, Physical Switch Private no tagging - VMotion
vmnic7, PCI slot 3, vSwitch3, Physical Switch Private no tagging - VM private network
vmnic4, standby for vmnic7
vmnic7, standby for vmnic4
vmnic1, onboard, vSwitch1, Physical Switch1 VLAN tag - VM public network reserve
vmnic2, onboard, vSwitch1, Physical Switch2 VLAN tag - VM public network reserve
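As a sketch, the final layout above translates to roughly the following (the port group names, VLAN IDs, and the VMotion IP address are placeholders):

```shell
# vSwitch0 - Service Console, one uplink to each tagged physical switch
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic5 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

# vSwitch1 - VM public network, tagged, one uplink per physical switch
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic6 vSwitch1
esxcfg-vswitch -A "VM Public" vSwitch1
esxcfg-vswitch -v 20 -p "VM Public" vSwitch1   # VLAN 20: placeholder tag

# vSwitch3 - VMotion and private VM network on the untagged private switch
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic7 vSwitch3
esxcfg-vswitch -A "VMotion" vSwitch3
esxcfg-vswitch -A "VM Private" vSwitch3
esxcfg-vmknic -a "VMotion" -i 10.0.0.11 -n 255.255.255.0   # placeholder IP
```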
I appreciate the tip about not using a VLAN tag on the Service Console, since that requires the vmkernel and will not work in rescue mode. Our thinking is that if we are in rescue mode in ESX, we will be in the machine room at the console or using HP's Lights-Out remote console, and will not be depending on the network.
Can VMKernel be on a separate Gigabit switch altogether? How would this affect the default gateway setting for this connection type?
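For what it's worth, the VMkernel stack has its own default gateway, separate from the Service Console's; on a closed private VMotion switch it usually needs no gateway at all, but if it does, it is set with `esxcfg-route` (the address below is a placeholder on a hypothetical VMotion subnet):

```shell
esxcfg-route             # show the current VMkernel default gateway
esxcfg-route 10.0.0.254  # set it (placeholder gateway on the VMotion subnet)
```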
Your switch will be informed by ESX that the IP is on a different port when failover occurs (I think).
Gabrie