I am building an ESX 4.0 host and I am wondering if I should separate my iSCSI network onto its own vSwitch. Currently, in the building stage, I have one vSwitch (vSwitch0) with the VM Network, VMkernel, and Service Console port groups. Would you leave it like this, or should I move the VMkernel port to its own vSwitch (vSwitch1)?
My immediate reaction is - YES! - separate your iSCSI traffic from all other traffic. If you have a 4-NIC host, dedicate a vSwitch and 2 uplinks to iSCSI, even if it means all other port groups share the other two NICs.
With that said, dedicating NICs isn't always a luxury you have in smaller environments, so sharing them is technically workable. If you want to share your host details, I can help with a recommended configuration.
To increase performance, simplify troubleshooting, and improve security, use a dedicated network for your IP storage.
A dedicated physical network is best, but a dedicated VLAN (with good physical switches) can also work fine.
You might not need all those NICs for your iSCSI network, especially if you've only got 3 VMs. Do you have a dual-NIC and a quad-NIC card? Try giving more NICs to vSwitch0 and using VLANs to separate your SC network from the VM Network.
vSwitch0 looks okay with the two uplinks, assuming you don't mind having the Service Console on the same interfaces as the VM traffic (I personally would separate the SC traffic, and perhaps limit it to a private, unrouted network if security is a concern).
vSwitch1 and the associated iSCSI traffic can be reduced to 2 uplinks. The other option there is to create an additional VMkernel/iSCSI port group under that same vSwitch, with each port group prioritizing a different uplink and failing over to its partner -- assuming your storage device supports it. This provides full support for round-robin multipathing as well as giving you a performance boost.
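A rough sketch of that dual-VMkernel layout from the classic ESX 4.0 service console (the vSwitch name, port group names, and IPs here are examples, not your actual config):

```sh
# Link both iSCSI uplinks to the iSCSI vSwitch (names are examples)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1

# Two VMkernel port groups, one intended per uplink (example IPs)
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.0.10.12 -n 255.255.255.0 iSCSI2
```

The per-port-group failover order (iSCSI1 active on vmnic4 / standby on vmnic5, and the reverse for iSCSI2) is then set in the vSphere Client under the port group's NIC Teaming policy.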
I also noticed no dedicated uplinks for vMotion -- if that is something you plan on implementing, use one or two of the uplinks you freed up from vSwitch1 and keep the vMotion traffic isolated.
On a standard vSwitch, VMs are balanced across the physical NICs (by default, based on the originating virtual port ID). This means a single VM gets at most one 1 Gb/s NIC's worth of bandwidth.
What I would suggest, to use the NICs you have:
Attach 4 physical NICs to vSwitch0; this way each VM, as well as the Service Console, can use a separate physical 1 Gb/s NIC.
Attach 2 physical NICs to the iSCSI vSwitch1. This will give you the necessary redundancy.
With a 6-NIC config I would do the following:
2 NICs shared by the Service Console and vMotion, 2 for VM traffic, and 2 for iSCSI:
vmnic0 = Active SC / Standby vMotion / Standby VM Network
vmnic1 = Active vMotion / Standby SC / Standby VM Network
vmnic2 = Active VM Network
vmnic3 = Active VM Network
vmnic4 & vmnic5 = iSCSI traffic (both active)
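That layout could be stubbed out from the classic ESX service console roughly as follows (vSwitch names and the vMotion IP are examples; the active/standby failover order per port group is then set in the vSphere Client under NIC Teaming):

```sh
# Service Console / vMotion vSwitch with two uplinks (example names/IP)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A VMotion vSwitch0
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 VMotion

# VM traffic vSwitch
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# iSCSI vSwitch
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
```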
Just so everyone knows my config: I have one host, currently with 4 physical NICs (two are dual-port), and a Promise Technologies iSCSI SAN. I have 3 active VMs: one Windows 2008 server, one Windows 2003 server, and one Windows 2003 terminal server that only gets minimal use. I do not have vMotion (not even sure what vMotion is), and so far today the system seems to be running quite well. This upgrade was done yesterday (Sunday, April 11th).
iSCSI traffic can be separated from other traffic in a vSwitch by using VLANs.
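For example, on classic ESX a port group can be tagged with a VLAN ID from the CLI (the port group name and VLAN ID below are examples; the physical switch ports must be trunking that VLAN for this to work):

```sh
# Tag the iSCSI port group with VLAN 20 (example values)
esxcfg-vswitch -v 20 -p iSCSI1 vSwitch0
```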
Depending on your iSCSI storage device, you may also have the option of using jumbo frames. Make sure that all the endpoints and all connections along the way can support them and are configured for jumbo frames. If any link along the path does not, you will be limited to the lowest common setting in the chain, or whatever could be negotiated.
The default frame size for VMware vSwitches and many storage arrays is 1500 bytes; jumbo frames are 9000 bytes and widely supported for storage. You would need to configure the vSwitch with -m 9000, and the VMkernel port with -m 9000 as well. You can look at your physical switch counters to see if you are getting congestion errors, and the frame size statistics will tell you whether the large frames are actually being passed. If they are not, there is a good chance jumbo frames are not being passed end to end. While your physical switches and components may be jumbo frame capable, they usually are not configured that way out of the box.
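On ESX 4.0 those -m 9000 settings have to be applied from the CLI; a sketch (the vSwitch/port group names and IP are examples, and note the VMkernel NIC has to be re-created to pick up the larger MTU):

```sh
# Set the vSwitch MTU to 9000
esxcfg-vswitch -m 9000 vSwitch1

# Re-create the VMkernel NIC with a 9000-byte MTU (example IP/port group)
esxcfg-vmknic -d iSCSI1
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 -m 9000 iSCSI1

# Verify the MTU took effect
esxcfg-vmknic -l
```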
ESX 4.x also supports iSCSI software initiator multipathing, where you can add an additional vSwitch and vmknics to support the Round Robin path selection policy (PSP) for load balancing, and also to provide redundancy of network paths. After creating the additional vSwitch, the VMkernel ports also have to be bound to the iSCSI initiator. A simple way to verify that the multipathing is working is to open esxtop and look at the VMkernel NIC data transfers.
While most of these steps are easily done with CLI commands or the GUI, I believe that some, like binding the VMkernel port groups to the iSCSI initiator and initially creating the vSwitch with -m 9000, have to be done using the CLI in the current shipping ESX release.
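The binding step looks roughly like this on ESX 4.0 (the vmk interface numbers and vmhba name are examples; check your actual names with esxcfg-vmknic -l and esxcfg-scsidevs -a):

```sh
# Bind each iSCSI VMkernel NIC to the software iSCSI HBA (example names)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```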