VMKernel & Data traffic trunk port: bandwidth contention...
I got hold of a design guide that recommends the following for an optimized network configuration on a vSphere physical server (using gigabit NICs):
Gb NIC1: Service Console (Mgmt) & data traffic (guest VM usable network)
Gb NIC2: VMkernel for iSCSI traffic
Gb NIC3: VMkernel for vMotion
I understand the concept behind this, but I'm running into a potential design problem related to NIC1. I haven't tested this and am new to VMware, so this may be an easy problem solved by pointing out an oversight.
Say an admin wants to move a VM from physical server 1 to physical server 2. They use the VM Client, located on a third computer, typically off the Service Console segment. When this move is initiated, my questions are:
1) Does the traffic flow from VMServer1 -> VM Client PC -> VMServer2? Or does it go directly from VMServer1 -> VMServer2?
2) If this VM is represented by a large file (40 GB, for example), how does the bandwidth get throttled so it doesn't kill the other running VMs' network traffic on NIC1? The assumption here is that this data network is trunked and carrying many VMs' worth of traffic.
3) What is the best way to move these things around? A 10 Gb NIC seems ideal, but that's expensive. Cloning a LUN might work, but that's a lot of admin work to set up a new VM. I kind of like trunking the server's 3 NICs (6 with redundancy) into one fat pipe (802.3ad) and setting up priority queuing. Has anyone played with that?
Thx!
Hello WR111,
1) No, traffic will not pass through the VM Client PC. It is all direct communication between the two hosts.
2) First, where is the VMDK located: local storage or shared storage? If shared storage, you don't need to physically move the VMDK unless you want to offload disk I/O to a different datastore.
If you are only concerned with offloading CPU and RAM usage from one host to the other, and the VM is on shared storage, consider vMotion instead of migrating VMDK files. Only NIC3 will be used, to migrate the VM's machine state rather than the actual data files.
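To put rough numbers on the bandwidth concern: a vMotion moves the VM's active memory, not the 40 GB VMDK, so the two cases differ by an order of magnitude. A back-of-the-envelope sketch (the 4 GB RAM size and the ~70% usable-throughput figure are illustrative assumptions, not measurements):

```python
# Rough transfer-time estimate over a dedicated gigabit VMkernel NIC.
# Assumes ~70% usable throughput on GbE (illustrative, not measured).
GBE_USABLE_BPS = 0.70 * 1_000_000_000 / 8  # bytes/sec, ~87.5 MB/s

def transfer_seconds(size_gb: float) -> float:
    """Seconds to move size_gb gigabytes at the assumed usable rate."""
    return size_gb * 1_000_000_000 / GBE_USABLE_BPS

vmotion_memory_gb = 4    # hypothetical VM with 4 GB of active RAM
vmdk_gb = 40             # the 40 GB disk file from the question

print(f"vMotion (memory only): ~{transfer_seconds(vmotion_memory_gb):.0f} s")
print(f"Full VMDK copy:        ~{transfer_seconds(vmdk_gb):.0f} s")
```

Even a transfer several minutes long stays off the guest data network when it runs over a dedicated vMotion NIC (NIC3 above), which is the point of the design guide's separation.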
3) My answer to this will depend on how the VM is stored (local vs. shared).
Regards,
Trevor
=====================================================================================
If any of my responses have been helpful in any way, please rate accordingly. Thank you and Happy VM'ing!
Hey raz, thanks for the reply; that helps. In some cases, the VM transfer would be done via SAN only. And it sounds like the transfer between two servers on the Service Console subnet would bypass the VM Client completely, which is a nice feature.
However, in some cases we might have a DEV (subnet) VM -> SAN migration, for example, or a DEV VM -> QA VM migration. In these cases, I am wondering how the transfer would affect the trunk uplinks to the firewall, which is separating the DEV PCs from the Service Console subnet.
thanks,
Will
Hmm... in that case, consider NIC teaming if performance is noticeably unacceptable with the single-NIC setup. Unfortunately, it is hard to anticipate the performance impact without knowing a bit more about your network details.
Try running tests with your current setup and observing the network usage graphs in vCenter to estimate how performance would be impacted. For example, disk I/O might take the biggest hit on your network since you are using iSCSI storage. In that case, if you are using multiple vSwitches, consider attaching two physical NICs to the storage vSwitch in a teaming configuration.
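One caveat on the 802.3ad idea from the original question: link aggregation balances traffic per flow, not per packet, so a single migration or iSCSI session still tops out at one NIC's speed; the benefit comes from spreading many concurrent flows across the team. A toy Python illustration of the idea (the hash below is a simplified stand-in, not ESX's actual IP-hash algorithm):

```python
def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int = 2) -> int:
    """Choose an uplink for a flow by hashing its endpoint addresses.
    Simplified stand-in for an IP-hash teaming policy."""
    s = int(src_ip.rsplit(".", 1)[1])  # last octet of source address
    d = int(dst_ip.rsplit(".", 1)[1])  # last octet of destination address
    return (s ^ d) % num_uplinks

# A given flow always hashes to the same uplink, so it can never use
# more than one NIC's bandwidth; different flows spread out.
flows = [("10.0.0.5", "10.0.1.10"), ("10.0.0.5", "10.0.1.11"),
         ("10.0.0.6", "10.0.1.10")]
for src, dst in flows:
    print(src, "->", dst, "uses uplink", pick_uplink(src, dst))
```

So teaming adds headroom for the aggregate of many VMs' traffic, but it does not by itself protect other VMs from one large transfer; that is where traffic prioritization comes in.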
Regards,
Trevor
