I am looking into optimizing traffic between VMs on multiple ESX hosts, and I'm falling a bit short in figuring out which method would work best.
Currently the majority of my company connects via Terminal Servers over a private MPLS network. The Terminal Servers are hosted across two ESX servers. The majority of our main applications are hosted on a third ESX server.
My corporate office is local to the servers and runs most of the software directly from their PCs.
The corporate office, the branch offices (coming in over MPLS), and the servers are already split across a few VLANs to help isolate traffic.
What I am wondering is whether it would be possible and beneficial to add another two-port NIC to each ESX host, dedicated to handling traffic from a VM on one ESX host to a VM on another host - and whether that can be done without modifying settings on the VMs themselves. Specifically, setting up routes on the ESX hosts so that any traffic going to specific IPs/ranges goes over the other two NICs.
Is that possible? Will it function? And would there be an advantage to doing this?
The connections are all currently 1 Gb links to 1 Gb switches, and there are two NICs for the main LAN traffic. While it doesn't seem we are saturating the network yet, those two NICs are carrying all of the traffic from the Terminal Servers to the users, plus all of the traffic from the Terminal Servers to the application server. I ultimately want to isolate that without having to modify each of the VMs.
To answer your question - in a standard vSwitch, or even a Distributed vSwitch, you cannot route traffic over specific network cards. Perhaps in the Nexus 1000V you could set up static routes per uplink, but that's just a guess.
In vSphere (4.x+), when VMs on the same host communicate with each other on the same port group, the traffic never touches the wire.
The only way to do what you're asking without the Nexus (perhaps even with it?) is to add another NIC to the guests in a separate VLAN and then, on the VMs themselves, set up static routes so that they only use the second interface.
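For reference, the guest-side static route in that approach would look something like this. This is just a sketch - the subnets, gateway, and interface name here are hypothetical placeholders, not taken from the poster's environment. Since the Terminal Servers are Windows, the `route` form applies there; the Linux form is shown for completeness:

```shell
# On a Windows Terminal Server guest: send traffic for the application
# server subnet (hypothetical 10.0.20.0/24) via the gateway on the
# second vNIC (hypothetical 10.0.30.1). -p makes the route persistent.
route -p add 10.0.20.0 mask 255.255.255.0 10.0.30.1

# Equivalent on a Linux guest, pinning the route to the second
# interface (hypothetical eth1):
ip route add 10.0.20.0/24 via 10.0.30.1 dev eth1
```

This is exactly the per-VM change the original poster is hoping to avoid, which is why it only comes up as the fallback option.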
Well, it would be common best practice to use NIC teaming to bond the two physical NICs into one logical link between the vSwitch and the physical switch, which would give the vSwitch a 2 Gbit connection to the physical network. Please note that your switch has to support this.
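As a rough sketch of how that teaming is set up from the service console on classic ESX (the vSwitch and vmnic names are assumptions and may differ in your environment):

```shell
# Link a second physical NIC (hypothetical vmnic1) as an additional
# uplink to the existing vSwitch, so both NICs form one team:
esxcfg-vswitch -L vmnic1 vSwitch0

# List all vSwitches to verify both uplinks now appear:
esxcfg-vswitch -l
```

The load-balancing policy for the team (for example, IP-hash if the physical switch ports are configured as a static EtherChannel) is then chosen in the vSphere Client under the vSwitch's NIC Teaming properties.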
Kind regards,
Gerrit Lehr
FredPeterson - Thanks that is what I thought, but I couldn't find anything giving a definitive answer.
And yes, I have the NICs teamed at the moment - I was just trying to see if there was a way to set up a backend network without actually having to redo the infrastructure on each server (adding a new vNIC, IP, possibly a subnet, and custom routes).
I was just reviewing the way you can configure Distributed Virtual Switches, and it looks like - probably as long as the VM is powered on and you use static port binding - you can actually modify settings at the port level and have a port use a specific uplink/vmnic. You'd have to enable policy overrides all the way down to the port level.
But then, if you're using a dVS, you'd need to mess with the dvUplink teaming for everybody in order to keep one of the uplink vmnics isolated while still having it be part of the bigger group for failover purposes.
An option to think about and explore, depending on the complexity of your environment.
I believe the VMs have to be modified:
1. Add NICs to the ESX hosts, create vSwitches, and bind the uplinks to these NICs.
2. Add a virtual NIC to each VM and give it another IP address. Make sure the virtual NICs are in the vSwitches above.
3. Add static routes to the VMs so that traffic between them passes through these virtual NICs.
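The host-side part of step 1 could look roughly like this from the service console on classic ESX (a sketch only; the vSwitch, port group, and vmnic names are hypothetical):

```shell
# Create a new vSwitch dedicated to the backend VM-to-VM traffic:
esxcfg-vswitch -a vSwitch2

# Add a port group for the VMs' second vNICs to connect to:
esxcfg-vswitch -A "Backend" vSwitch2

# Bind the new physical NIC (hypothetical vmnic2) as its uplink:
esxcfg-vswitch -L vmnic2 vSwitch2
```

Steps 2 and 3 then happen per VM: add a second vNIC in the VM's settings, attach it to the "Backend" port group, and add the static route inside the guest.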
Seagle - I was looking for a solution that didn't require modifying most or all of my VMs, something that could be applied globally at the ESX layer.
I believe what I will probably do is just add another NIC card and put it into the same team/vSwitch as the existing NICs to increase the total available bandwidth, rather than modifying every VM with custom NICs and static routes.