I got curious, so I tried the following on a W2K3 VM in VMware Workstation. Since ESX 3 can present a vNIC as an Intel PRO/1000 MT card, you should be able to try the same.
1) Added two vNICs to the VM.
2) Shut down VMware Workstation and edited the .vmx to add the lines below. I already had an existing vNIC set up the same way.
ethernet1.virtualDev = "e1000"
ethernet2.virtualDev = "e1000"
3) Booted up the VM and then downloaded Intel's base driver, PROSet, ANS, etc. package from here: http://www.intel.com/support/network/sb/cs-006120.htm
4) Ran the package and it updated the NIC drivers on all 3 vNICs.
5) Brought up the properties for one of the NICs in Windows, clicked Configure for the NIC, and was then able to create a new NIC team with the 2 vNICs that I added.
I had some funny ping behaviour, which I fixed by rebooting the VM and disabling the original vNIC; not sure whether it was the reboot or the disabling of the vNIC that fixed things. I set up the team with Adaptive Load Balancing, but with this install I have no way to test it, so I'd be interested to hear whether it improves things for you.
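The .vmx edit in step 2 can be sketched as a small helper (my sketch, not from the post; the file path and the ethernet indices are assumptions, and the VM must be powered off before touching the file):

```python
# Minimal sketch of step 2: append e1000 device-type entries for the
# two new vNICs to a powered-off VM's .vmx file. The path and the
# ethernet indices (ethernet0 being the pre-existing vNIC) are assumptions.
def add_e1000_entries(vmx_path, indices=(1, 2)):
    with open(vmx_path, "a") as f:
        for i in indices:
            f.write(f'ethernet{i}.virtualDev = "e1000"\n')
```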
See these posts...
Bonding NICs on a VM - http://www.vmware.com/community/thread.jspa?messageID=561718
Can a VM use more than one physical NIC on a vSwitch - http://www.vmware.com/community/thread.jspa?messageID=632932
Aggregate multiple NICs into a single pipe - http://www.vmware.com/community/thread.jspa?threadID=83099&messageID=639830#639830
FYI: if you find this post helpful, please award points using the Helpful/Correct buttons.
Visit my website: http://vmware-land.com
Hi, there is no need to install multiple virtual NICs in order to get better throughput. All NICs in the virtual world run at bus speed, no matter what link speed they report. These speed limits also do not apply to virtual switches.
It gets interesting ONLY when leaving the virtual infrastructure: to increase network performance to the real world, simply use NIC teaming at the physical end, i.e. team multiple Gbit NICs together to get better throughput to the real world.
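One caveat worth keeping in mind (my sketch, not from the posts above): most teaming modes balance per flow, hashing each conversation onto one member NIC, so a single stream still tops out at one link's line rate; the aggregate gain shows up across many concurrent flows. Roughly:

```python
import zlib

# Illustrative per-flow hashing, as many teaming/ALB modes do it: each
# src/dst pair sticks to one member NIC. The CRC32 key is an assumption;
# real drivers use their own hash inputs (MACs, IPs, ports).
def pick_team_member(src_mac: str, dst_mac: str, team_size: int) -> int:
    key = f"{src_mac}->{dst_mac}".encode()
    return zlib.crc32(key) % team_size
```

A given pair always lands on the same NIC, which preserves frame ordering but caps any one flow at a single link's speed.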
Only traffic within a vSwitch runs at bus speed. Any traffic leaving the vSwitch is subject to the maximum speed of the physical NIC assigned to the vSwitch.
How does network traffic route on vSwitches?
All traffic between VMs on the same vSwitch stays inside the server and does not hit the physical network, so essentially traffic stays within the box and travels at the speed of the subsystem. Virtual switches allow guest-to-guest communication at maximum bus/processor capability, which on current hardware should easily exceed 100 Mbps.
Same vSwitch is 'Routed Locally' doesn't hit any pNICs
Same vSwitch different PortGroup/VLAN same thing is 'Routed Locally' doesn't hit any pNICs
Between different vSwitches is 'Routed Externally' ie needs to leave one pNIC and come in the other pNIC of the other vSwitch.
You state "Same vSwitch different PortGroup/VLAN same thing is 'Routed Locally' doesn't hit any pNICs"
I do not understand this; different portgroup means different subnet, routed locally??? I hope not! This would seriously impact security on the VM side.
vSwitches DO NOT ROUTE. Meaning that "inter-VLAN" traffic will always go physical (twice) in order to get to the other side of the same vSwitch.
We both mean the same, sort of. The thread you mention also states that traffic from one portgroup to the other IS routed through the physical network.
This HAS to be, for example I use ACLs (access control lists) to deny/permit network traffic between VLANs.
So, why route traffic across the network? Because you just HAVE to when crossing a subnet boundary (from one VLAN to another).
So keep in mind this might impact network performance! Especially if you have lots of inter-VLAN communication, even on VMs within a single box.
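The thread's conclusion can be condensed into a tiny decision rule (a sketch of the discussion above, not VMware code): frames stay inside the host only when both VMs share a vSwitch and a VLAN/port group; crossing a VLAN boundary always goes out a pNIC to the physical router and back in.

```python
# Sketch of the routing rule the thread converges on. vSwitches switch,
# they do not route, so any inter-VLAN hop must traverse the physical
# network even between VMs on the same host.
def frame_path(same_vswitch: bool, same_vlan: bool) -> str:
    if same_vswitch and same_vlan:
        return "local"     # stays in the box, runs at bus speed
    return "external"      # leaves via a pNIC, subject to wire speed and ACLs
```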