VMware Cloud Community
roneng
Enthusiast

NIC teaming in guest OS

Hi all

I have an ESX 3.5 host connected to the network with 2 NICs set up for load balancing.

I want to connect a VM (Windows Server 2003) to the network with more than 1 Gb/s. Can it be done? This VM needs a lot of network bandwidth.

If it can be done, how should I configure the ESX networking, and how should I configure the VM networking?

Thanks in advance for all answers.

12 Replies
rriva
Expert

Are you sure you really need more than 1 Gb/s of Ethernet throughput from your VM?

I don't think NIC teaming inside a VM is a supported solution.

Why not create more VMs and use network load balancing between them to split (and increase) the network bandwidth across several VMs?

If your VMs are Windows-based, you can use Microsoft NLB or a third-party NLB solution.

Otherwise you can look at a hardware NLB (network load balancer).

Riccardo Riva

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Thank You!

RRiva | http://about.me/riccardoriva | http://www.riccardoriva.com
roneng
Enthusiast

What can I do? It is one server that is very network intensive.

I can't believe there is no way to connect a VM with more than 1 Gb/s.

rriva
Expert

What can I do, too? ;)

The VMware Tools drivers (which enhance your network drivers and speed, among other things) don't support NIC teaming.

So the only supported way to do what you want is to use NLB.

Otherwise, you can skip the VMware Tools Ethernet drivers, use an e1000 virtual NIC, download the Intel drivers, and team the two virtual NICs.

If you want to follow VMware best practice, NIC teaming should only be done on the ESX side, but if you want, you can do it in the guest.

Hope this helps (and hope it works...). ;)

There is no benefit in teaming NICs within a VM. Hardware failover in the guest is not required (as previously stated in another reply), and the bandwidth is not restricted to 1 Gb/s (that is just the driver model), so adding a second NIC won't get you 2 Gb/s.

The bandwidth bottleneck is more likely to occur elsewhere.

If you have 2 x 1 Gb/s NICs allocated to your virtual switch and this VM is the only user of those NICs, it will get the potential of 2 Gb/s even though it only shows 1 Gb/s within the OS.

So: NIC teaming only gives you hassle and instability due to the teaming drivers, and 2 NICs in the VM only add to the overhead on the ESX kernel.

Riccardo Riva

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Thank You!

RRiva | http://about.me/riccardoriva | http://www.riccardoriva.com
rriva
Expert

For any other doubts, take a look here.

Riccardo Riva

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Thank You!

RRiva | http://about.me/riccardoriva | http://www.riccardoriva.com
AWo
Immortal

Some thoughts on this: http://communities.vmware.com/message/842082

AWo

vExpert 2009/10/11 [:o]===[o:] [: ]o=o[ :] = Save forests! rent firewood! =
L0k3_m0r
Enthusiast

Hello friend, for that you can do the NIC teaming of the ports in VirtualCenter and then do the unification of the network interfaces in the guest OS.
depping
Leadership

Not possible, in my opinion:

Load balancing on virtual port ID: a specific port is linked to one NIC for each VM, so 1 Gb/s max.

Load balancing on IP hash: requires port channeling on the physical switch, but it only balances on the IP hash, which means hash A takes path 1 and hash B takes path 2. In other words, not 2 Gb/s at the same time for one stream, but one flow at 1 Gb/s and a second flow at another 1 Gb/s.

Duncan

My virtualisation blog:

kukacz
Enthusiast

Roneng,

most of your options depend on whether there are multiple network clients connecting to the VM, or only one.

If there is only one - the worst-case scenario - then there is perhaps no balancing option in ESX. You could, however, still use a 10 Gb/s connection.

If there are multiple clients and your ESX is set up for load balancing as you wrote, then you're done. Placing one vNIC into a vSwitch with multiple balanced uplinks should bring you the increased throughput.
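
For reference, the second uplink can be linked to the vSwitch from the service console; a minimal sketch, assuming the vSwitch is vSwitch0 and the spare NIC is vmnic1 (both names are just examples):

esxcfg-vswitch -l                    # list existing vSwitches and their current uplinks
esxcfg-vswitch -L vmnic1 vSwitch0    # link a second physical NIC to the vSwitch

The load-balancing policy itself is then set in the vSwitch properties in the VI Client.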

--

Lukas Kubin

Paul_Lalonde
Commander

Easy enough to do if you use the Intel e1000 virtual network adapter since Intel provides software support for NLB, SLB, and EC (Etherchannel).

To use the e1000 adapter with your virtual machine, you need to do either of the following:

1. Create a 64-bit virtual machine but install a 32-bit Windows OS

or

2. - Create a 32-bit Windows VM

- "Unregister" it from ESX / VC

- Edit the .vmx file and add the following lines:

ethernet0.virtualDev = "e1000"

ethernet1.virtualDev = "e1000"

- Re-register the virtual machine with ESX / VC

(You have to unregister and re-register the VM to add the e1000 driver or else ESX / VC will override it with the default vmxnet driver)
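
If you'd rather do those unregister / edit / re-register steps from the ESX service console, a rough sketch would be something like this (the datastore path and VM name below are just examples, adjust them to your environment):

VMX=/vmfs/volumes/datastore1/netvm/netvm.vmx

# 1. Unregister the VM from the ESX / VC inventory
vmware-cmd -s unregister "$VMX"

# 2. Edit the .vmx (vi, nano, ...) and add / change the virtual NIC device type:
#      ethernet0.virtualDev = "e1000"
#      ethernet1.virtualDev = "e1000"
vi "$VMX"

# 3. Register the VM again so the new device type is picked up
vmware-cmd -s register "$VMX"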

Once your VM is up and running and the OS sees the Intel Pro 1000 adapters, all you need to do is install the Intel Pro/1000 network driver set with Advanced Network Services (ANS). This will allow you to create an Etherchannel / 802.3ad bond within the virtual machine and take advantage of two adapters at once.

You can get this driver here:

http://downloadcenter.intel.com/download.aspx?url=/4275/a08/PRO2KXP.exe&DwnldId=4275&lang=eng

Hope this helps.

Paul

jhanekom
Virtuoso

Is there any real benefit to teaming inside the guest vs. teaming at the ESX layer? I can't think of any specific reason why it would be better to team inside the guest...

rutgerb
Contributor

Without teaming in the guest, are you not limited to 1 Gb/s of throughput to any particular VM? I have the same question, which is performance-based. I have the redundancy set up within my ESX servers.

But I am concerned that the 1 Gb/s virtual NIC will be a bottleneck for my file servers.

jhanekom
Virtuoso

The short answer is no.

If you set up pure "source VM ID-based" load balancing on the ESX side, then - yes, you will be limited to 1Gb/s per VM.

If you set up a static 802.3ad / Etherchannel bond by using "ip-based" load balancing on the ESX side and appropriate configuration on the switch side, you can increase this throughput to 1Gb/s per src-ip/dst-ip stream. This means that the traffic between two computers will be limited to 1Gb/s, but that the total aggregate throughput available will be that of the number of network adapters available in the virtual switch. (This limit applies regardless of whether you team on the ESX host or in the VM.)
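
As a rough illustration of why one src-ip/dst-ip pair always stays on one uplink (this is a simplified sketch of the idea, not the exact hash ESX uses; the addresses and uplink count are made-up examples):

# simplified: the uplink is chosen from a hash of source and destination IP
SRC=10       # last octet of the source IP, e.g. x.x.x.10
DST=21       # last octet of the destination IP, e.g. x.x.x.21
UPLINKS=2    # physical NICs on the vSwitch
echo $(( (SRC ^ DST) % UPLINKS ))    # same pair -> same uplink index, every time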

The "downside" to using ip-based load balancing is that it requires connection to a single physical switch (or a switch configured as a stack), possibly reducing redundancy and increasing complexity. Since the majority of implementations today (at these most of those I've been involved with) seem to not push the 1Gb/s limit during normal operation, the general recommendation from VMware is to rather go for a configuration that is simpler and has less of a chance for failure due to someone misconfiguring something somewhere: simple "source-VM ID" load balancing.

Also take a look at the following document, in particular pages 8 and 9: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

