VMware Cloud Community
technobuddha
Contributor

Bottleneck in basic network config, vSphere 5.0 standalone?

Hi all,

I've been running vSphere 5.0 on a single host. We don't have any of the "add-ons".

The first time I ran a backup of our fileserver, I noticed that only vmnic2 shot up, and the virtual server was very slow to copy files while the backup was running. None of the other NICs had any network activity. There isn't any traffic YET on any of the other virtual servers.

Is there anything I can do to alleviate this? I will soon have another network-intensive virtual server added, and from what I can see, if they both back up at the same time nothing is going to work, because everything will go through one of the NICs.

Should I create separate vSwitches for each of the "network intensive" VMs, so that they can have a dedicated network adapter?

[Attachment: Picture 10.png]


Accepted Solutions
rickardnobel
Champion

technobuddha wrote:

should I create separate vSwitches for each of the "network intensive" VMs, so that they can have a dedicated network adapter?

What kind of performance did you get when doing your backups? It might be that the CPU in the VM is overloaded, which would cause the slow file copy.

However, if you do see that the NIC is overloaded you could do some tweaking, like creating a second vNIC for the VM with its own backup network / IP subnet, putting it on a separate portgroup, and manipulating the active/standby vmnics for each portgroup. This is quite easy, but still some configuration.
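A quick sketch of that idea in Python (the portgroup and vmnic names are assumptions, and this only models the selection logic, not ESXi itself): each portgroup overrides the vSwitch failover order, so backup traffic and ordinary VM traffic actively use different physical NICs while still covering for each other on failure.

```python
# Toy model of per-portgroup active/standby NIC overrides on one
# vSwitch (names are made up; this is not VMware code).

failover_order = {
    "VM Network":     {"active": ["vmnic0", "vmnic1"], "standby": ["vmnic2", "vmnic3"]},
    "Backup Network": {"active": ["vmnic2", "vmnic3"], "standby": ["vmnic0", "vmnic1"]},
}

def nic_in_use(portgroup, failed=frozenset()):
    """Return the first healthy uplink in the portgroup's failover order."""
    order = failover_order[portgroup]
    for nic in order["active"] + order["standby"]:
        if nic not in failed:
            return nic
    raise RuntimeError("no healthy uplink left")

print(nic_in_use("VM Network"))                  # vmnic0
print(nic_in_use("Backup Network"))              # vmnic2
print(nic_in_use("Backup Network", {"vmnic2"}))  # vmnic3 (failover)
```

With a layout like this, the backup subnet saturating vmnic2/vmnic3 no longer competes with ordinary VM traffic on vmnic0/vmnic1.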

My VMware blog: www.rickardnobel.se

8 Replies
weinstein5
Immortal

Welcome to the Community - I am assuming you kept the default NIC teaming setting, Route based on originating virtual port ID. What this means is that the outbound NIC is selected based on the virtual port ID assigned to the virtual NIC, so as you can see, the traffic from a VM will only go out a single physical NIC. The virtual port ID is assigned as each VM/virtual NIC is brought online, so depending on how the VMs come online, the traffic from the two VMs will go out different physical NICs.
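To make the pinning behaviour concrete, here is a rough Python illustration (not VMware's actual implementation) of why a single VM's traffic leaves through only one NIC under virtual port ID teaming:

```python
# Rough illustration of "Route based on originating virtual port ID"
# teaming (NOT VMware's real code): each virtual port is pinned to one
# uplink, so all flows from that vNIC share a single physical NIC.

def uplink_for_port(port_id, uplinks):
    """Pin a virtual port to one uplink, e.g. round-robin by port ID."""
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

# Suppose the fileserver's vNIC was assigned port 2: every flow it
# sends -- including the whole backup -- leaves through vmnic2.
print(uplink_for_port(2, uplinks))  # vmnic2
# A second VM brought online later gets another port, hence another NIC:
print(uplink_for_port(3, uplinks))  # vmnic3
```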

The other option would be to change the NIC teaming policy to Route based on IP hash. With this method the outgoing physical NIC is selected based on the originating IP address and the destination IP address, so as you can see, this is only effective if there are multiple destination IP addresses: if there is only a single destination IP address, the traffic will still only go out a single NIC. Another caveat for Route based on IP hash is that the physical switch will need to support 802.3ad link aggregation.
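The hash is often described as an XOR of the low-order bytes of the two addresses modulo the uplink count. The exact algorithm is VMware's; this simplified sketch just shows why one source/destination pair always lands on the same NIC while multiple destinations can spread:

```python
# Simplified IP-hash uplink selection (an approximation, not VMware's
# exact algorithm): XOR the last octets, mod the number of uplinks.

def ip_hash_uplink(src_ip, dst_ip, uplinks):
    src = int(src_ip.rsplit(".", 1)[1])
    dst = int(dst_ip.rsplit(".", 1)[1])
    return uplinks[(src ^ dst) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2"]

# A single src/dst pair always hashes to the same physical NIC:
print(ip_hash_uplink("10.0.0.10", "10.0.0.20", uplinks))  # vmnic0
# Multiple destinations can spread across the team:
for dst in ("10.0.0.21", "10.0.0.22", "10.0.0.23"):
    print(dst, "->", ip_hash_uplink("10.0.0.10", dst, uplinks))
```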

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
technobuddha
Contributor

Thanks for your suggestion.

technobuddha
Contributor

Makes sense. I could increase the CPU allocation and see how it responds. I also have 4 physical NICs on my HP host, so I can also set up a dedicated NIC for backups of all my VMs.

Thanks for your suggestions!

jose_maria_gonz
Virtuoso

Hi there,

Can you be more specific about the performance (MB/s) you are getting during your backups?

It might also be that the CPUs in your ESX/ESXi host are overloaded, which might cause a slow file copy.

I hope I have helped you out

My Company: http://www.jmgvirtualconsulting.com

My Blog: http://www.josemariagonzalez.es

My Web TV show: http://www.virtualizacion.tv

My linkedin: http://es.linkedin.com/in/jmgvirtualconsulting

My Twitter: http://twitter.com/jose_m_gonzalez

VirtualDabbler
Contributor

Changing the question just slightly...

I know that IP hash based teaming will give us the 3 Gbps aggregate bandwidth if I have dozens/hundreds of hosts hitting the server/VM. But let's say that we are doing a file copy/backup between a VM on one server and a VM on another server. IP hash based teaming will only provide 1 Gbps between the two VMs. I don't see a way around this, no matter how many NICs and vNICs are in play.

Is there any way, using some other networking configuration, specialized copy/backup software, or IP addressing/routing tricks, to make the backup actually utilize the 3 Gbps potential? It seems that it should be possible, but I don't see any way to keep VM-to-VM traffic between two hosts from going through one and only one physical NIC.

Now, returning to the original poster's question...

You may wish to consider 10 Gbps NICs, at least in your VMware server(s) and your backup machine. A directly cabled link between the two machines will afford you lots of bandwidth for your backups without having to spend really big money on a 10 Gbps switch.

rickardnobel
Champion

VirtualDabbler wrote:

Is there any way, using some other networking configuration, specialized copy/backup software, or IP addressing/routing tricks, to make the backup actually utilize the 3 Gbps potential? It seems that it should be possible, but I don't see any way to keep VM-to-VM traffic between two hosts from going through one and only one physical NIC.

A possible way could be to connect, for example, three vNICs to a VM, attach them to three different portgroups, each tied to one physical vmnic, and inside the VM set three different IP addresses. If you then have a backup tool that can simultaneously connect to several different IP addresses on the same host, you could get 3 Gbit/s of throughput.

That is just a theory of course, but it could work. 🙂
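The multi-IP scheme could be sketched like this (the addresses and file names are hypothetical, and a real backup tool would manage the streams itself): give the VM one backup IP per vNIC/portgroup/physical NIC and spread the work round-robin across them, so each 1 Gbit/s link carries a share.

```python
# Sketch of spreading a backup across three per-vNIC IPs (addresses
# and file names are made up for illustration).

from itertools import cycle

backup_ips = ["192.168.10.5", "192.168.11.5", "192.168.12.5"]  # assumed

def assign_streams(files):
    """Round-robin the files over one stream per backup IP."""
    streams = {ip: [] for ip in backup_ips}
    for ip, f in zip(cycle(backup_ips), files):
        streams[ip].append(f)
    return streams

plan = assign_streams([f"vol{i}.vmdk" for i in range(6)])
for ip, batch in plan.items():
    print(ip, batch)   # two files per IP, i.e. per physical NIC
```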

My VMware blog: www.rickardnobel.se
ealaqqad
Enthusiast

Well, if you are planning to scale this up, or if this is a test bed for how ESXi performs under heavy network load, then you might want to consider testing LBT, though that only works with the distributed switch and requires the Enterprise Plus license. To find out more about Load Based Teaming (LBT) you can check out:

Why VMware Load Based Teaming (LBT)

Hope this helps.

Enjoy,

Eiad Al-Aqqad

B: http://www.VirtualizationTeam.com

B: http://www.TSMGuru.com
