VMware Cloud Community
heikki_m
Contributor

ESXi 3.5 management network very slow

Hello,

I'm having problems with ESXi (3.5 U2 latest, both embedded and installable) on three different hosts. Hardware is HP DL380 G5. Both NICs on every server are connected to 1000FDX ports without any duplex issues. ESXi network configuration is the default: both vmnic0 and vmnic1 are used for VM Network and Management Network. Switches show no errors on the ports.

VM Network is not showing any performance problems. I'm getting a steady 30-40 MB/s to and from guest machines.

Accessing the management network (copying to a datastore, Converter access, downloading the VI Client, etc.) is painfully slow, ranging from 100 kB/s to 3 MB/s, usually around 1 MB/s. Needless to say, this is very frustrating when, for example, converting existing virtual machines to the ESXi hosts.

Any idea where to start looking for a solution?

107 Replies
giulianozo
Contributor

@KBuchanan:

Hello,

Can you post the part of your script that transfers the files? I use the same server and a VI Perl script to copy the files, but I only get about 3 MB/s.

I use

/usr/bin/vifs --server $server --username $username --password $password --get "$DS/$DSPathVM/$filename" "$DestPath/$filename"
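For comparison, here is a minimal standalone sketch of the same download outside a script. The host, credentials, datastore name, and paths below are placeholders, and the bracketed datastore path is the form vifs usually takes, so adjust it to whatever your script's variables expand to; vifs also has a --dir operation for listing a datastore folder before fetching:

/usr/bin/vifs --server esxi-host --username root --password secret --dir "[datastore1] myvm"
/usr/bin/vifs --server esxi-host --username root --password secret --get "[datastore1] myvm/myvm.vmdk" /tmp/myvm.vmdk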

thanks

giuliano

Elwappo
Contributor

Just an FYI here from another ESXi user. I have learned that the machine in the middle, the one you run the Converter on or copy datastore to datastore from, plays some part in how fast things go. I used to use a laptop in my datacenter to perform these operations, but now I use a desktop, something that doesn't have one of these Celeron-style CPUs, for all my transfers between datastores and conversions of VMs. Even though my laptop has a Gb connection to the host, it was still painfully slow when trying to convert and copy files. The moment I plugged in my quad-core desktop, I saw a 100 percent improvement in moving files and converting machines to the host.

I can only assume the machine running the client or the converter must do more than simply open a connection between ESX hosts and machines you are converting.

KBuchanan
Enthusiast

FYI...our "machine in the middle" that hosts the script files and runs the scheduled clones/moves is a "beefed-up server"...not sure of the performance specs, but it is definitely more than a desktop/laptop configuration!

Elwappo
Contributor

Sounds like you should be golden there, then. You aren't by chance using Broadcom adapters in the middle machine, are you?

KBuchanan
Enthusiast

Here is a link to the files. I created a new thread, since I have been asked for these files; this helps keep the posting nice and clean!

http://communities.vmware.com/thread/193856

dragin33
Contributor

OK, running the latest version of the VMware software available, I am back to thinking this is a problem with my external storage. I had been testing both on a Windows server in the datacenter and on a good PC, though recently I had been testing on the PC. Today I got much better results by using the Windows server to transfer to ESXi with the Infrastructure Client. However, I am still having major speed issues with my external storage.

When transferring to ESXi host A using its ONBOARD disks I got 180-200 Mb/s transfer speed (over gigabit)

When transferring to ESXi host A using its POWERVAULT disks I got < 10 Mb/s (over gigabit)

When transferring to ESXi host B using its ONBOARD disks I got 200-250 Mb/s transfer speed (over gigabit)

When transferring to ESXi host B using its POWERVAULT disks I got < 10 Mb/s (over gigabit)

ESXi Host A:

Dell PowerEdge 1850

1x 3GHz CPU 4GB Ram

External storage uses a PERC 3/DC with a PowerVault 200

ESXi Host B:

Dell PowerEdge 1850

1x 3GHz CPU 4GB Ram

External storage uses a PERC 3/DC with a PowerVault 220

KBuchanan
Enthusiast

dragin33: are you getting 200 megabytes or megabits per sec? I assume that is megabits/sec. The maximum speed of a gigabit circuit is ~125 megabytes/sec (1 gigabit/sec divided by 8 = 125 megabytes/sec).
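A quick sanity check of those units from the command line (decimal prefixes, ignoring protocol overhead):

echo "1000 / 8" | bc    # 1 Gbit/s  -> 125 MB/s theoretical ceiling
echo "200 / 8" | bc     # 200 Mbit/s -> 25 MB/s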

dragin33
Contributor

Yes, bits. In other words, I was getting 20-25% of my 1-gigabit connection, which is a normal transfer speed on that network.

KBuchanan
Enthusiast

My speed improved from 30 MB/s to 50 MB/s by forcing the NICs on the ESXi host, the NFS host, and the network switches to 1000/Full Duplex.
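For reference, a rough sketch of the ESXi side of that using the Remote CLI (vicfg-nics, or vicfg-nics.pl on the Windows RCLI). The host name and vmnic0 are placeholders, and the NFS host and switch ports still have to be set on their own:

vicfg-nics --server esxi-host --username root -s 1000 -d full vmnic0   # force 1000/Full
vicfg-nics --server esxi-host --username root -l                       # verify speed/duplex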

_David
Enthusiast

You can switch the vmnic on the management network port group to see if there are any performance changes with a different NIC. That way you can see if something is wrong with the NIC.
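One rough way to try that at the vSwitch level from the unsupported console (vSwitch0, vmnic0, and vmnic1 are the ESXi defaults here, so adjust to your setup):

esxcfg-vswitch -l                   # list vSwitches and their uplinks
esxcfg-vswitch -U vmnic0 vSwitch0   # unlink the current uplink
esxcfg-vswitch -L vmnic1 vSwitch0   # link the other physical NIC, then retest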

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

dragin33
Contributor

Alright, I got the PowerVault to perform at 12% of the gigabit (120 Mbit/s) by turning on WRITE-BACK instead of WRITE-THROUGH on the PERC. I know this is not recommended because if the server loses power it won't have time to flush the cache, but could that corrupt a whole virtual machine? And what other option do I have? If I use WRITE-THROUGH it's so slow that I cannot load my VM onto it!

CrewOne
Contributor

I had a similar problem.

I have two DL360 G5s, both with 5x 146 GB in RAID-5. One of the machines is from 2006, the other from 2008. The machine from 2006 had a terribly slow management network; I could copy files to it at 3-5 MB/s. The other machine got 40-50 MB/s, which is acceptable over a gigabit link.

Both machines are updated with the latest HP Proliant firmware CD, version 8.40.

It turns out the BBWC module makes all the difference. The 2008 machine has a 512 MB BBWC module, whereas the 2006 machine has a single 256 MB cache module without a battery. When I transferred the module from the 2008 machine to the 2006 machine, the 2008 machine had the same slow management network speed, whereas the previously slow 2006 machine could copy files at 40-50 MB/s.

It is not clear to me why it is so slow without BBWC, but I'm glad I figured it out.

lers
Contributor

Using WinSCP, I'd usually copy from the ESXi box to another PC at around 10 MB/s. I changed the encryption options under SSH to list BLOWFISH FIRST (move it up above AES). Just this one change increased my transfer to 18-20 MB/s.

WinSCP 4.1.8 (DO NOT USE the BETA; it has some problems, and SCP transfers time out for no reason): http://winscp.net/eng/download.php

Create Session, SSH->Encryption Options, click blowfish, click up.

Save

Login

Both PCs have gigabit NICs.
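For what it's worth, the same cipher trick from a plain command-line OpenSSH client looks roughly like this (the host and path are just examples, SSH/SCP access to the host must be enabled as in the WinSCP case above, and newer OpenSSH releases have dropped blowfish-cbc):

scp -c blowfish-cbc root@esxi-host:/vmfs/volumes/datastore1/myvm/myvm-flat.vmdk .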

giulianozo
Contributor

Hello,

I've added a BBWC module (512 MB) to the HP DL360 G5 and got these results:

before/after

copy from local DS to NFS share over gigabit: 3 MB/s / 3 MB/s (500 MB file)

copy from NFS share to local DS over gigabit: 3 MB/s / 50 MB/s (500 MB file)

copy from local DS to NFS share over gigabit: 3 MB/s / 3 MB/s (1000 MB file)

copy from NFS share to local DS over gigabit: 3 MB/s / 30-50 MB/s (1000 MB file)

The NFS share is on a 7200 rpm SATA2 disk, with the following hdparm result:

giuliano

jcasares
Contributor

In my case the problem disappeared mysteriously. I can't pinpoint any reason.

KBuchanan
Enthusiast

We are running VM images locally and cloning to two NFS datastores: an NS20 Celerra and an HP DL320 G5 running Openfiler (v2.3).

All connections are 1G ethernet.

NS20 has 146G 15k drives.

Openfiler has 400G 10k drives.

VM can make clones to Celerra at an avg of 18.1MB/sec

VM can make clones to Openfiler at 15.5MB/sec

Neither is very impressive...but I attribute the performance "bottleneck" to the VM management interface. I can copy from the Openfiler to the Celerra at about 60 MB/s.

But...if VMware didn't constrain the management interface, that traffic would negatively impact the overall performance of the VM host.

Kevin Buchanan

Chief Information Officer

Lexington Memorial Hospital

336-238-4286

kbuchanan@lexmem.org

jcasares
Contributor

post edited

Stupid forum software that processes out of office replies. -_-;

neurosis89
Contributor

Hello,

I'm running Veeam FastSCP 3.0 to copy files to two ESXi datastores.

When I copy a file between datastores, the transfer speed is around 4 MB/s.

With the same product, if I transfer a file from VM1 (on ESXi 1) to the datastore of ESXi 2, the transfer speed increases to 30 MB/s.

I think there is a problem with negotiation on the ESXi management interface.

I saw another post in the VMware Communities related to this problem, but the solution proposed to resolve the issue is for ESX and not for ESXi. (http://communities.vmware.com/thread/189267)

Thanks for your comment.

admin
Immortal

The technical reasons why SCP is slow in ESXi were explained by RenaudL in a post about copying vmdks.

lance

KBuchanan
Enthusiast

I searched but didn't find the article by RenaudL. Do you happen to have it handy?

Kevin
