VMware Cloud Community
heikki_m
Contributor

ESXi 3.5 management network very slow

Hello,

I'm having problems with ESXi (3.5 U2 latest, both embedded and installable) on three different hosts. The hardware is an HP DL380 G5. Both NICs on every server are connected to 1000FDX ports without any duplex issues. The ESXi network configuration is the default: both vmnic0 and vmnic1 are used for the VM Network and the Management Network. The switches show no errors on the ports.

The VM Network is not showing any performance problems: I'm getting a steady 30-40 MB/s to and from guest machines.

Accessing the management network (copying to a datastore, Converter access, downloading the VI Client, etc.) is painfully slow, ranging from 100 kB/s to 3 MB/s and usually around 1 MB/s. Needless to say, this is very frustrating when, for example, converting existing virtual machines to the ESXi hosts.

Any idea where to start looking for a solution?

107 Replies
Curry001
Contributor

Hi,

I'm having exactly the same problem. The hardware is an HP DL380 G5. Sending to the datastore or to a virtual machine, throughput is limited to 1-4 MB/s, but downloading to my machine seems to be limited only by my laptop's disk (i.e. 35 MB/s). I've tried separating the management console from the virtual machines' vSwitch, but that doesn't seem to solve the problem. At home, with modest home hardware, I'm not seeing this issue. Could it come from the HP hardware? Does anyone have an answer for that?

Thx to the community!

boehmenkircher
Contributor

Hi,

I have the same problem on Dell hardware with VMware ESXi. Copying about 200 GB from the VMFS datastore to a local disk with the vifs.pl command takes about 3 days. :(
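For reference, the copy is invoked roughly like this from the RCLI (server name, datastore and paths are just placeholders from my setup, so treat it as a sketch rather than the exact command):

    # pull a VMDK off the VMFS datastore to a local disk via the management interface
    vifs.pl --server esxi01.example.com --username root \
        --get "[storage1] bigvm/bigvm-flat.vmdk" /backup/bigvm-flat.vmdk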

Does anybody have an idea how to reduce that time?

THX

OxyA
Contributor

hello

same here

I saw some threads about this but can't find a solution:

"

The console is basically a VM itself, and you can adjust its resources

much like a VM. Have you increased resources to the console? They are

set low by default because most of the processing is normally consumed

by the VMs. Increased console usage needs increased resources.

"

from this thread: SFTP speed to our X345 server slow!

but I don't know how to do that.

Any solution?

TechFan
Contributor

I think that info would only apply to ESX (not ESXi). At least, I have never heard of ESXi having a real console with configurable resources. It definitely seems capped somehow. Once I figure out how to replicate this and verify it, I'll have to open a case, since we do have support on ESXi.

Curry101
Contributor

Just for fun, has anybody achieved higher speeds than the ones reported here, not from or to VMs, but from or to the management console? Why does the VMware team say nothing about it?

Curry101
Contributor

OK... I've tested ESXi on an IBM x3650 and I get 40 MB/s from the datastore to my desktop, so the problem could be in the HP hardware again.

What could be going on with the HP DL380 G5?

TockerNZ
Contributor

Hi there, I seem to have solved this issue myself on our ML350 with ESXi 3.5: putting the machine into maintenance mode appears to speed up the transfers ten-fold.

Give it a go.

dragin33
Contributor

I have been having this issue too. I have a (not recently patched) server we'll call VM09. VM09 transfers at 120 MB/s over our gigabit network (this is about normal). VM08, however, transfers at only 10 MB/s over the same network. I have tried entering maintenance mode as suggested, but that didn't do anything. The only difference I can see is that VM08 has been patched more recently.

Here are the Version #s:

VM08 (ESX Server 3i 3.5.0, 123629) - Sucks

VM09 (ESX Server 3i 3.5.0, 110271) - OK

TechFan
Contributor

Wow. Does anyone know of a way to downgrade versions without losing datastores?

Dave_Mishchenko
Immortal

You could use vicfg-cfgbackup.pl from the RCLI to backup your config and then run a repair install with the older install CD.
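Something along these lines should do it (host name and paths are placeholders; check the RCLI documentation for the exact flags in your version):

    # save the ESXi host configuration to a local file before the repair install
    vicfg-cfgbackup.pl --server esxi01.example.com --username root -s /backup/esxi01-config.bak

    # after the repair install with the older CD, restore the saved configuration
    vicfg-cfgbackup.pl --server esxi01.example.com --username root -l /backup/esxi01-config.bak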

koit
Contributor

Hi

Does your hardware have a battery backup unit (BBU) on the RAID controller?

I think having one increases performance by about 10 times.

-Bernt

dragin33
Contributor

After some more investigation, I think my problem was the PERC 3 card in the system and/or the PowerVault 200 I have. Copying the same file to the onboard disk instead of the PowerVault, in maintenance mode, was quick. Something in the PERC and/or PowerVault seems to be the bottleneck for me.

KBuchanan
Enthusiast

Hardware and software config: ESXi Server 3.5.0 Build 123629 on an HP ML350 G5. The management network is running on the built-in NIC (Broadcom NetXtreme BCM5703), which is running at 1 Gb and connected to a Cisco 3750 gigabit switch. I have 2 local volumes, each with (3) 15k serial SCSI disks.

Here are the results of my testing:

Local copy between 2 volumes - using vmkfstools.pl:

I can copy a 27,729,642,496-byte file (about 27.7 GB) in 7 minutes, or about 66 MB/s.

Copying local to an NFS mount - using vmkfstools.pl:

I get 1-3 MB/s (it varies).

Using pscp:

When I try to copy the same file over the management link, I get about 6.6 MB/s.

Veeam's FastSCP:

The RTM release isn't supported on ESXi, and I haven't tried the beta, but reports are that it will download at about 25 MB/s.
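For anyone who wants to reproduce the numbers, the tests looked roughly like this (host name, volume and file names are placeholders, and I'm assuming the RCLI vmkfstools.pl mirrors the host-side syntax, so double-check the flags):

    # local datastore-to-datastore clone of the test VMDK
    vmkfstools.pl --server ml350.example.com --username root \
        -i /vmfs/volumes/vol1/testvm/testvm.vmdk /vmfs/volumes/vol2/testvm/testvm.vmdk

    # copy over the management link with PuTTY's pscp (run from the Windows workstation)
    pscp -scp root@ml350.example.com:/vmfs/volumes/vol1/testvm/testvm-flat.vmdk D:\backup\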

My opinion:

The management interface has been purposefully capped by VMware. I believe it is a means of limiting the ESXi system "as is"; i.e., in order to get better performance, you need to upgrade to additional software/licensing/etc. Although I am not aware of any licensed ESXi SKU that will unlock or remove the cap on the management interface, I also don't believe this kind of bulk copying is VMware's intended purpose for it.

Besides, VMware is owned by EMC, and EMC has PLENTY of solutions for protecting VM images on their SAN equipment! So, big deal if it doesn't work well over the management interface - why should they care? They don't want you to protect your VMs by copying the images over that interface anyhow; buy their SAN storage and the multitude of SAN-related software to protect the VM images.

Disclaimer:

We have EMC storage (although we are in the early phase of migrating physical-to-virtual), so I stand on firm ground when I speak of the cost of ownership of EMC licensing for backup, cloning, sync/async replication, etc.

Bottom line (again...just my opinion/observation):

VMware has capped the performance of the management interface. It isn't a hardware configuration problem, unless their drivers for HP servers are "ill-optimized" (i.e. buggy).

Tangata
Contributor

New ESXi installation on a Supermicro X7DCL motherboard with an Adaptec 3805 RAID controller. It works great, but I had been trying unsuccessfully to export two 80 GB guests for two days. I finally succeeded tonight. I ran a straight Cat6 cable between the ESXi server and my backup workstation and forced the onboard Intel Pro/1000 adapter (in the workstation) to operate at 100 Mb/s half duplex. It was the half-duplex setting that appears to have done the trick - I have no idea why. The export was still unacceptably slow (I suspect KBuchanan's observation is spot on), but at least it ran to completion. I would be very interested in anyone else's results if they try this.

Lou

P.S. Maintenance mode did nothing for me.

duonglt
Contributor

Hi KBuchanan,

Sorry to hear about your performance issues; our results differ from your conclusion:

ESXi Server 3.5.0 Build 123629 on HP BL460C (licensed with serial number)

Management network running on the built-in NIC: Broadcom NetXtreme II BCM5708 1000BASE-X (1 Gb connection to a Cisco Catalyst Blade 3020 switch)

Local volume: 2 x Seagate Savvio 10K.2 143 GB (RAID 1)

NFSServer1: Dell PowerEdge 1850 (1 x Fujitsu MAX-series U320 15k) single copper GigE link (5 switch hops to the BL460C, combination of fiber (1.5 miles) and Cat5e copper links)

NFSServer2: HP DL320 G2 (2 x Seagate 7200.11 in RAID 1) single copper GigE link (2 switch hops to the BL460C, Cat5e copper links)

Backup Script: ghettoVCB.sh from BL460C ESXi Service Console

VMDKs to backup:

1) 8 gig (powered Off blank, no OS install) located on BL460C Local Volume

2) 8 gig (powered On Windows XP installation) located on NFSServer2

3) 8 gig (powered Off blank, no OS install) located on NFSServer2

Both NFS exports have been mounted as datastores on the BL460C (one from NFSServer1 and the other from NFSServer2). All operations and performance results originate from the BL460C; the performance figures come from the "Performance" tab in the VI Client.
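For completeness, the backups were driven by ghettoVCB.sh in roughly the following way (the VM name and backup volume are placeholders, and the variable name is from the version of the script we used, so check it against your copy):

    # vms_to_backup contains one VM display name per line
    echo "WinXP-Test" > vms_to_backup

    # inside ghettoVCB.sh the destination was pointed at the NFS datastore, e.g.
    # VM_BACKUP_VOLUME=/vmfs/volumes/NFSServer1_backup

    # run from the (unsupported) ESXi console on the BL460C
    ./ghettoVCB.sh -f vms_to_backup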

Experimental Details:

1) Backup VMDK (1) to NFSServer1: 49Mbyte/sec (send), 1 Mbyte/sec (receive)....50 Mbyte/sec (total transfer rate), 2.98 minutes total backup time

2) Backup VMDK (2) to NFSServer1: 28Mbyte/sec (send), 28 Mbyte/sec (receive)....56 Mbyte/sec (total transfer rate) , 5.67 minutes total backup time

3) Backup VMDK (3) to NFSServer1: 33Mbyte/sec (send), 34 Mbyte/sec (receive)....67 Mbyte/sec (total transfer rate), 4.90 minutes total backup time

There may be an issue with your NFS server settings. The only time we saw really slow NFS performance was when we ran the NFS server in a VM; switching it to a physical server made a big difference.
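For comparison, a Linux NFS export that ESXi mounts without trouble looks something like the line below (path and host name are placeholders; no_root_squash is the option people most often miss):

    # /etc/exports on the physical Linux NFS server (example values)
    /export/vmware_backup   esxi-bl460c.example.com(rw,no_root_squash,sync)
    # re-read the exports afterwards with: exportfs -ra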

VMware ESX 3.x and ESXi Scripts & Resources: http://www.engr.ucsb.edu/~duonglt/vmware
duonglt
Contributor

We took it a step further just for fun:

Converted NFSServer1 (Dell PowerEdge 1850) to an iSCSI target with the same Fujitsu U320 15k disk, running IET Enterprise iSCSI Target software 0.4.16. Same network topology and settings, just a different protocol.

The BL460C is using software iSCSI on the same management interface as in the previous experiment.
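For anyone wanting to repeat this, the IET target definition was along these lines (the IQN, backing device, and credentials are made up for the example):

    # /etc/ietd.conf on the Dell PowerEdge 1850 (example values)
    Target iqn.2008-12.edu.example:pe1850.fujitsu-u320
        Lun 0 Path=/dev/sdb,Type=fileio
        # optionally restrict access: IncomingUser backupuser secretpassword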

VMDKs to backup:

1) 8 gig (powered Off blank, no OS install) located on BL460C Local Volume

2) 8 gig (powered On Windows XP installation) located on NFSServer2

Experimental Details:

1) Backup VMDK (1) to NFSServer1 (software iSCSI): 79Mbyte/sec (send), 1 Mbyte/sec (receive)..... 80 Mbyte/sec (total transfer rate), 2.00 minutes total backup time

2) Backup VMDK (2) to NFSServer1 (software iSCSI): 33Mbyte/sec (send), 33 Mbyte/sec (receive)....66 Mbyte/sec (total transfer rate) , 5.12 minutes total backup time

VMware ESX 3.x and ESXi Scripts & Resources: http://www.engr.ucsb.edu/~duonglt/vmware
KBuchanan
Enthusiast

duonglt: Consider yourself VERY fortunate! I'm quite impressed and would like to duplicate your results!!

I just finished a quick test run of FastSCP - the beta version (3.0.0.211) works with ESXi. It provided a sustained throughput of 30 MB/s; a 37.25 GB VM image took 22 minutes to copy over a 1 Gb connection.

I'll have to try moving the target server onto the same switch to see whether that helps. Right now, there are several hops between the source and target (Cisco 3750, Cisco 6509, Nortel 5510).

Here is a network graph from the Windows Task Manager. This is the best performance I have been able to squeeze out. duonglt, you seem to be sitting on a gold nugget, because I have read SO MANY forum posts about this performance (or lack thereof).

duonglt: What version of ESXi are you using? I read somewhere that a previous version had better performance over the management interface, but it was in another forum posting. I am running the current release of ESXi. (Edited by KBuchanan) Never mind - I see you are running the same (current) release as well.

duonglt
Contributor

Hi KBuchanan,

As noted above, the BL460C is running Build 123629 (ESXi 3.5 U3). The numbers have been duplicated on a separate BL460C.

What are you using for your NFS server? When we tried using Windows as the NFS server (Windows Services for UNIX), we got <5 MB/s transfers from the same BL460C. The Windows NFS server was a NAS appliance bought straight from HP running Windows Storage Server 2003. Similar results (<5 MB/s) were also obtained when running a Linux NFS server in a virtual machine.

VMware ESX 3.x and ESXi Scripts & Resources: http://www.engr.ucsb.edu/~duonglt/vmware
KBuchanan
Enthusiast

duonglt - sorry for asking about the version again. After I posted, I saw you had already included it in your earlier response.

We have tried MS SFU on both physical and virtual platforms, and there wasn't a significant difference in speed. The physical server had a throughput of 3.2-3.5 MB/s and the virtual one 2.8-3.1 MB/s. I only tried the virtual server "for kicks", just to see if it would be any different. Although there is a difference, it isn't a practical one. I can transfer files from an XP workstation to both the physical and virtual NFS servers and saturate the NIC, so I know both NFS servers can handle the transfer at much higher speeds. The common thread is the ESXi interface. I am confident this performance (or lack thereof) is a type of safety feature for the sake of host management.

If I had to guess, I believe that VMware capped the performance - after all, it is the "management interface", and you should be able to access the system at any time; you wouldn't want a file transfer to effectively create a denial of service!

I tried to set up another VMkernel network/switch... that was a near disaster. I used an IP address from my main network, and without thinking about the resulting routing problem, I was unable to reach the main network afterwards. That was a BAD thing, because none of the VMs could reach it either. I had been working remotely and had to physically drive to the server site and connect the VI Client to the host on the management network! Never again. I should have put the other VMkernel network on its own VLAN in the physical switch topology.
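If I try it again, the VLAN'd setup would look roughly like this from the RCLI (the vSwitch, port group name, VLAN ID, and addresses are made up for the example; verify the flags against your RCLI version):

    # create a second vSwitch, give it an uplink, and add a VMkernel port group
    vicfg-vswitch.pl --server esxi01.example.com --username root -a vSwitch1
    vicfg-vswitch.pl --server esxi01.example.com --username root -L vmnic1 vSwitch1
    vicfg-vswitch.pl --server esxi01.example.com --username root -A "VMkernel-Backup" vSwitch1
    # tag the port group with its own VLAN so it can't clash with the main network
    vicfg-vswitch.pl --server esxi01.example.com --username root -v 20 -p "VMkernel-Backup" vSwitch1
    # give the VMkernel interface an address in the backup subnet
    vicfg-vmknic.pl --server esxi01.example.com --username root -a -i 192.168.20.10 -n 255.255.255.0 "VMkernel-Backup"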

giulianozo
Contributor

Hello,

Same problem here on an HP DL360 G5 server (single quad-core E5430, 10 GB RAM, 5 SAS 15k disks in RAID 5, latest ESXi).

Is there an alternative to FastSCP that runs on Linux from the command line?

I'll try VIMA and check whether the transfer speed is still so slow.

Giuliano
