I'm having problems with ESXi (3.5 U2 latest, both embedded and installable) on three different hosts. Hardware is HP DL380 G5. Both NICs on every server are connected to 1000FDX ports without any duplex issues. ESXi network configuration is the default: both vmnic0 and vmnic1 are used for VM Network and Management Network. Switches show no errors on the ports.
VM Network is not showing any performance problems. I'm getting steady 30-40MB/s to and from guest machines.
Accessing the management network (copying to the datastore, Converter access, downloading the VI client, etc.) is painfully slow: anywhere from 100kB/s to 3MB/s, usually around 1MB/s. Needless to say, this is very frustrating when, for example, converting existing virtual machines onto the ESXi hosts.
Any idea where to start looking for a solution?
Check the equipment on the management side first. I had a situation where my management network actually showed up in ESXi as 100Mb rather than 1Gb (it turned out to be a physical switch problem).
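A quick way to confirm what speed ESXi actually negotiated is to list the physical NICs. This is a sketch assuming you have access to the host's console (or the Remote CLI, where the equivalent command is vicfg-nics):

```
# List physical NICs with their observed link speed and duplex.
# You want to see "1000" / "Full" for vmnic0 and vmnic1; a NIC showing
# "100" here would point at a switch or cabling problem like mine.
esxcfg-nics -l
```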
If this isn't production, can you simplify the setup? Do as Sherlock Holmes did: eliminate the things that it wasn't, leaving only the thing that it was. It is too easy to end up with problems like multiple routes to the host.
The management network isn't restricted. It is the network that has access to remote storage: NFS, iSCSI and SAN. Make sure you don't have multiple management ports on the same subnet.
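To check for overlapping management ports, you can list the VMkernel interfaces and the host's routing. Again a sketch for the host console (vicfg-vmknic is the Remote CLI equivalent):

```
# List VMkernel interfaces with their IP/netmask -- two interfaces on
# the same subnet can give you asymmetric paths to the host.
esxcfg-vmknic -l

# Show the VMkernel default gateway/routes for the same reason.
esxcfg-route
```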
I'm having the same issue with slow performance.
I've tried two different servers with the same result. I get a maximum of 3MB/s with Converter, SCP and datastore upload.
Any solution to this problem yet?
Same here. I have tried all versions of ESX 3.x with the same results. I was able to get 10-11MB/s with ESX 2.5.2. I am sure they did something to limit the speed when copying with scp.
4MB/s is still pretty crap.
I'll install ESXi 3.5 tonight on a test box at home and see what speed I can get out of it.
I have the same problem, and I think it is by design. I get between 1MB/s and 10MB/s using Converter. When I transfer files using the datastore browser or SCP it is dog slow. I raised this with VMware as a support request but they didn't flag it as a known issue. I have only used ESXi but have had the same issue on all servers.
Indeed this is a major problem. I also saw speeds of about 1MB/s, even using the embedded scp and copying files inside an IBM BladeCenter H from one box to another.
Is there any general advice on how to configure the management port and policy to get better throughput?
With a mounted NFS datastore (async) I can sustain about 405Mb/s when cp'ing from the local datastore. (6 drive SAS Raid-5 -> 8 drive SATA Raid-5)
This is over a single 1Gb/s NIC at 1500 MTU. The biggest problem I found with ESXi was that backups were horribly slow. I've tried everything: SCP (too slow), VI (I/O error on large files), Converter (stalling, too slow). iSCSI was pretty slow too compared to NFS. I ran extensive benchmarks on various drive options and different RAID configurations to figure out where the problem was. It turns out that all that was needed was to enable async on the NFS export. This works great for backups, but I wouldn't recommend it for VMs, as there's a high chance of data loss if the NFS server goes down for whatever reason.
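For reference, the async export described above would look something like this on a Linux NFS server (the path and subnet here are examples, not from my setup):

```
# /etc/exports on the backup NFS server (example path and subnet).
# "async" lets the server acknowledge writes before they reach disk --
# much faster for backup traffic, but data can be lost if the server
# crashes, which is why I only use it for backups, not running VMs.
/export/esx-backups  192.168.1.0/24(rw,async,no_root_squash)

# Re-export after editing:
#   exportfs -ra
```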