VMware Cloud Community
gi-minni
Contributor

Performance bottleneck with ESXi U3

I tested a mass scp copy of VMware images (40 GB each) across SATA disks between blades within an IBM BladeCenter H, with a 1:3 ratio of reading to writing blades.

I watched the performance graphs, especially disk throughput, which varies from 7.2 MBps to 17.2 MBps.

If I look at the network throughput, the value is nearly the same on all blades (4.5 MBps). Please note that all blades are identical and I installed the same ESXi kernel on every one of them.

My question now is: why can't I get higher disk throughput when copying inbound over the midplane of a BladeCenter, and are there BIOS/disk/kernel config parameters to tweak?
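As a back-of-the-envelope check of what the 1:3 fan-out implies, here is a small sketch using only the numbers from this post (the script is illustrative arithmetic, not part of the original test):

```shell
#!/bin/sh
# Illustrative arithmetic only: one source blade reads a 40 GB image and
# streams it over scp to three writer blades concurrently, at the observed
# per-blade network rate of 4.5 MB/s.
IMAGE_GB=40
STREAMS=3

# Total data leaving the source blade per run (MB)
total_mb=$(( IMAGE_GB * 1024 * STREAMS ))

# Wall-clock estimate per image at 4.5 MB/s (rate kept as tenths for
# integer-only shell arithmetic: 4.5 MB/s == 45 tenths)
minutes=$(( IMAGE_GB * 1024 * 10 / 45 / 60 ))

echo "data pushed per run: ${total_mb} MB"
echo "per-image copy time at 4.5 MB/s: ~${minutes} min"
```

At 4.5 MB/s per stream, each 40 GB image takes roughly two and a half hours, so even small per-stream gains would matter at this scale.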

3 Replies
Ken_Cline
Champion

Are you performing the copies from within the service console?

What tool are you using to do the copies?

Ken Cline

Technical Director, Virtualization

Wells Landers

TVAR Solutions, A Wells Landers Group Company

VMware Communities User Moderator

Ken Cline, VMware vExpert 2009. Blogging at: http://KensVirtualReality.wordpress.com/
gi-minni
Contributor

Hi Ken,

I am using ESXi Build 123629 and perform the copies natively from the command line using the shipped dropbear scp client, because I assumed that copying inbound over the BladeCenter midplane itself, from ESXi to ESXi, would give the best speed in terms of network disruption and latency. Be aware that my workstation blades have no SAS/FC adapter, nor can I add one later.

The weird thing is that when the copies to some blades finish, the speed (disk/net) on the remaining ones does not ramp up, even though the bandwidth inside the BladeCenter midplane would permit it. The values remain more or less the same. Frankly, isolating this performance problem is a complex task. Again: the first copy runs at 17.2 MBps, the last one at 7.2 MBps (disk write bandwidth), and the network bandwidth is always 4.5 MBps.

Let me put it this way: if I need to copy 100 GB or even more to 13 blades, disk/net bandwidth becomes critical. Is it ESXi itself, the network, the disks, the underlying blades, or the BladeCenter itself causing the problem? For my part I will first update the firmware (blades/BladeCenter), and then we'll see.

I read here in the forum that one should switch off hyperthreading inside ESXi if the processor does not use it. Is this correct? Are there other flags I must be aware of (scheduler, resources, disk, net)?
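To put numbers on the "100 GB to 13 blades" scenario, here is a quick sketch based only on the 4.5 MB/s per-stream rate reported above (illustrative arithmetic, not a claim about how the copies are scheduled):

```shell
#!/bin/sh
# Illustrative arithmetic: how long does 100 GB take at the observed
# 4.5 MB/s per-stream rate, per blade and serialized over 13 blades?
DATA_GB=100
BLADES=13
RATE_TENTHS=45        # 4.5 MB/s, kept as tenths for integer math

data_mb=$(( DATA_GB * 1024 ))
secs_per_blade=$(( data_mb * 10 / RATE_TENTHS ))

echo "per blade: ~$(( secs_per_blade / 3600 )) h"
echo "serialized over ${BLADES} blades: ~$(( secs_per_blade * BLADES / 3600 )) h"
```

Even fully parallelized, each blade would still need over six hours at that rate, which is why the per-stream ceiling, not the midplane capacity, looks like the limiting factor here.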

ltyp
Contributor

It seems that it's ESXi itself. I'm experiencing exactly the same problem (~5 MB/s network throughput and low HDD speed), even though I'm not using a blade server.

I've tried other virtualization solutions, and they all outperform ESXi in I/O speed.
