VMware Cloud Community
buhle
Contributor

Slow disk write performance (10MB/s?)

Hello,

I'm having some issues with disk performance after deciding to virtualize my home environment. The specs on my "server" are not great, but I am having trouble accepting that I can ONLY get a maximum of 10MB/s out of my disks. I looked around quite a bit on the forums but have not found anyone else having the same issue, so I can only assume it is something I have done.

Hardware:

Core i5

8GB of DDR3 RAM

Intel Pro PT dual port adapter

Both disks are 1TB 7200RPM Seagate drives

dd if=/dev/urandom of=/etc/test.txt
^C236330+0 records in
236330+0 records out
121000960 bytes (121 MB) copied, 12.8659 s, 9.4 MB/s

For fun, I also added a second physical drive and provisioned a disk from it to the VM; it had the same speed. VMware Tools has been installed on the guest system. I gave the VM 4GB of memory and 4 CPUs.

On my other system which acts as a samba server (non-virt), I am able to get 100MB/s write with the same physical disks.

As I write the end of this post, an Openfiler VM hosted on this same ESXi box just pushed 100MB/s over iSCSI. Is there a magic config file I have to edit in Linux to get the disks to operate at normal speed?

9 Replies
firestartah
Virtuoso

Hi

When you created the Linux machine, which controller did you select for the disk drive(s)?

Gregg

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Gregg http://thesaffageek.co.uk
buhle
Contributor

Hi Gregg,

Thanks for your response. I am using LSI Logic Parallel. I also tried LSI Logic SAS and BusLogic (which didn't work at all), but did not notice any difference in speed.

cyclooctane
Enthusiast

Hi buhle

Have you had a look at the CPU usage using top?

This is because /dev/urandom is fairly CPU intensive on rapid writes and may be a bottleneck.

If this is the case, /dev/zero may be an option (although it is not as thorough a test).

Also try altering the block size of the test.

I found in testing that I got more realistic speeds with a 0.5MB or 1MB block size for dd.

In your example the command to do this would be

dd if=/dev/urandom of=/etc/test.txt bs=1M

This is because the standard block size for dd is 512 bytes.

The standard block size for VMFS-5, meanwhile, is 1MB.

This means that for every 512 bytes that dd attempts to write to the disk, the ESXi host actually writes 1MB.

If you do the maths on this, you will quickly discover that the overhead is several thousand percent.

Now in reality it is not quite that bad, because the RAID controller can cache and compensate for some of that; however, it is still very significant.

But give a larger block size a go and see how it performs.

Regards

Cyclooctane

PS: This is based on my understanding of the way file system block size works, so please correct me if I am wrong.
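To make the comparison concrete, here is a small sketch (the file names under /tmp are just examples, not from this thread) that writes the same 10MB of data with the default 512-byte block size and again with a 1MB block size, so the speeds dd reports can be compared directly:

```shell
# Default block size (512 bytes): the 10MB arrives as 20480 small writes
dd if=/dev/zero of=/tmp/dd-small.bin bs=512 count=20480

# 1MB block size: the same 10MB in only 10 large writes
dd if=/dev/zero of=/tmp/dd-large.bin bs=1M count=10

# Both files are identical in size; only the I/O pattern differs.
# Compare the MB/s figures dd printed, then clean up.
rm /tmp/dd-small.bin /tmp/dd-large.bin
```

On a datastore that favours large writes, the second command should report a noticeably higher throughput.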

buhle
Contributor

Hi Cyclooctane,

Thanks for that information, it was indeed urandom causing the problem. My CPUs were maxed out.

dd if=/dev/zero of=/test2/test.txt

^C1647704+0 records in
1647704+0 records out
843624448 bytes (844 MB) copied, 5.1646 s, 163 MB/s

So this leaves me with a different problem: I have /test2 exported as a Samba share, and I can only get 10-12MB/s of transfer speed. I have tried the various adapters such as E1000 and VMXNET3 (VMXNET2 didn't work at all), but they haven't made a difference. I thought it might be the protocol, so I transferred a file using WinSCP, which had even worse transfer speeds: only 4MB/s.

Any thoughts on what may cause this?

J1mbo
Virtuoso

It's because the disk write cache is either disabled by default, or is being actively disabled by ESXi.  Under 4.x, SATA performance really depended on the drive default, as I guess ESXi didn't do anything with it, but I don't know if this has changed with v5 or not.

But either way, by far the best approach is to put in a RAID controller with battery-backed write-cache (search 'BBWC' on these forums for more info), even if it's only presenting a single disk (as a RAID-0 volume), but with write-back caching policy.  You can pick up Dell Perc 5i or Perc 6i RAID controllers on eBay for almost nothing, complete with the battery.  Of course you could add a second 1TB disk and run a mirror, which will offer significant performance benefits too.

Hope that helps.

cyclooctane
Enthusiast

Hi buhle.

Sorry about the late reply, I was away from computers all weekend 🙂

Can you post your Samba config file?

(If you can, please redact IP addresses, usernames and passwords, as this is a public forum.)

In Red Hat based distros this is found in /etc/samba/smb.conf.

If not, have a look at the following forum posts; they may help you with your Samba configuration.

http://forums.opensuse.org/english/get-technical-help-here/network-internet/434925-samba-server-spee...

http://ubuntuforums.org/showthread.php?t=1612842

http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/speed.html

Regards

Cyclooctane

buhle
Contributor

Message was edited by: buhle

cyclooctane
Enthusiast

Possible ideas for changes.

Change the line

socket options = TCP_NODELAY

to

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=65536 SO_RCVBUF=65536

And see how that goes.

This will change the TCP buffer settings that Samba uses (you will need to restart Samba after applying this).

In some cases this will result in a performance boost; in others it does not. It depends on where the bottleneck is.
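For reference, a sketch of how that line might sit in smb.conf (the [global] section header and the restart command are my assumptions based on a typical Red Hat era setup, not something stated in this thread):

```
[global]
    # Larger TCP send/receive buffers for Samba's sockets
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=65536 SO_RCVBUF=65536
```

After saving the file, restart Samba with something like `service smb restart` so the new socket options take effect.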

Also, checking out a battery-backed RAID controller is a good idea if you do not already have one.

Regards

Cyclooctane

Konan_RUS
Contributor

For me, this helped:

read raw = no

write raw = no
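These are smb.conf settings as well; a sketch of where they would go (placing them in [global] is my assumption, the post does not say):

```
[global]
    # Disable raw SMB reads/writes, falling back to standard-sized packets
    read raw = no
    write raw = no
```

As with the socket options above, restart Samba after editing the file for the change to apply.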
