Greetings,
I've been using ESXi with Openfiler in a lab environment for over a year, and I recently set out to improve disk performance after adding a 4th WD RE4 drive to the Adaptec 5805 in my Openfiler box, since I've generally not been happy with per-VM performance. Overall things run OK with upward of 20 low-demand VMs at a time, but I want to understand what ESXi is doing to seemingly "throttle" per-VM disk performance, because my RAID5 array and NICs have plenty of headroom beyond what a single Windows VM achieves.
Brief storage and test overview:
Openfiler: 3 NICs, separate subnets, jumbo frames, write-back cache, read cache, blockio, 4x 2TB SATA in RAID5, 256K stripe
ESXi: 3 NICs + vmkernel ports, 3 active paths per LUN, round robin, jumbo frames
Datastore used for the test: 2TB, 8MB block size
hdparm on Openfiler shows ~350 MB/s reads
Test VM: Windows Server 2008 x64, 40GB vmdk
Guest disks have been tested so far with IOmeter and HDTune. To keep things simple I've stuck to sequential read and/or write. A VM set to high shares gets ~45 MB/s reads (per ESXi's disk performance charts, not the VM's own test results). On Openfiler, iostat -d -x 5 3 shows the array only 13% utilized during this test.
Now, if I run multiple VMs with multiple HDTune tests, the Openfiler RAID and NICs do become utilized, and performance in my test/reference VM stays around ~40 MB/s (not degrading much). My vmkernel NICs hit 70-75 MB/s while multiple tests are running. I can also run HDTune and an IOmeter 32K read test at the same time in the test VM and nearly double that VM's read rate to ~80 MB/s.
So why exactly doesn't a single HDTune and/or IOmeter run get close to this level?
Could someone please explain to me what exactly ESXi is doing with the VM's SCSI disk in this scenario?
Thanks!
Check your Storage I/O Control settings; it sounds like it's keeping one VM from using more than its share.
When you run both IOmeter and HDTune, the VM reaches double the read throughput. So in IOmeter, on the Disk Targets tab, what value did you enter for the number of outstanding I/Os per disk? Can you try increasing that value?
I knew it had to be something simple.
I had just been using 1 outstanding I/O. Going to 2, 4, and 8 with 4 workers, I'm getting 70 MB/s, 110 MB/s, and 125 MB/s. Much better results. Can't believe I overlooked that one, but I did!
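That scaling is roughly what Little's law predicts: throughput ≈ outstanding I/Os × I/O size ÷ per-request round-trip latency, until the path saturates. A back-of-envelope sketch, assuming a 64 KiB request size and ~1.4 ms per-request latency (inferred from the ~45 MB/s result at 1 outstanding I/O, not measured values):

```python
# Back-of-envelope: why outstanding I/Os matter for sequential throughput.
# ASSUMPTIONS (not measured): 64 KiB per request, ~1.4 ms round-trip latency,
# inferred from ~45 MB/s at 1 outstanding I/O in the tests above.

IO_SIZE_MIB = 64 / 1024   # 64 KiB expressed in MiB
LATENCY_S = 0.0014        # assumed per-request round-trip latency, seconds

def ideal_throughput_mib_s(outstanding_ios: int) -> float:
    """Little's law (concurrency = throughput * latency), solved for throughput."""
    return outstanding_ios * IO_SIZE_MIB / LATENCY_S

for qd in (1, 2, 4, 8):
    print(f"QD={qd}: ~{ideal_throughput_mib_s(qd):.0f} MiB/s (ideal, ignores saturation)")
```

Real scaling is sub-linear (70/110/125 MB/s above, versus the ideal doubling) because per-request latency grows as queues fill and the gigabit paths start to saturate.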
So I take it HDTune doesn't know how to queue multiple outstanding I/Os against the VM's SCSI disk.
Do you have any suggestions for other disk benchmarking tools that work well inside a VM?
I thank you for your assistance.
Hello,
Thanks for your feedback. Glad I could help.
I usually use IOmeter; it does a good job.
Have a look at this one for more info
http://communities.vmware.com/docs/DOC-3961
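Another option for in-guest testing is fio, which lets you control queue depth directly via iodepth. A minimal job-file sketch, roughly matching the 32K sequential-read, 4-worker, queue-depth-8 settings discussed above (the filename and size are placeholders, and windowsaio is the async engine for Windows guests; use libaio on Linux):

```ini
; Hypothetical fio job file -- adjust filename/size for your guest.
[seq-read-qd8]
rw=read            ; sequential reads
bs=32k             ; 32 KiB requests, as in the IOmeter test
iodepth=8          ; outstanding I/Os per job
numjobs=4          ; 4 workers
direct=1           ; bypass the guest page cache
ioengine=windowsaio
size=1g
filename=fio-testfile
```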
With kind regards,
Paul
On 21 Aug 2011 at 00:29, SayNo2HyperV <communities-emailer@vmware.com> wrote:
VMware Communities <http://communities.vmware.com/index.jspa>
ESXi 4.1 Guest disk policies
Reply from SayNo2HyperV <http://communities.vmware.com/people/SayNo2HyperV> in Performance & VMmark - View the full discussion <http://communities.vmware.com/message/1814318#1814318>