I am seeing very poor performance on my VMs - I think.
I have five EqualLogic 3800s, all in a group running RAID 10. I then have an ESX 3.5 host with two HBAs, jumbo frames, the whole nine yards.
I have my LUNs carved up into 1 TB chunks.
When I do things like install a server, the servers appear to run fast, but when I fire up IOmeter, my disk I/O according to that tool is terrible. For example:
I have a four-year-old CLARiiON with eight 10K RPM drives in RAID 10, and I get 1,800 IOPS out of it with the 32K 100% read test. On my virtual machine attached to the five-array EqualLogic SAN I get half that, and my SATA SAN gets half that again.
Now, what I can't figure out is that everything "appears" to be very fast, so I can't tell what's going on.
I imported my SQL Server that was sitting on my CLARiiON into the virtual environment, and the performance was reduced by at least 4x as measured by query times. However, I suspect the import process may have caused issues that would impact the machine's performance.
I'm pretty much at a loss as to where to look for my performance problems. A while back I set up a server with a straight tie-in to my SAN, without ESX, just to play with it, and the performance was pretty amazing, so I suspect my performance issues are due to virtualization. Any suggestions on where to look, or tests to run?
Oh, and something else of note: I can fire up IOmeter on several VMs attached to the same SAN. If a given test maxes out at, say, 2,700 IOPS, I can fire up the same test on four other VMs attached to the same SAN and they all top out at the same number, with no reduction in IOPS as I bring more tests online. So I know my SAN is not bottlenecking; if it were, I should have seen a drop in IOPS across the board as I brought more tests online.
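The scale-out check described above boils down to a simple rule: if the SAN back end were saturated, adding VMs would hold total IOPS roughly flat, so per-VM IOPS would fall. A minimal sketch of that reasoning, using hypothetical numbers rather than actual measurements from this thread:

```python
def san_is_bottleneck(per_vm_iops, tolerance=0.10):
    """Given per-VM IOPS measured as identical test VMs were added
    one at a time, report whether per-VM throughput dropped below the
    first VM's baseline (i.e., the shared back end saturated)."""
    baseline = per_vm_iops[0]
    return any(iops < baseline * (1 - tolerance) for iops in per_vm_iops)

# Each VM keeps hitting ~2,700 IOPS as more VMs are added:
print(san_is_bottleneck([2700, 2690, 2710, 2705, 2695]))  # False: SAN not saturated
# A saturated SAN would instead show per-VM IOPS falling:
print(san_is_bottleneck([2700, 1400, 950, 700]))          # True
```

This only rules out the shared array as the bottleneck; a per-host limit (HBA, queue depth, virtual controller) can still cap each VM individually, which is consistent with what was eventually found below.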
Also of note: when I switch the tests from sequential read to, say, 50% random, the CLARiiON falls behind, and the VM does about double the CLARiiON's IOPS.
Sequentially, the CLARiiON smokes my SAS EqualLogic array (the CLARiiON is physical, the EqualLogic is behind a virtual file system), but when I do random I/O, the EqualLogic eats the CLARiiON for breakfast. Could this be a problem with my HBAs?
Never mind, figured it out.
Care to share with the community?
Ken Cline
Technical Director, Virtualization
VMware Communities User Moderator
1. The SAN was full.
2. The imported DB server had a BusLogic controller.
Thanks! That's what makes this forum so great - people share their experiences so that others can learn. Sometimes it's the little things that frequently get overlooked that make the most difference...
Ken Cline
So how does BusLogic figure into this? Is it slower than the LSI Logic controller?
adam
I don't remember exactly what the answer was, but it's something like the BusLogic controller being single-threaded versus the LSI Logic being multi-threaded, or something along those lines. Maybe someone who knows can answer for sure.
But I created a new VM with the LSI Logic controller and my issues went away.
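For reference, the virtual SCSI controller type lives in the VM's .vmx configuration file. A sketch of the relevant lines, assuming the first SCSI controller (`scsi0`):

```ini
; Virtual SCSI controller type for the VM's first SCSI bus
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"   ; "buslogic" is the slower legacy choice
```

Changing this on an existing VM may also require the matching driver inside the guest OS, which is why creating a fresh VM with LSI Logic (as described above) is the simpler route.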
I'm curious how you like IOmeter - is it any good? I'm trying to use it on my CX380 storage to check out disk performance, etc., but the client doesn't have the budget and doesn't allow freeware or open source. That's kind of dumb, but it's their call. The only tools I can use are the built-in storage health check and the ESX health check, which is limited.
If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Regards,
Stefan Nguyen
iGeek Systems LLC.
VMware, Citrix, Microsoft Consultant
I like it, but I really only use it for comparisons when checking performance - for example, how many IOPS I can squeeze out of RAID 10 vs. RAID 50, or SATA IOPS vs. SAS, etc.
I don't really use anything for testing sustained transfers, since my shop doesn't do anything along those lines, and when you get into SATA vs. SAS for sustained sequential I/O, it's really pointless.
I'm after maximum IOPS, and IOmeter is great for "lab"-style comparisons to see what the trade-off between RAID 50 and RAID 10 gets you. Beyond that, I look inside the app for performance with Perfmon and other tools inside SQL Server. For me, IOmeter testing is step one, to make sure I can achieve the expected "lab" performance; if my disk subsystem is hitting good rates in IOmeter, then I can look inside the apps for further benchmarking, testing, and tweaking.