VMware Cloud Community
james411
Enthusiast

Poor IO performance under ESXi compared to physical machine

Hi,

Hoping someone might be able to help me. I've googled and tried lots of troubleshooting but can't seem to come up with a solid answer.

I am repurposing an old server for use as a second ESXi host. The host has the following specs:

2x Intel 2.2GHz

48GB RAM

8x 7200rpm Enterprise SATA HDD in RAID 10

The storage controller is a LSI 9260-8i. I've flashed it to the latest firmware 12.15.0.205.

I wanted to do some stress testing of the storage system before installing ESXi, so I installed Windows 2012 directly onto the array and ran a battery of SQLIO tests. I also ran CrystalDiskMark.

I measured the following maximums from my tests:

SQLIO pattern        Throughput
Sequential read      654 MB/s
Sequential write     394 MB/s
Random read          129 MB/s
Random write          83 MB/s
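(For reference, the runs were along these lines; the parameters and file name are illustrative rather than my exact batch file, and the test file should be pre-created large enough to get past the controller cache:)

    rem 64KB sequential read: 4 threads, 8 outstanding I/Os, 60 seconds
    sqlio -kR -fsequential -b64 -t4 -o8 -s60 -LS testfile.dat
    rem 64KB sequential write
    sqlio -kW -fsequential -b64 -t4 -o8 -s60 -LS testfile.dat
    rem 8KB random read / random write
    sqlio -kR -frandom -b8 -t4 -o8 -s60 -LS testfile.dat
    sqlio -kW -frandom -b8 -t4 -o8 -s60 -LS testfile.dat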

I then installed ESXi 5.5U3 to a USB flash drive and formatted the RAID 10 array as a datastore. I installed Windows 2012 as a guest VM and gave it 48 GB of RAM and 8 vCPUs to mimic the physical host's specs. I installed the latest LSI driver (6.605) for ESXi, then reran the same tests that I did on the physical machine. The results were disappointing:

SQLIO pattern        Throughput
Sequential read      414 MB/s
Sequential write     256 MB/s
Random read          103 MB/s
Random write          73 MB/s

As you can see, there is quite a difference, particularly for the sequential operations. I'm trying to figure out why this would be before I put the host into production. I would not expect a ~35% drop just from virtualization.

I've rerun the tests several times, both physical and virtual. I've tried thin and eager-zeroed thick disks, reinstalling ESXi, and the built-in LSI driver (5.34), and nothing makes a difference. The results are always very similar, with physical being much better than virtual.
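(Two things worth checking for anyone chasing the same issue. First, confirming which driver the host actually loaded, from the ESXi shell; the VIB name below assumes the stock LSI megaraid_sas package:)

    # list storage adapters and the driver bound to each
    esxcli storage core adapter list
    # show the installed driver VIB and its version
    esxcli software vib list | grep -i megaraid

(Second, watching where the latency is added while the benchmark runs, via esxtop:)

    # run esxtop, then press 'd' for the disk adapter view
    # or 'u' for the per-device view
    esxtop
    # DAVG/cmd = latency at the device/controller
    # KAVG/cmd = latency added inside the VMkernel
    # GAVG/cmd = total guest-visible latency (DAVG + KAVG)

High KAVG points at the VMkernel/vSCSI layer; high DAVG points at the controller or the disks themselves.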

Here is a snapshot of the CrystalDiskMark results. Physical on the left, virtual on the right:

crystaldisk-phy-1gb.JPG

Anyone have any ideas?

7 Replies
rcporto
Leadership

Have you tested your VM using the PVSCSI paravirtual SCSI controller? And with an RDM disk instead of virtual disks?

Take a look at the following for additional details about PVSCSI: Which vSCSI controller should I choose for performance? - VMware vSphere Blog - VMware Blogs
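As a rough sketch, switching controller 0 to PVSCSI looks like this in the VM's .vmx (the guest needs VMware Tools installed so the pvscsi driver is available):

    scsi0.present = "TRUE"
    scsi0.virtualDev = "pvscsi"

You can also change the SCSI controller type from the vSphere Client when editing the VM.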

---

Richardson Porto
Senior Infrastructure Specialist
LinkedIn: http://linkedin.com/in/richardsonporto
james411
Enthusiast

Ah, sorry, I left that detail out. Yes, I tried both the PVSCSI and LSI adapters: similar results, still far behind physical. I could try an RDM just to see the results, but I typically don't use RDMs with my VMs, and I wouldn't expect virtual disks to show this much of a gap versus physical.

james411
Enthusiast

So, I'm just wondering: does this seem wrong enough that I should open a case with VMware support? Thoughts?

JarryG
Expert

IMHO there is nothing wrong with your results. You simply cannot expect the same numbers in your configuration, because there is one more layer in the stack (disk -> VMFS5 filesystem -> VMDK file -> NTFS filesystem). If you need higher I/O transfer rates, pass the disk through to the VM and use it directly.
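For example, a physical-mode RDM can be created from the ESXi shell roughly like this (the naa identifier below is a placeholder for your LUN):

    # find the device identifier for the raw disk
    ls /vmfs/devices/disks/
    # create a physical-mode RDM mapping file that points at it
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/testvm/rdm.vmdk

Then attach rdm.vmdk to the VM as an existing disk.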

Or get proper drives. With all due respect, 7200rpm SATA drives are *NOT* enterprise drives at all. Get 10k SAS, or even better, SSD. And while I'm at it: traditional drives do not have a constant transfer rate across the whole platter, because outer tracks hold more sectors than inner ones. This ratio can go as high as 3:1, so it does matter where (physically) on the drive you store your benchmark data or vmdk file.

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉
james411
Enthusiast

Yes, I understand there is another layer to go through, but for something that is supposed to be so lightweight, I wouldn't expect that big of a drop.

For comparison, I just installed Hyper-V on the same physical machine and created a test VM with a dynamically expanding VHDX; here are the results. They're much more in line with the numbers from the physical box itself, which leads me to believe something is wrong on the VMware side...

crystaldisk-hyperv-1gb.JPG
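(If anyone wants to reproduce the Hyper-V side, the disk was just a plain dynamic VHDX; roughly this in PowerShell, with the path, size, and VM name as placeholders:)

    # create a dynamically expanding VHDX
    New-VHD -Path "D:\test\test.vhdx" -SizeBytes 100GB -Dynamic
    # attach it to the test VM
    Add-VMHardDiskDrive -VMName "TestVM" -Path "D:\test\test.vhdx"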

unsichtbare
Expert

Relatively slow spindles configured as RAID10.

These days I very rarely install an ESXi host that has DAS, but I recall terrible performance from RAID10 in the old days!

Also, where is the ESXi install? On the same RAID10, a separate array or flash media?

I think I would do the following:

  • Stripe the disks with RAID5+1
  • Re-install ESXi on flash media/USB
  • Redirect log files to the DAS (see the sketch below)
  • Check performance
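The log redirect can be done from the ESXi shell; a sketch, assuming a datastore named datastore1:

    # point ESXi syslog at a directory on the DAS datastore
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
    # reload syslog so the change takes effect
    esxcli system syslog reload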
+The Invisible Admin+ If you find me useful, follow my blog: http://johnborhek.com/
dwigz
Enthusiast

One thing worth looking at: do you have SMB signing turned on? That seems to KILL file copies in virtual environments. If that is the case, you can look at tweaking the TCP stack; that can help sometimes. It's a trial and error thing, though. Turning off RSC (Receive Segment Coalescing) often helps. If you can, turning off SMB signing often speeds things up.
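(Quick way to check from PowerShell in the guest; disabling signing is a security trade-off, so treat this as a test only:)

    # show current SMB signing settings on the server side
    Get-SmbServerConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature
    # temporarily stop requiring signing, for testing
    Set-SmbServerConfiguration -RequireSecuritySignature $false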
