VMware Cloud Community
COS
Expert

Very slow Disk I/O ESXi 5.5 U2

I have an HP DL360 G6 with the below hardware

CPU: 2x 5530 (2.5GHz quad-core)

RAM: 16GB

RAID Controller: P410i w/1GB FBWC with UltraCap

RAID: 4 HP 6Gb/s 10K RPM SAS drives in RAID 5

SD Card: 16GB from Frys

I configure as above, install ESXi 5.5 U2 on just the spindles, build up a 2012 R2 VM, and get consistent speeds of 48-54MB/s using the SQLIO utility.

That's fine and expected.

Now, I plug in an SD card I scooped up from Frys: FRYS.com | PNY (pretty decent R/W speeds of 90MB/s) and install ESXi 5.5 U2 on the SD card.

I blow away the current RAID set from the old config and create the same 4 disks in a RAID 5 as well.

I spin up a 2012 R2 VM on the RAID 5 spindles and I get consistent speeds of 14-16MB/s.

As a sanity check, I rebuilt both test setups above twice, and the results were the same.

That's garbage speed there. It's not a showstopper because this is lab testing, but I really wanted to use every bit (byte?) of the RAID 5 storage.

Has anyone encountered this? Any idea how to fix it? Or am I wasting my time and should I just stick with the spindles?

Thanks

15 Replies
HawkieMan
Enthusiast

I am assuming you are using an SD card reader with USB connectivity, in which case you have the following problems compared to a pure disk setup:

1. You share the same USB bus for disk reads/writes, the VMs' virtual memory, ESXi swap space, ESXi logging, and more (all of it handled by ESXi before being committed to disk)

2. You are limited to a single read/write channel at a time; with pure disks you have a heavy-duty RAID controller handling it all for you

3. A RAID controller gives you good caching of read/write requests, but with the SD card you run without any managed cache

So my suggestion is NOT to run from an SD card if you want performance. If you wish to tweak such a solution into somewhat fair performance, you will need to move log storage, swap space, and memory cache onto the disks, and taken together those changes cut the legs out from under any possible benefit.

Also keep in mind that the performance numbers for SD cards are quoted with single-file I/O in mind, not the massive read/write load you get with an OS running from the card.

COS
Expert

Well that sucks........lol

So what's the ESXi benefit of having the SSD built onto the motherboard of these HP G6, 7, 8 & 9 servers?

I'll stick to the spindles unless someone has an easy way to tweak it so it performs the same. I just thought I'd give it a go. I had my doubts about the out-of-the-box install of ESXi on the SD card from the beginning.

Thanks

HawkieMan
Enthusiast

The benefit of having SD cards is to keep the config on them, which makes upgrades easier. Also, when running Windows on those HPs, you can use the SD card as slow flash memory.

The benefit you can get on ESXi with SD cards is to use one as a buffer or flash memory, but with today's SSD prices I suggest using those instead. Keep one thing in mind when it comes to SSDs: they are not made for RAID, and you need high-quality SSDs if you wish to run them in RAID; in that case you will get more use out of SAS disks in RAID with an SSD as flash memory.

YucongSun
Contributor

I think everyone else is answering the wrong question: he is saying that installing ESXi 5.5 to a hard drive versus to an SD card affects his VM's performance (the VM is always on a HD).

I don't know the reason, but the previous answers are all way off the rails.

JarryG
Expert

"...I blow away the current RAID set from the old config and create the same 4 disks in a RAID 5 as well.

    I spin up a 2012 R2 VM on the RAID 5 spindles and I get consistent speeds of 14-16MB/s..."

A few years ago I had the same problem (although it was not an ESXi host): I removed (erased) a RAID array and created it again with the same disks. I also got terrible performance, and even initialisation took much longer. The best explanation I found was this: when re-creating the same array, old metadata is found on the disks and reused, but in the new array the RAID stripes are no longer aligned properly with sector boundaries. Then, for a single read op, many read-write-read-write ops must be done.

This is what helped me: originally I had 6 disks in RAID 5. I removed one disk, created the array (this time RAID 5 with only 5 drives), checked performance (as expected), deleted the array, added the disk back, and re-created the full array (RAID 5 with 6 drives). This way every RAID configuration was different from the previous one, so it was guaranteed that nothing from the old array was reused for the new one. This worked for me, but I'm not sure it will help you...
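The alignment penalty described above is easy to sanity-check with arithmetic: a request whose start offset is shifted off a stripe boundary crosses one extra stripe, and on RAID 5 each extra stripe touched means another parity read-modify-write. A minimal sketch (the 64 KiB stripe size and 4 KiB shift are assumptions for illustration):

```python
# Count how many RAID stripes a single request touches. A misaligned
# request crosses one more stripe than an aligned request of the same
# size; on RAID 5 that extra stripe costs an extra read-modify-write
# for parity. (The 64 KiB stripe size is an assumed value.)
STRIPE = 64 * 1024

def stripes_touched(offset: int, length: int, stripe: int = STRIPE) -> int:
    """Number of stripes spanned by a request at `offset` of `length` bytes."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return last - first + 1

print(stripes_touched(0, 64 * 1024))         # aligned 64 KiB write: 1 stripe
print(stripes_touched(4 * 1024, 64 * 1024))  # shifted by 4 KiB: 2 stripes
```

So if stale metadata leaves every stripe shifted off the sector boundaries, every full-stripe write turns into two partial-stripe updates, which matches the roughly halved-or-worse throughput reported here.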

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉
COS
Expert

Let me further clarify....

Every time I install ESXi on only the RAID spindles (not on the SD card) and my VM is on the spindle datastore, the VM's disk performance is the expected 48-54MB/s.

Every time I install ESXi on an SD card and my VM resides on a spindle datastore, my speeds are 14-16MB/s.

For each test I re-create the RAID set the same.

I don't believe any RAID metadata gets reused, because when I go from an SD test to a spindle test the speeds increase, and when I go from a spindle test to an SD test it's slow.

Thanks

mineroller
Contributor

I am pretty sure I am having the exact same issue as the OP:

I have a lab server which I have configured a 4-disk RAID-5 array.

My setup is:

- I run ESXi 5.5 U2 from a USB stick

- LSI 9690sa Controller with Battery Backup (Full R/W cache enabled).

- 4x 1TB SATA disks, 7200rpm

- RAID-5 with 64KB stripe

- just one Windows 2008 R2 VM on the RAID

I get about 10-15 MB/s max speed on this thing as well.

During VM bootup/operation I get even worse performance: ~300 IOPS / 1.5-2MB/s.
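For what it's worth, those two figures are self-consistent: at small request sizes, throughput is just IOPS multiplied by the average request size. A minimal sketch (the 4 KiB and 8 KiB request sizes are assumptions for illustration, not measured values):

```python
# Relate an IOPS figure to MB/s throughput for a given average request size.
# The request sizes used below are assumed, not measured.

def throughput_mb_per_s(iops: int, request_bytes: int) -> float:
    """Throughput in MB/s implied by an IOPS rate at a fixed request size."""
    return iops * request_bytes / 1_000_000

# 300 IOPS at a 4-8 KiB average request size brackets the observed 1.5-2 MB/s:
print(throughput_mb_per_s(300, 4 * 1024))  # ~1.2 MB/s
print(throughput_mb_per_s(300, 8 * 1024))  # ~2.5 MB/s
```

So the low MB/s figure during bootup is just the small-block IOPS ceiling showing up in different units, not a separate problem.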

On a native setup (no virtualisation, running Windows/Linux directly on the box), I can get up to 120-150MB/s sequential throughput with thousands of IOPS using the same 4x SATA drives. For the life of me, I can't figure out what could be wrong.

I would love to see some other ideas ...

Alistar
Expert

Hello,

I have seen a similar issue in another post on these forums: the user claimed that ESXi's IOPS/throughput are faster when booted from SSD than from traditional spindles (or an SD card?).

However, one thing has come to mind: have you redirected the scratch disk (VMware KB: Creating a persistent scratch location for ESXi 4.x and 5.x) and the syslog directory (VMware KB: Configuring syslog on ESXi 5.x) to be stored on your spindles? Also check your swap location. Maybe ESXi has some "throttling" mechanism for data logging. I might be completely wrong, but I think it's worth a try.
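From the ESXi shell, the redirects those two KBs describe boil down to a couple of commands. A sketch, assuming a datastore named datastore1 (substitute your own datastore name and paths):

```shell
# Point the scratch location at a directory on a VMFS datastore
# (takes effect after a host reboot; ".locker" is an example name).
mkdir /vmfs/volumes/datastore1/.locker
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker

# Send ESXi syslogs to the same datastore and reload the logger.
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
esxcli system syslog reload
```

If the slowdown really is ESXi throttling itself while logging to slow SD media, moving scratch and syslog off the card should show up immediately in the SQLIO numbers.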

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/
AFairthorne
Contributor

I also seem to be having this issue with more or less the same setup, albeit with slightly different drives in use.

I've also thrown in an SSD just to build on, to speed things up a bit, but I find the performance appallingly slow! Compared to when I was using the box with spinning disks before, this is unusable. It looks like I'll need to go back to running ESXi from the drives for now.

It does seem very strange, though, as countless people recommend running from SD cards, so it must be a config issue somewhere, or a setting that needs tweaking.

I have relocated my logs to a directory created on the disks, so no issues there.

My SD card is a 32GB Class 10 Extreme rated at 45MB/s, which I've seen in practice when I was using it for videos before.

It would be interesting to hear if anyone else has any thoughts.

COS
Expert

I found that NOT using the SD card for the ESXi install gave me consistently faster IOPS, so I avoid using the SD slot in our DL360 servers.

I've read that a lot of people do run that way, but I wonder if they have run I/O tests comparing an ESXi install on the SD slot against an ESXi install on spindle storage.

Personally and professionally, I would opt for the best I/O performance, considering an SD card only costs a few dollars........lol

HendersonD
Hot Shot

Interesting discussion. I have an IBM BladeCenter with 8 blades. These blades came out of the box with an internal (I believe 16GB) solid-state stick with ESXi preinstalled. We have upgraded ESXi several times and are currently running ESXi 5.5 U2 on all 8 blades. We have a Pure Storage array SAN and use three of the blades for our server farm and five for our View environment. Great I/O all the way around. The only thing the small internal solid-state stick does is boot ESXi; my understanding is that all other I/O goes to my SAN and very little goes to the internal stick once vSphere is fully booted.

A few years back we had older blades in this same BladeCenter chassis, and each blade had two internal mirrored hard drives with ESXi installed on them. At that point we had a NetApp SAN. My current setup is a lot faster, but of course a lot has changed: new SAN, new 10-gig switches, and new server blades, so it is really apples and oranges.

COS
Expert

"These blades came out of the box with an internal I believe 16GB solid state stick with ESXi preinstalled."


Now, if you have spare time and another blade, test the I/O of a blade with ESXi on either mirrored spindles (10K RPM SAS) or mirrored 2.5" SSDs.

My theory is that the faster the I/O on the ESXi install drive, the faster the resulting I/O for your VMs.

I will test the theory again later with iSCSI datastores and see if it still holds true.

See my post here on the tests I ran....

Anyone ever try this.....Disk I/O Performance test on DL360 G6 with P410i w/1GB FBWC

zoran4afc
Contributor

Forgive me if I'm missing the point here, but why install ESXi to an SSD or SD card at all?

I installed ESXi 5.5 and earlier on an 8 or 16GB USB stick. The installer recognizes that the installation media is a USB stick, so ESXi does not run from the USB stick but from a RAM disk.

No speed problem at all.

You only need the scratch disk to be placed on a datastore.

Use an SSD, if you have one, as a read-cache disk or as a cache disk in your virtual storage.

COS
Expert

A USB stick is fine and all, but someone can come by and steal your stick, and you might not know it until it's waaaay too late.

Have you compared I/O performance with ESXi installed on your USB stick versus ESXi installed on a mirrored set of SAS 6Gb/s 10K RPM drives?

If so, which one gave the best I/O performance for your VM?

IMHO, just because "you can" doesn't mean it's the best use of the hardware. I always go for what gives me the best performance.

I'm going to test some Gen9 DL360s and see what gives the best VM performance for disk I/O.

VM on Spindle drives and ESX on SD card Slot?

VM on Spindle drives and ESX on USB Flash drive?

VM on Spindle drives and ESX on Spindle drive?

I've tested this on DL360 G6 servers, and the results are in this thread...

Anyone ever try this.....Disk I/O Performance test on DL360 G6 with P410i w/1GB FBWC

Maybe this changed in the Gen9s? Hopefully.
