z1b903
Contributor

Is VMFS (5.1) really capable of high I/O with multiple hosted .VMDK files?


Is this a simple question? I have a customer (IHAC) that recently migrated from ESX 4.1 to ESXi 5.1 on a two-host cluster. Along the way, their storage back end blossomed to 7 VMFS volumes. I'll call the 1.8TB volume "Snow White" and the six 399GB volumes the "6 Dwarfs". The customer claims they needed to create the 6 Dwarfs because of "disk contention" issues, typically around workloads like SQL transaction logs.

The interesting thing is that all 7 VMFS volumes are backed by a single 5.2TB RAID 5 storage pool of eleven 15K RPM spindles (IBM RSSM SAS RAID controller).
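For context, here is a rough back-of-the-envelope sketch (in Python) of what a pool like that can deliver. The ~175 IOPS per 15K spindle and the classic RAID 5 write penalty of 4 are textbook assumptions, not measurements from this array:

# Back-of-the-envelope IOPS estimate for the shared pool.
# Assumptions (not from this environment): ~175 IOPS per 15K RPM
# spindle, RAID 5 write penalty of 4 back-end IOs per front-end write.
SPINDLES = 11
IOPS_PER_SPINDLE = 175
RAID5_WRITE_PENALTY = 4

def effective_iops(read_fraction):
    # Front-end IOPS the pool can sustain at a given read/write mix.
    raw = SPINDLES * IOPS_PER_SPINDLE
    cost_per_io = read_fraction * 1 + (1 - read_fraction) * RAID5_WRITE_PENALTY
    return raw / cost_per_io

for mix in (1.0, 0.7, 0.5):
    print(f"{int(mix * 100)}% reads: ~{effective_iops(mix):.0f} IOPS")

Whatever the exact numbers, the point stands: every one of the 7 volumes is drawing from the same ~1,000-2,000 IOPS pool.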

I'm proposing to consolidate the 6 Dwarfs into one VMFS volume to eliminate the stranded storage spread across them. Looking at the improvements in the VMFS-5 file system, its capabilities seem to easily cover my need to store 10-12 .VMDK files. My hesitation comes from old advice I've found searching the Internet, dating back to the vSphere 4.1 days, that recommended exactly what the customer did: creating small, special-purpose VMFS volumes for .VMDK files hosting SQL logs and the like.

My only theory is that there must be a constraint in the data-path capacity from the ESXi kernel to each storage volume/LUN, i.e. the per-device queue. Do you agree that this is a constraint to keep in mind?
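To make that concern concrete, here is a minimal sketch of the per-LUN queueing math, assuming a per-device queue depth of 32 (a common HBA default; the actual DQLEN per device is visible in esxtop's disk-device view, and these numbers are illustrative, not from my hosts):

# Assumed per-LUN device queue depth; check DQLEN in esxtop.
QUEUE_DEPTH_PER_LUN = 32

def total_queue_slots(num_luns):
    # Outstanding IOs the host can keep in flight to the array.
    return num_luns * QUEUE_DEPTH_PER_LUN

print(total_queue_slots(7))  # current layout: 224 in-flight IOs
print(total_queue_slots(2))  # Snow White + one consolidated LUN: 64

If that math is right, consolidation shrinks the host's total in-flight capacity, even though the spindles behind it are unchanged.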


Also consider that in this case the physical storage back end is a single 5.2TB RAID 5 set behind one storage processor... Thanks for your thoughts!

1 Reply
mcowger
Immortal

EMC and others have demonstrated a single VM pushing over 1 million IOPS... ESXi is almost never the bottleneck.

Your disks probably are.  Also, consolidating into a single LUN is bad.  1 LUN = 1 queue = serialized IO = bad latency.

Keep the LUNs - the extra queues are probably helping reduce latency.
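To put rough numbers on that (all assumed for illustration, not measured in this environment): by Little's Law, the IOPS you can push through full queues is bounded by queue depth divided by latency, so fewer LUNs means a lower ceiling before IOs start stacking up in the VMkernel:

# Little's Law sketch: IOPS ceiling with full queues is
# num_luns * queue_depth / average_latency.
# All numbers are illustrative assumptions.
QUEUE_DEPTH = 32     # common per-device default; varies by HBA driver
LATENCY_S = 0.010    # assume ~10 ms array response under load

def iops_ceiling(num_luns):
    return num_luns * QUEUE_DEPTH / LATENCY_S

print(iops_ceiling(1))  # one big LUN:  ~3,200 IOPS ceiling
print(iops_ceiling(7))  # seven LUNs: ~22,400 IOPS ceiling

Either ceiling is above what 11 spindles can sustain, so the disks are the real limit; but anything queued beyond one LUN's slots waits in the kernel, and that wait shows up as latency.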

--Matt VCDX #52 blog.cowger.us