VMware Cloud Community
Khue
Enthusiast

More Storage Performance Discussion: SVC & 3.5

Before I get much further with my ESX environment, I wanted to get some opinions from people who might have a similar configuration and validate some best practices for storage layout. First, to be completely clear: I am using an IBM SVC. The storage configuration in question is an mdisk group backed by two RAID-5 arrays in the same enclosure (EXP 810) on different controllers. One array is made up of 7 disks, the other of 8. All disks are FC, and the SAN is connected at 4 Gbps end to end. That works out to roughly 1950 IOPS; it's a loose number based on an IBM guideline, but you get the picture. All of my ESX hosts see both vdisks served from that mdisk group (two 1.77 TB vdisks; think LUNs). I have 5 ESX hosts.

With that explained, should I change my storage structure to more, smaller vdisks (again, LUNs), or is the configuration I am using okay? Each vdisk currently has roughly 975 IOPS available to it (and both sit across all 13 spindles). Moving to more, smaller vdisks would essentially take that 1950 IOPS budget and divide it among the N vdisks you create.
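
For anyone checking the math, here is a minimal sketch of the budgeting described above. The ~150 IOPS per FC spindle figure is an assumption that happens to match the quoted numbers (13 effective spindles x 150 = 1950), and the helper functions are hypothetical, purely to illustrate how the budget divides as you add vdisks.

```python
# Rough IOPS budgeting for the mdisk group described above.
# Assumption: ~150 IOPS per FC spindle, a loose guideline figure
# (13 effective spindles x 150 = 1950, matching the number quoted).

IOPS_PER_FC_SPINDLE = 150  # assumed guideline value, not a measured one

def group_iops(effective_spindles: int) -> int:
    """Rough total IOPS budget for the backing arrays."""
    return effective_spindles * IOPS_PER_FC_SPINDLE

def per_vdisk_iops(effective_spindles: int, vdisks: int) -> float:
    """Even split of the group budget across N vdisks."""
    return group_iops(effective_spindles) / vdisks

total = group_iops(13)  # -> 1950
for n in (2, 4, 8):
    print(n, "vdisks:", per_vdisk_iops(13, n), "IOPS each")
# 2 vdisks: 975.0 each, 4 vdisks: 487.5 each, 8 vdisks: 243.75 each
```

Keep in mind this is only a budgeting view: every vdisk still sits on the same 13 spindles, so a busy vdisk can still eat into its neighbors' share.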

Just looking for opinions. Thanks!

4 Replies
awong505
Enthusiast

This truly depends on your environment and workloads. Your mileage will vary, and the design may be driven by either ease of management or performance. For example, if you have one server with high I/O requirements (Exchange or SQL, say), it can impact other VMs running on the same shared spindles. In that case you may choose to run fewer other VMs on that LUN or LUNs, but it is really a performance sizing exercise: the IOPS requirements of the VMs versus the IOPS you can provide, given the number of spindles available and the different ways you can carve them up.

How much segmentation or "isolation" you need can also be a large driving factor, since you may have one or many applications that need guaranteed performance and have to be treated a bit differently. Alternatively, you may not need any of that, and ease of management may be more desirable, in which case fewer, larger LUNs make more sense. If you do take that route, it is generally a good idea to mix VMs with high and low IOPS requirements evenly across the LUNs so that you don't end up with one LUN holding all the heavy hitters. A rough sketch of that balancing follows.
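
To make that last point concrete, here is a minimal sketch of one way to spread VMs across LUNs by IOPS demand: a simple greedy, heaviest-first assignment. Nothing here is SVC- or VMware-specific, and the VM names and IOPS figures are made up for illustration.

```python
import heapq

# Hypothetical per-VM IOPS demands, for illustration only.
vm_iops = {"sql01": 400, "exch01": 350, "app01": 120, "app02": 110,
           "file01": 90, "web01": 60, "web02": 55, "dev01": 30}

def balance(vms: dict, lun_count: int) -> list:
    """Greedy heaviest-first: always place the next-busiest VM
    on the LUN with the least IOPS assigned so far."""
    heap = [(0, i) for i in range(lun_count)]  # (assigned_iops, lun_index)
    heapq.heapify(heap)
    placements = [[] for _ in range(lun_count)]
    for vm, iops in sorted(vms.items(), key=lambda kv: -kv[1]):
        load, i = heapq.heappop(heap)
        placements[i].append(vm)
        heapq.heappush(heap, (load + iops, i))
    return placements

for i, vms in enumerate(balance(vm_iops, 4)):
    print(f"LUN {i}: {vms}")
```

Heaviest-first greedy placement is a standard load-balancing heuristic; it keeps any single LUN from collecting all the heavy hitters without needing an exact optimum.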

Khue
Enthusiast

Thanks for your input, I appreciate it. It seems like the best thing for me to do is to go ahead and partition the large LUNs into smaller ones. Even though each smaller LUN will have a smaller IO budget, it may be less of a headache in the future. Thankfully, SVMotion will make this process a lot easier.

RParker
Immortal

Thanks for your input, I appreciate it. It seems like the best thing for me to do is to go ahead and partition the large LUNs into smaller ones. Even though each smaller LUN will have a smaller IO budget, it may be less of a headache in the future. Thankfully, SVMotion will make this process a lot easier.

The SIZE of a LUN is irrelevant. If your VMs are 256 GB files, then a 1 TB LUN isn't that big; it only holds 4 VMs. The number of active files on the SAN (vmdk, iso, etc.) determines the performance, NOT the size of the LUN. If you have a lot of very small VMDKs, then the right LUN size depends on those VM sizes; there is no hard and fast number for LUNs. It depends on the SAN, the connectivity, the number of disk spindles, lots of factors. It's not JUST size that should be taken into consideration.

rsingler
Enthusiast

I agree with the other two guys. It's a matter of the IOPS you want to achieve out of your configuration. Splitting things up isn't necessarily going to give you better performance, just isolation of IO. You just need to review your overall VM IO profile and ensure what you have created will work for you.

The only other thing to think about with LUNs that large is SCSI reservations. As stated before, if your VMs are 256 GB each, then you will only have 4 VMs per LUN. If they are 20 GB each, though, that would be 50 VMs on each LUN, which will most likely cause issues with SCSI reservations. Generally you want to start out around 10-15 VMs max per LUN and scale up from there by adding low-IO VMs until you get to where you want to be. Eventually you will hit a limit. A quick sizing check along those lines is sketched below.
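
As a back-of-the-envelope check on that reasoning, here is a minimal sketch. The 10-15 VM ceiling is this thread's rule of thumb rather than a hard VMware limit, and the LUN and VM sizes are the illustrative ones from the example above.

```python
# Back-of-the-envelope VMs-per-LUN check, per the rule of thumb above.
MAX_VMS_PER_LUN = 15  # this thread's rule of thumb, not a hard VMware limit

def vms_per_lun(lun_gb: int, vm_gb: int) -> int:
    """How many VMs of a given size fit on one LUN."""
    return lun_gb // vm_gb

for vm_gb in (256, 100, 20):
    n = vms_per_lun(1024, vm_gb)  # 1 TB LUN, as in the example above
    flag = "ok" if n <= MAX_VMS_PER_LUN else "risk of SCSI reservation contention"
    print(f"{vm_gb} GB VMs -> {n} per LUN ({flag})")
# 256 GB -> 4 (ok), 100 GB -> 10 (ok), 20 GB -> 51 (risk of contention)
```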

If I were you, and I didn't need all 1900+ IOPS for a single VM, I would split the LUNs into 4 x 512 GB and go from there. But then again, I'm not you... ;)
