Shanew1
Contributor

SAN storage configuration recommendations

We have an HP SAN with two fiber switches and an MSA2012FC storage array with twelve 146 GB drives. I am trying to decide how to configure the storage for our VI3 environment and am looking for recommendations.

At this point I am considering two RAID 10 arrays (6 x 146 GB drives each) with either one (~440 GB) or two (~220 GB each) LUNs defined per RAID array. The VMs are set up in an HA configuration, so I think it would be best to have two separate RAID arrays rather than one; that way I can split the HA pairs between the RAID arrays. If I create only one RAID 10 array and more than one drive fails in that array, we could lose everything. In the future we will be adding another MSA2000 enclosure, which will give us another 12 146 GB drives. At that point we can expand each RAID 10 array to 12 drives to give us more storage space and performance.
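For reference, a minimal sketch of the usable-capacity arithmetic behind those LUN sizes (assuming the full 146 GB per drive is available, which the array's formatting overhead will reduce slightly):

```python
# Rough usable-capacity figures for the layout described above.
# Simplification (mine, not from the post): each drive contributes exactly
# 146 GB, ignoring formatting/metadata overhead on the real MSA.

DRIVE_GB = 146

def raid10_usable(drives: int) -> int:
    """RAID 10 keeps half the raw capacity (every drive is mirrored)."""
    return (drives // 2) * DRIVE_GB

array_gb = raid10_usable(6)                                  # one 6-drive RAID 10 array
print(f"Per 6-drive RAID 10 array: {array_gb} GB usable")    # ~438 GB
print(f"  as one LUN : 1 x {array_gb} GB")                   # ~440 GB option
print(f"  as two LUNs: 2 x {array_gb // 2} GB")              # ~220 GB each option
print(f"Two arrays total: {2 * array_gb} GB usable")         # ~876 GB
```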

Does this sound like a good plan? Any advice would be appreciated.

9 Replies
JasonVmware
Enthusiast

Basically you have to consider what is more important: space and ease of management, or redundancy. With one large RAID 10 group, if more than one drive fails you could lose everything, but you get all of your space in one large pool and it is easier to manage. With two smaller RAID 10 groups you have more redundancy, but you have to manage more RAID groups / datastores.

If you are concerned about hard drive failures and have no spares in your RAID 10s to cover a failed drive, I would create two RAID 10 groups. Later, if you get more storage, you can always Storage VMotion your VMs and files around.
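To make that trade-off concrete, here is a back-of-the-envelope sketch (my own simplification, assuming independent and equally likely drive failures) of what a second random failure means in each layout:

```python
# After one drive has already failed in a RAID 10 pool, what are the odds that a
# second random failure lands on its mirror partner (the only fatal case), and
# how much data is exposed? Assumes the full 146 GB per drive is usable.

DRIVE_GB = 146

def second_failure_risk(drives_in_set: int, sets: int):
    usable_per_set = (drives_in_set // 2) * DRIVE_GB
    remaining = drives_in_set * sets - 1      # drives left after the first failure
    p_fatal = 1 / remaining                   # only the failed drive's mirror partner is fatal
    return p_fatal, usable_per_set            # at most one set's data is at risk

for drives, sets, label in [(12, 1, "one 12-drive RAID 10"),
                            (6, 2, "two 6-drive RAID 10s")]:
    p, exposed = second_failure_risk(drives, sets)
    print(f"{label}: P(second failure is fatal) ~ {p:.1%}, data at risk ~ {exposed} GB")
```

Under this simple model the odds of a fatal second failure are about the same either way; the main difference is that two smaller groups limit how much data a worst-case double failure can take out.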

Also, I'm not sure how your fiber is hooked up, but when possible have your fiber connections going to two different fiber switches to remove the possibility of a single point of failure at the switch level. Even though ESX only does failover pathing, if an entire fiber switch fails and all of your fiber connections are hooked up to that one switch, your environment will go down. Just a thought.

kjb007
Immortal

With 12 drives available at 146 GB apiece, I would opt for RAID 5 and increase the amount of storage I have available instead. I'd create 2 x 5-drive RAID 5 sets and leave 2 drives as hot spares. Unless you have high I/O requirements, RAID 5 should give you more than adequate performance and won't cost you half of your usable space.
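For comparison with the original RAID 10 plan, a quick capacity sketch (same simplification as above: the full 146 GB per drive is counted, ignoring formatting overhead):

```python
# Usable capacity of the RAID 5 layout suggested above vs. the RAID 10 plan.

DRIVE_GB = 146

def raid5_usable(drives: int) -> int:
    return (drives - 1) * DRIVE_GB            # one drive's worth of capacity goes to parity

r5 = 2 * raid5_usable(5)                      # two 5-drive RAID 5 sets, 2 drives kept as spares
r10 = 2 * (6 // 2) * DRIVE_GB                 # the two 6-drive RAID 10 arrays from the original plan
print(f"2 x 5-drive RAID 5 + 2 hot spares: {r5} GB usable")   # ~1168 GB
print(f"2 x 6-drive RAID 10              : {r10} GB usable")  # ~876 GB
```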

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
admin
Immortal

Some observations on this configuration are:

  • I don't see a hot spare allocated for these arrays. I would recommend a hot spare per disk tray, or per controller if spares can span trays in the same loop.

  • Consider the rebuild times when deciding whether one or two RAID 10 arrays are warranted, especially since you don't have a hot spare allocated and will be in a critical state if you lose one drive in either array.

  • I am not familiar with the MSA, but consider manually load-balancing ownership of the LUNs across the two controllers to increase throughput.

  • Make some rough calculations on VMs per datastore/LUN when deciding how many datastores or LUNs to have (a rough example follows this list).
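As an illustration of the last point, here is a minimal sizing sketch (the ~20 GB average VM size and 20% free-space headroom are assumptions for the example, not recommendations):

```python
# Rough VMs-per-LUN estimate. The average VM size and headroom figures are
# placeholders to illustrate the calculation only.

def vms_per_lun(lun_gb: float, avg_vm_gb: float = 20, headroom: float = 0.20) -> int:
    """How many VMs fit on a LUN while keeping free space for snapshots, swap, and growth."""
    return int(lun_gb * (1 - headroom) // avg_vm_gb)

for lun in (220, 440, 584):   # LUN sizes discussed in this thread
    print(f"{lun} GB LUN -> roughly {vms_per_lun(lun)} VMs at ~20 GB each with 20% headroom")
```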

mike_laspina
Champion

Hi

I'm with kjb007: unless you have a solid reason to need more disk performance (specifically write performance, which RAID 10 optimizes) at the cost of storage capacity, I would use RAID 5.

http://blog.laspina.ca/ vExpert 2009

Shanew1
Contributor

Since most of you are recommending RAID 5 versus RAID 10, have you noticed any VM performance problems using RAID 5?

When we first started experimenting with VMware Server, we seemed to achieve significantly better performance with RAID 10, but this could have been related to the hardware.
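For context, the usual back-of-the-envelope write-penalty arithmetic shows why RAID 10 tends to benchmark better on write-heavy workloads; the per-drive IOPS figure and the 50/50 read/write mix below are illustrative assumptions, not measurements from the MSA2012FC:

```python
# Classic RAID write-penalty model: each host write costs 2 disk I/Os on RAID 10
# and 4 on RAID 5 (read data, read parity, write data, write parity).

def usable_iops(drives: int, iops_per_drive: int, write_penalty: int, write_pct: float) -> float:
    raw = drives * iops_per_drive
    return raw / (write_pct * write_penalty + (1 - write_pct) * 1)

for name, penalty in (("RAID 10", 2), ("RAID 5", 4)):
    print(f"{name}: ~{usable_iops(6, 175, penalty, write_pct=0.5):.0f} host IOPS "
          f"from a 6-drive set at a 50/50 read/write mix")
```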

Texiwill
Leadership

Hello,

RAID 5 is recommended because the array stays intact if one drive fails; with hot spares this is a good setup. With RAID 10, when one drive fails you fail over to its mirror, so that also works.

I like RAID 5, and I set up my drive space based on running 10-12 VMs per LUN, which is about the average number per LUN. It is all about redundancy.
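As a rough illustration of sizing a LUN for that 10-12 VM rule of thumb (the ~20 GB average VM size and 20% headroom are assumptions for the example, not from this post):

```python
# Size a LUN for a target number of VMs. Average VM size and free-space
# headroom are assumed values; substitute your own figures.

def lun_size_gb(vms: int, avg_vm_gb: float = 20, headroom: float = 0.20) -> float:
    return vms * avg_vm_gb / (1 - headroom)

for n in (10, 12):
    print(f"{n} VMs at ~20 GB each with 20% headroom -> ~{lun_size_gb(n):.0f} GB LUN")
```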


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill

kjb007
Immortal

VMware Server and ESX are very different in their control of hardware. I have not noticed performance problems using RAID 5 or RAID 6. But then again, my VM I/O requirements do not include sub-millisecond response times either.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB

mike_laspina
Champion

The only time I have issues with RAID 5 is when the active write data footprint is larger than the cache limits of the storage server; then you will see performance degrade. This primarily occurs with DB reorg activities.

It's not really a problem but you need to be aware of it.
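As a crude illustration of that effect (the cache size and per-drive IOPS figures below are assumptions for the example, not MSA2012FC specifications):

```python
# Once the active write set no longer fits in the controller's write-back cache,
# writes fall back to the full RAID 5 backend penalty.

CACHE_GB = 1                        # assumed controller write cache
CACHED_WRITE_IOPS = 5000            # assumed: writes acknowledged from cache
BACKEND_WRITE_IOPS = 6 * 175 // 4   # 6 drives, ~175 IOPS each, RAID 5 write penalty of 4

def effective_write_iops(active_write_set_gb: float) -> int:
    return CACHED_WRITE_IOPS if active_write_set_gb <= CACHE_GB else BACKEND_WRITE_IOPS

for gb in (0.5, 8):                 # e.g. routine writes vs. a large DB reorg
    print(f"{gb} GB active write set -> ~{effective_write_iops(gb)} write IOPS")
```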

http://blog.laspina.ca/ vExpert 2009

BUGCHK
Commander

> consider manually load balancing ownership of the LUNs

Good point. On the MSA2000 you would create two vdisks and assign each one to a controller. Then create a single volume per vdisk and present it to the server(s).
