divine_kane
Enthusiast

HP MSA2012i and vSphere4 Storage expansion

Jump to solution

Hi Everyone,

I know this question has been asked before for ESX 3.5, but I was wondering if anything has changed for vSphere 4.

We have an HP MSA2012i with dual RAID controllers and 6 x 500GB disks running in RAID 10, with a single vdisk (owned by controller A) providing virtual machine storage to 3 x ESX 4.0 hosts.  As we are now using over 80% of the storage, I have purchased a number of expansion disks (6 additional).  Reading the previous posts, it looks like I can either:

  • Build a separate vdisk and present it as separate storage to the ESX servers, perhaps placing it on controller B for redundancy
  • Build a separate vdisk and add an extent to the current storage (not sure what the risks are with doing this)
  • Any other option I have not yet thought of

Any help or advice on this would be greatly appreciated.

Many thanks in advance

Kane

If you found my post helpful, please mark it as 'helpful' or 'correct'.
0 Kudos
1 Solution

Accepted Solutions
a_p_
Leadership

I would probably choose the first option. Create a new vDisk and present it on controller B, not only for redundancy, but more for load balancing/distribution. In addition, you may consider using e.g. RAID 5 instead of RAID 10 if you don't really need the latter. This gives you a lot more capacity, and I don't think you will notice a performance difference on ESX because of it. Depending on the requirements of the individual VMs, you can then place them on the appropriate vDisk. However, remember that ESX 4.x still has a limit of 2 TB minus 512 bytes for a single LUN, so you may have to present the available disk space in multiple smaller slices. With ESX 5.0 this limit doesn't exist anymore.
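To put rough numbers on this, here is a back-of-the-envelope sketch in Python. It assumes the drives are 500 GB in decimal (vendor) gigabytes; actual usable space on the MSA will be slightly less after formatting overhead:

```python
# Capacity comparison for the 6 x 500 GB expansion disks,
# assuming 500 GB = 500 * 10**9 bytes (vendor decimal gigabytes).
DISK_BYTES = 500 * 10**9
N_DISKS = 6

raid10_usable = (N_DISKS // 2) * DISK_BYTES  # mirrored pairs: half the raw capacity
raid5_usable = (N_DISKS - 1) * DISK_BYTES    # one disk's worth of parity

# ESX 4.x per-LUN ceiling: 2 TB minus 512 bytes (SCSI-2 addressing limit)
LUN_LIMIT = 2 * 2**40 - 512

print(f"RAID 10 usable: {raid10_usable / 10**12:.2f} TB")
print(f"RAID 5  usable: {raid5_usable / 10**12:.2f} TB")
print(f"RAID 5 fits in one ESX 4.x LUN: {raid5_usable <= LUN_LIMIT}")
```

Note that the RAID 5 vdisk would exceed the 2 TB LUN limit, so it would indeed have to be carved into at least two smaller volumes on ESX 4.x.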

What I would definitely not do is to use Extents!

André

View solution in original post

0 Kudos
7 Replies
PduPreez
VMware Employee

Hi

I have to agree with Andre and go with option 1.

I do however want to add my 2 cents.

If I understand correctly, this all runs in a single drive enclosure.

If that's correct, I don't see the benefit of using RAID 10 in this setup.

The reason I'm saying that is you lose a lot of capacity, and the chance of losing 3 drives is much the same as losing the whole enclosure.

The enclosure is still the single point of failure (that's if my assumption is correct).

My suggestion is to rather use RAID 6, which can tolerate 2 disk failures, with a hot spare configured (just a thought!).

Create a RAID 6 set on the new drives, move all data from the old drives to the new RAID 6 set, then trash the RAID 10 set and add the old 6 drives to the RAID 6 set. You will end up with much more capacity: 11 drives in a RAID 6 configuration with 1 hot spare (or 2 if you want more redundancy).
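A quick sketch of the capacity maths behind that end state (assuming the same 500 GB drives throughout):

```python
# Usable capacity for the suggested end state: 12 drives total,
# 1 hot spare, the remaining 11 in a single RAID 6 set.
# RAID 6 spends two drives' worth of capacity on parity.
DISK_GB = 500
total_drives = 12
hot_spares = 1
raid6_members = total_drives - hot_spares

raid6_usable_gb = (raid6_members - 2) * DISK_GB   # parity costs 2 drives
raid10_usable_gb = (total_drives // 2) * DISK_GB  # same 12 drives as RAID 10 pairs

print(f"RAID 6 (11 members + 1 spare): {raid6_usable_gb} GB usable")
print(f"RAID 10 (12 drives):           {raid10_usable_gb} GB usable")
```

So the RAID 6 layout yields half as much again as RAID 10 across the same spindles, even after setting a drive aside as a spare.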

(If you do have 2 enclosures, do not span a RAID set over them)

I hope this makes sense.

regards

Pieter

Please award points if you find this helpful or correct.

divine_kane
Enthusiast

Thanks for the suggestions, I will be going for option 1. :)

Regarding your two cents, PduPreez: we are currently only using a single enclosure, so I understand what you say about RAID 6 and redundancy. In terms of redundancy, RAID 10 doesn't give us any additional benefit. I originally chose RAID 10 not just for its redundancy capability, but also for performance, as RAID 6 incurs a 6x write penalty against RAID 10's 2x. While I am not sure it really matters in our setup, we do run our SQL and Exchange services off the SAN, so I was looking to keep performance as good as possible. Though I am not sure which I would go for now.
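To illustrate what that write penalty means in practice, here is a rough sketch. The ~150 IOPS per spindle figure is an assumption for illustration, not a measured number for the MSA's drives:

```python
# Illustrative random-write throughput comparison, assuming roughly
# 150 IOPS per spindle (an assumption, not a measured MSA figure).
SPINDLE_IOPS = 150

def effective_write_iops(n_disks: int, write_penalty: int) -> float:
    """Raw aggregate spindle IOPS divided by the RAID write penalty."""
    return n_disks * SPINDLE_IOPS / write_penalty

# RAID 10 writes cost 2 back-end I/Os (write both mirrors); RAID 6 costs 6
# (read data, read both parities, write data, write both parities).
print(f"6-disk RAID 10 random writes: {effective_write_iops(6, 2):.0f} IOPS")
print(f"6-disk RAID 6  random writes: {effective_write_iops(6, 6):.0f} IOPS")
```

On random-write-heavy workloads like SQL and Exchange logs, that threefold gap is the reason RAID 10 is usually preferred; sequential and read workloads are much less affected.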

A good article I found on this recently is located here: http://www.techrepublic.com/blog/datacenter/raid-6-or-raid-1-0-which-should-you-choose/2689

What I don't understand is that everywhere I read on these forums there is a general 'do not use extents' statement, yet VMware's own KB and manual fully endorse their use. If they are so bad, why are they recommended?

Kane

If you found my post helpful, please mark it as 'helpful' or 'correct'.
0 Kudos
a_p_
Leadership

With vSphere 4.x and below, extents were the only way to create a datastore larger than 2 TB (due to SCSI-2 limitations). Before vSphere 4.x there was not even an option to resize/grow a datastore, so again the only option was to use extents. With vSphere 5 this is now "history", since you can access LUNs of up to 64 TB.

André

DSTAVERT
Immortal

Extents probably aren't as evil as we all think. However, if you were to lose or corrupt an extent, you would most likely lose everything. If all disks were in the same array it might be less evil, but in your situation you would have two arrays on two different controllers, which in my opinion is much more evil. So if there is some situation that can only be solved using extents, use extents; otherwise DON'T use extents.

-- David -- VMware Communities Moderator
0 Kudos
divine_kane
Enthusiast

DSTAVERT wrote:

Extents probably aren't as evil as we all think. However, if you were to lose or corrupt an extent, you would most likely lose everything. If all disks were in the same array it might be less evil, but in your situation you would have two arrays on two different controllers, which in my opinion is much more evil. So if there is some situation that can only be solved using extents, use extents; otherwise DON'T use extents.

OK, that makes sense.  Thanks for the answers and help, all. :)

Kane

If you found my post helpful, please mark it as 'helpful' or 'correct'.
0 Kudos
VeyronMick
Enthusiast

Extents are evil.

Corruption aside, the main issue with them is that if they get detected as snapshot LUNs, they can be impossible to get working again.

Back in 3.5, when you could grow a file system using extents, if you lost the partition table and didn't have an old fdisk output lying around, it wasn't easy getting the partition table back the way it was.

Option 1 has my vote too. :)

0 Kudos