VMware Cloud Community
manfriday
Enthusiast

EqualLogic PS6000E Setup

Hi guys,

I just got four EqualLogic PS6000Es in a few days ago.

They are each fitted with 16 1TB drives.

"The Plan" was to set up two PS6000's in our datacenter, and stick the other two in the datacenter of another division on our campus.. They are located about a block away.. And then replicate between them.

I have a few questions.

Currently we are using an MD3000i, and it is set up according to the dell recommendations, with iSCSI nics on separate subnets.

Is it possible to set up the EqualLogic boxes to use separate subnets as well?

I know you create a group with an IP address, and then it magically distributes the load across the PS6000's NICs.

Do those NICs all need to be in the same subnet?

If I create a group with IP address 192.168.11.1, can I have a config like this on my PS6000?

eth0: 192.168.11.2
eth1: 192.168.11.3
eth2: 192.168.12.2
eth3: 192.168.12.3

Also, what is the best Multipathing policy to use?

The MD3000i is set up to use Round-Robin.

I did a quick experiment with RR on the PS6000, and it did seem to perform a little better, but I don't want to screw something up and end up opening a black hole or anything.

As for RAID levels...

We used to use RAID 5 on pretty much everything, but we are considering other options here.

RAID 10 might cost us too much disk space.

So I'm leaning toward RAID 50 with two hot spares.

My boss is kinda excited about RAID 6. I have not used it, but what I have read leads me to believe it's going to end up being a drag performance-wise.

And if the performance is not good I will never hear the end of it.

Any thoughts or insights before I stick these guys into production would be appreciated.

Thanks!

Jason

3 Replies
jbogardus
Hot Shot

Reference page 49 of the iSCSI SAN Configuration Guide for info on setting up the EqualLogic to work with VMware, specifically multipathing.
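
If you're on vSphere 4, you can also set round robin per device from the CLI instead of clicking through the client. Roughly like this - the naa ID below is just a placeholder for one of your EqualLogic volumes, so substitute your own:

# list devices and their current path selection policy
esxcli nmp device list

# switch one volume to round robin (placeholder device ID)
esxcli nmp device setpolicy --device naa.6090a028xxxxxxxx --psp VMW_PSP_RR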

All of the EqualLogic IPs do need to be on the same subnet for best-practice configuration and proper operation. I have done otherwise a couple of times for specific management reasons during setup; the EqualLogic remained functional but logged errors about frequently re-establishing iSCSI client connectivity, which would impact performance in production.
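
For reference, putting all four ports on one subnet looks roughly like this from the PS Series group CLI (member name "member1" is just an example, and I'm going from memory on the prompts, so double-check the CLI Reference for your firmware):

GrpName> member select member1
GrpName(member member1)> eth select 0
GrpName(member member1 eth 0)> ipaddress 192.168.11.2 netmask 255.255.255.0
GrpName(member member1 eth 0)> up

Repeat for eth1 through eth3 with other 192.168.11.x addresses; the group IP (192.168.11.1 in your example) then spreads connections across them.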

You can start at RAID 10 and then convert to RAID 50 only when the extra space is needed. The conversion can be completed in several hours over a weekend without downtime.

In a lot of scenarios RAID 50 performance will be fine - it all depends on exactly what performance your applications need. If you have a lot of VMs simultaneously generating I/O, the I/O pattern hitting the SAN ends up being random access rather than sequential. With applications that are sensitive to I/O latency, the combined impact of random access and RAID 50 may be noticeable, and the fix is either a RAID 10 config or more spindles by adding another shelf to the storage group. If you are already planning on putting two shelves into one storage group at each site, you will already get some of the benefit of spreading the I/O over many spindles.
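
On the space question, a rough back-of-the-envelope with 16 x 1TB drives and two hot spares (the exact layout and spare count EqualLogic uses varies by RAID policy and firmware, so treat these as ballpark raw numbers before formatting overhead):

RAID 10: 14 drives mirrored -> 14 / 2 = 7 TB usable
RAID 50: two 7-drive RAID 5 sets -> 2 x (7 - 1) = 12 TB usable
RAID 6: 14 drives with 2 parity -> 14 - 2 = 12 TB usable

So RAID 50 and RAID 6 land in about the same place capacity-wise; the trade-off between them is mostly write performance versus rebuild safety.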

jbogardus
Hot Shot

As far as RAID 6, it is provided to deal with concern over long rebuild times with the large SATA drives in use these days. Long rebuilds create a greater theoretical possibility of a double disk failure - a second disk failing before the rebuild of the first completes - which would lose all data on the array. NetApp introduced their RAID-DP solution a few years ago for the same reason. In practice, in my experience and from what I've heard, disks fail very infrequently on EqualLogic shelves, making a double disk failure very unlikely. If you already plan to put a very good redundancy solution in place by replicating between sites, making your RAID level that extra bit safer with RAID 6 isn't really necessary. It all depends on how bulletproof you feel your environment needs to be, but I'd suggest it's not worth the capacity and performance losses that come with RAID 6.
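
On the performance worry specifically, the usual back-of-the-envelope is the write penalty for small random writes:

RAID 10: 2 disk I/Os per write (write both mirrors)
RAID 5: 4 per write (read data + read parity, write data + write parity)
RAID 6: 6 per write (the second parity block adds another read and write)

So on a random-write-heavy VM workload, RAID 6 delivers roughly two-thirds of the write IOPS of RAID 5/50 on the same spindles, which is where its "drag" reputation comes from. Reads are basically unaffected.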

EllettIT
Enthusiast

It looks like the latest EqualLogic firmware is an attempt to deal with an issue like you mentioned:

Firmware 4.2 - Preemptive Drive Correction

"In previous PS Series firmware versions, a suspect disk drive was marked as failed and the data was reconstructed on a spare disk using parity data; during the reconstruction, performance was degraded.

With the preemptive data copy functionality available in PS Series Firmware Version 4.2, data is copied from a suspect disk drive to a spare drive before the drive is marked as failed. Because the data has already been copied, no RAID reconstruction is necessary, so there is no impact on availability and very little impact on performance."

Also, if you do update the firmware, just remember that the primary and secondary controllers switch places. Not a big deal, but it had me scratching my head as to what had happened when I did it.
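
If you want to confirm which controller went active after the update, the Controllers tab for the member in Group Manager shows it. From the CLI I believe it's something like the following one-liner, but I'm going from memory, so check the CLI Reference:

GrpName> member select member1 show controllers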
