VMware Cloud Community
ras2a
Contributor

HP StorageWorks x1600 - getting best out of SAN

Hi all,

Here is our kit:

- x1 HP StorageWorks x1600 G2 SAN (x6 2TB SATA 7.2k HDDs in RAID 10, giving approx 6TB space) - Runs Windows Storage Server 2008 R2 Std

     - x3 LUNs presented to the Hosts over the iSCSI s/w initiator (this translates to x3 large Data Stores in vSphere)

- x2 HP DL380 G7 (x8 300GB SAS 10k HDDs in RAID 10, giving approx 1.2TB space)

- x1 HP DL380 G8 (x8 300GB SAS 10k HDDs in RAID 10, giving approx 1.2TB space)

Note: We're running vSphere 5.1

Couple of queries:

We have the main VMDKs for our File Server stored on one Data Store (x1 LUN on the SAN) - this is simply due to the amount of data. We also have MPIO configured on the Hosts for load balancing of the iSCSI traffic (separate subnet).
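For what it's worth, on the host side you can check whether MPIO is actually balancing across both iSCSI NICs by looking at the path selection policy on each device. A minimal sketch for ESXi 5.x below - the `naa.xxxx` device identifier is a placeholder, substitute your own LUN's ID:

```shell
# List each device's current path selection policy (PSP)
esxcli storage nmp device list

# Set a device to Round Robin so IO rotates across both active paths
# (the default Fixed/MRU policies only use one path at a time)
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
```

With the default Fixed policy, the second path sits idle until the first fails; Round Robin is what makes the two host NICs share the load.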

However, although the SAN itself has x2 physical NICs, the HP documentation recommends configuring one of the NICs for management access and one for the storage network. It does, however, give the option to fail over to the other NIC should the first NIC fail - excerpt from the online help file below:


  1. Under Preferred Storage Network, select a network that will be designated to manage all iSCSI traffic.

  2. Under Available for Failover, select a network that can be used to manage iSCSI traffic in the event that the preferred storage network fails. In the event of iSCSI initiator failure, iSCSI traffic fails over to the designated network.

I currently have the SAN management NIC on a different subnet from the SAN storage NIC, so common sense dictates that if the first NIC fails, the Hosts will not be able to see the storage: they would be attempting to connect to the iSCSI target on a different subnet from the SAN. I do not see any other configuration options within Windows Storage Server to configure MPIO etc. The only thing I can think to change would be to ensure that the second NIC is on the same subnet as our other iSCSI network NICs in the hosts (each host has x2 NICs for the iSCSI network)... however, this would mean SAN management traffic going over the iSCSI storage network, which is against best practice. Any advice on how to correctly configure this would be much appreciated, unless I'm simply limited by what the SAN can do?

My last question is regarding 'average' read/write latencies for decent performance. Bear in mind, our SAN is using SATA disks running @ 7.2k, configured in RAID 10 (we requested SAS disks, but they were deemed too expensive). Please see the attached image of the performance chart (below) for the Data Store where our two main File Server VMDKs are held. I'm just wondering if moving one of the VMDKs to one of the other Data Stores would improve performance (lower latency), or would this make little difference given that all SAN spindles are in a single RAID 10?

What are 'decent' latency times for SATA 7.2k disks in RAID 10? I'm seeing averages of around 10-15ms... does this indicate a major performance bottleneck?

Any assistance you guys can give would be most appreciated indeed

Regards

-ras


3 Replies
William22
Enthusiast

Hi

Welcome to the communities.

You should configure clustered storage to guard against a disaster or a storage failure.

"With normal actions you get normal  results."
mcowger
Immortal

I'm not qualified to discuss Windows Storage server...but I can talk about latencies.

10-15ms for SATA disks actually isn't that bad, and doesn't indicate a major problem. It *is* on the edge of starting to become an issue (for example, Oracle won't talk to you about performance issues if you get above 15ms)... so I wouldn't expect you could put much more workload on those disks, but they aren't doing poorly right now.

*generally*, 10-15ms is considered the upper bound for 'good' performance.
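That figure also squares with simple disk mechanics. A rough back-of-envelope sketch, assuming a typical ~8.5 ms average seek time for a 7.2k SATA drive (an assumption - the exact figure varies by model):

```python
# Approximate service time for one random IO on a 7.2k rpm SATA disk:
# average rotational latency is half a revolution, plus the average seek.
rpm = 7200
rotational_latency_ms = (60 / rpm) / 2 * 1000  # half a revolution, ~4.17 ms
avg_seek_ms = 8.5                              # assumed typical 7.2k SATA seek
service_time_ms = rotational_latency_ms + avg_seek_ms
print(round(service_time_ms, 2))               # ~12.67 ms
```

So a lightly queued 7.2k spindle physically delivers random IO in roughly the 12-13ms range; observed averages of 10-15ms mean the disks are near their natural service time, not buried in queueing.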

--Matt VCDX #52 blog.cowger.us
Josh26
Virtuoso

ras2a wrote:

My last question is regarding 'average' read/write latencies for decent performance? Bear in mind, our SAN is using SATA disks running @ 7.2k,

For those hard disks, and particularly on the X1600 platform, what you are seeing is pretty good. Note that the RAID level has a very minor impact on performance on this hardware.

The X1600 supports a NIC team, but that will effectively operate on an active/passive basis.
