VMware Cloud Community
briggsb
Contributor

HP P2000 SAN ISCSI, 2 node VMWare cluster - best performance?

Hi,

I'm really not experienced with VMware at all, at this level, so we've brought in some consultants to carry out our implementation.

We've got HP kit, consisting of 2 x 2910 switches, a P2000 with 22 x 300GB disks, and 2 x DL380 servers with dual CPUs, plenty of RAM, etc.

The consultants have been a little vague in explaining the logic of how the SAN connects to the servers via the switches.

What we've ended up with is puzzling me a little. We have 8 cables from the P2000 SAN (4 on each controller) going into our switches (4 into each switch, 2 from each controller). But we've only got 2 cables going from each server back to the switches. We have 2 spare physical NIC ports on each server, and I can't understand why you'd not want to use them all?

Can someone help me out here? I don't want to lose performance, but more to the point, I don't want to entrust this whole thing to a consultant if they're not doing the best thing for us during implementation.

I'm old school physical servers, but really, the more NICs, the better potential performance?

Many thanks, Alan

4 Replies
briggsb
Contributor

Any thoughts anyone?

admin
Immortal

This document might help you:

http://www.pcconnection.com/IPA/PM/Brands/HP/Storage/~/media/F330970A766C49759021D82E5B513E6E.pdf?v=...

One reason might be to keep the spares for an emergency scenario when the production NICs go down. Check with the consultants whether those ports are being held back for failover.


MKguy
Virtuoso

I'm old school physical servers, but really, the more NICs, the better potential performance?

Right, but the key word here is "potential", or rather "theoretical". Most IO workloads are not bound by bandwidth but by the IOPS the disk system is able to provide, and the resulting latency. You can stress a storage array heavily with many small IOs which, in total, amount to so little bandwidth that they would even fit into a 100Mbit network connection.
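To put a rough number on that, here is a back-of-the-envelope calculation (the IOPS figure and block size are made-up but plausible values for a small-block random workload, not measurements from Alan's setup):

```python
# Hypothetical small-block random IO workload: 2,000 IOPS at a 4 KB block size.
iops = 2000
block_size_bytes = 4 * 1024

# Convert IOPS x block size into line-rate bandwidth (Mbit/s).
throughput_mbit = iops * block_size_bytes * 8 / 1_000_000
print(f"{throughput_mbit:.1f} Mbit/s")
```

2,000 IOPS of 4 KB reads is only about 65 Mbit/s of bandwidth, yet it can already keep a 22-spindle array busy. That's why adding host NICs rarely helps such workloads.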

Operations that are usually bandwidth intensive are backup processes or moving VMs between datastores (Storage vMotion), but the latter is largely made obsolete by VAAI, which offloads such storage operations to the SAN itself. The P2000 supports this as well.

So your SAN connects with a total of 8 ports, while your hosts "only" connect with a total of 4. This might seem asymmetric or "inefficient", but remember that the two SAN controllers also mirror data between each other and thus naturally need more bandwidth.

It's hard to tell without knowing what you're going to run on these hosts and the SAN, but I would say your config is fine and you won't see a real difference in performance whether you use 4 or 2 NICs to connect the hosts.

Here is also a best practices document for Lefthand/P2000 based SANs with vSphere:

http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-p4000-lefthand-san-solutions.pdf

-- http://alpacapowered.wordpress.com
briggsb
Contributor

Thanks both for your replies, very helpful. I am far less concerned about the performance aspect of it all now, and more about the resiliency. I think the 2 switches are supposed to be "redundant" in case a single switch fails, but as there are 2 controllers, surely each one needs a connection to each switch: 1 cable from controller 1 into switch 1, 1 cable from controller 1 into switch 2, and so on.

Thanks again, I will probe the consultant about this and let you know...

Alan
