VMware Cloud Community
Ollfried
Enthusiast

Setup iSCSI w/ p4000 and separate Switches

Folks,

I want to do the following: I have two P4500 G2 storage nodes with 2x 1Gb NICs each. I also have three DL380 G7 servers with 8x 1Gb NICs each. Lastly, I have 2x ProCurve 2910 switches dedicated to iSCSI. I want to create a fully redundant setup with two separate Layer 1/2 networks, as I have done before with e.g. a P2000 G3 iSCSI array:

- Network 1:

  • NIC1 of each p4500 node
  • NIC3 of each ESXi
  • Subnet: 10.0.32.0/24

- Network 2:

  • NIC2 of each p4500 node
  • NIC4 of each ESXi
  • Subnet: 10.0.33.0/24

I want to access the volumes using the round-robin path policy for performance and resilience. The servers' LOMs are HW iSCSI capable, and I can assign different IP addresses to each of the storage nodes' NICs.
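
Roughly, on each ESXi host I was thinking of something like this (ESX/ESXi 4.x console commands; the vSwitch names, vmnic numbers and IPs are only examples for one host):

    # Network 1 (10.0.32.0/24): own vSwitch, one uplink, one VMkernel port
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -A iSCSI-A vSwitch2
    esxcfg-vmknic -a -i 10.0.32.11 -n 255.255.255.0 iSCSI-A

    # Network 2 (10.0.33.0/24): second vSwitch, second uplink, second VMkernel port
    esxcfg-vswitch -a vSwitch3
    esxcfg-vswitch -L vmnic3 vSwitch3
    esxcfg-vswitch -A iSCSI-B vSwitch3
    esxcfg-vmknic -a -i 10.0.33.11 -n 255.255.255.0 iSCSI-B

As far as I understand, even with the dependent hardware iSCSI LOMs the VMkernel ports still have to exist on those subnets, so the host-side layout should look the same either way.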

Some of the documentation I read says that I should bond the P4500s' NICs and keep all iSCSI traffic on one subnet. But that would mean I have to create a single Layer 1/2 network instead of separating them.

Does anyone have experience with this?

taylorb
Hot Shot

What would be the benefit of two separate physical networks? If you have redundant connections to redundant switches, then you have redundancy.

I have 2 P4500 units and two separate Cisco 6500 series switches in our data center. The P4500 units are set up in a cluster (as recommended) and each has a bonded 2Gb connection. Each P4500 is connected to a different switch. On the VMware side, I have set up a vSwitch with two VMkernel ports and two physical NICs. Each NIC goes to a different switch.

So at this point, if I lose a NIC, the other NIC can still hit the switch.

If I lose a switch, the ESX boxes are still connected to the other switch. One P4500 becomes isolated, but since they are in a cluster with a virtual IP for the pair, it doesn't matter, as the ESX hosts can still hit that virtual IP. The data is also clustered, so they can access it from either P4500.

If I lose a P4500, the clustering and availability of the P4500 units comes into play. With only 2 units, make sure you install the (free) Failover Manager (FOM) appliance as a VM to gain full failover function. HP says you need 3 or more nodes in the cluster for failover.
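
In case it helps, the port binding on my hosts looks roughly like this (ESX/ESXi 4.1 console; the vmk/vmnic numbers and the vmhba are placeholders from my setup, and each iSCSI port group has its NIC teaming overridden so it only has one active uplink):

    # bind both VMkernel ports to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33

    # verify the binding
    esxcli swiscsi nic list -d vmhba33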

Ollfried
Enthusiast

Thanks for your answer! I understand your setup, but I did not mention one thing: I actually don't have two switches, but four. The setup will be divided between two locations, with two switches at each, but all hosts at one location for now.

I have one 10Gb module in each of the switches, and the idea is to create two 'fabrics', as one would do in an FC world. That's why I would like to create two separate networks.

I could also connect the switches between the two locations with 10Gb links and connect the two switches within each room with a trunk/LAG, but then I would have to use STP.

I really appreciate your comments and would like to discuss this. Maybe I am a blockhead.

taylorb
Hot Shot

So are you going to have the two P4000s in separate locations, as in one at each location?

Ollfried
Enthusiast

Correct. I know this is not a multi-site cluster, and I know about the FOM and so on.

I see a strange issue with my setup:

  • I configure 10.0.32.201/24 for nic1
  • ping works, node is reachable
  • I configure 10.0.32.202/24 for nic2 (which has no link yet)
  • communication stops
  • I disable nic2
  • communication works again
vRick
Enthusiast

Follow the vSphere iSCSI docs. They allow you to have full hardware redundancy. Separate subnets have no benefit in your setup. Of course, keep iSCSI storage on a separate subnet from other traffic, but don't put separate paths to the same SAN on different subnets. Best practice is to use a VLAN dedicated to iSCSI if other traffic such as vMotion or FT is on the same switch. Be aware that some hardware iSCSI adapters are not fully supported even though they show up on the list. You will find many horror stories about Broadcom 5709s, but I haven't heard anything about the NC382i cards. The big glitch is jumbo frames with dependent hardware iSCSI. If you have a problem, fall back to software iSCSI with jumbo frames. It performs better than hardware iSCSI without jumbos. The offloading can't compete with jumbo frames as long as you have the processor power, and most modern servers have more than enough.
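
If you do end up on software iSCSI with jumbo frames, it is roughly this on the ESX(i) 4.x console (the vSwitch/port group names and the IP are just examples, and the physical switch ports need jumbo frames enabled as well):

    # raise the MTU on the iSCSI vSwitch
    esxcfg-vswitch -m 9000 vSwitch2

    # the MTU of an existing VMkernel port can't be changed in place in 4.x,
    # so delete and re-create the vmknic with MTU 9000
    esxcfg-vmknic -d iSCSI-A
    esxcfg-vmknic -a -i 10.0.32.11 -n 255.255.255.0 -m 9000 iSCSI-A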

BTW, check with the SAN manufacturer about how to connect the switches. Some require a LAG or crosslink; others say not to link the switches at all.

Rick Merriken, MCSE, MCT, CNE, CNI, VCI
Connectioneers, Inc.
410 740-6696

Ollfried
Enthusiast

I cannot configure IP addresses in the same subnet on interfaces that do not share a Layer 2 network (at least not with the P4000 VIP). If I use only one subnet, I will have to connect all the switches together.

Maybe I can configure a cluster VIP without using it, but P4000 clusters need to have at least one VIP assigned. Maybe I can assign two VIPs and use them both as portal addresses.
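
If two VIPs turn out to be possible, I would just add both as dynamic discovery addresses on the initiator, something like this on the ESX(i) 4.x console if I remember the syntax right (the vmhba and the VIP addresses are only placeholders; the Dynamic Discovery tab in the vSphere Client does the same thing):

    # add one send-target discovery address per subnet/VIP
    vmkiscsi-tool -D -a 10.0.32.50 vmhba33
    vmkiscsi-tool -D -a 10.0.33.50 vmhba33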

taylorb
Hot Shot

With one at each site, the biggest point of failure is the P4500 itself. It's basically a ProLiant DL185 and about as reliable as your average server hardware. These are really designed to be used in clusters of 2-8 units, and when clustered they become very reliable and increasingly fast.

With one at each site, I can see why you would not want to bond them to a single switch. Just connect one interface to each switch (on both the VMware and HP storage sides) and you have as much redundancy as you are going to get. Same or different subnet doesn't really matter at that point. You'll need a separate VMkernel port and vSwitch for each connection on the VMware side.
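
Once both paths show up against a volume, you can switch it to the round-robin policy you mentioned; on 4.x it's something along these lines per device (the naa ID below is just a placeholder):

    # find the device ID of the P4000 volume
    esxcli nmp device list

    # set round-robin as the path selection policy for that device
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR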

As a somewhat longtime HP P4500 user, I would STRONGLY suggest running them in a clustered pair at each site.

Ollfried
Enthusiast

Which target IP do you configure on your hosts? The bond IPs, or just the VIP?

taylorb
Hot Shot

You'd point them at the virtual IP. I don't know how it works with multiple IPs, though, as I just have the one with the 2-NIC bond in my setup.

You also have to define the ESX servers on the LeftHand side so they have access to the LUNs.

Craer
Contributor

I don't understand why you want to run the LeftHand nodes with each NIC connected to a separate subnet.

Best practice is, at each site, to connect NIC1 to switch 1 and NIC2 to switch 2 with the NICs bonded, using the same subnet across both sites. You should only have one VIP. For the LeftHand nodes to replicate, you want them on the same subnet.

Now, if you were putting in four P4000 nodes at completely different sites, you could take advantage of the SAN/iQ 9 multi-site feature, and that can run on separate subnets. But with only two nodes you won't do this.

Here's a good link on the P4000 implementation: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf
