jonb157
Enthusiast

IBM SVC multipathing confusion

Jump to solution

We are running ESX 3.5 Update 4, so I know the fixed policy is now supported, but the real question is which one should we be using? Also, do we set up paths manually, or does Native Multipathing handle it for the IBM SVC? In the past I've used NetApp and installed their tools on the ESX console and managed everything through that, but I don't believe IBM SVC has anything like it. Our SVC is version 4.2.1.6.

1 Solution

Accepted Solutions
thecakeisalie
Enthusiast

There isn't any software from IBM that does the same job as NetApp's HUK config_mpath script. IBM didn't build anything that elaborate for VMware, so you have to set up all the preferred paths manually.

For the SVC specifically, I'd strongly recommend keeping the preferred path for each LUN consistent. Using esxcfg-mpath you can set a preferred path per LUN, alternating by LUN number between vmhba1 to target1 and vmhba2 to target2. For example, you'd have two paths to LUN0, one to each target through each HBA (single-initiator zoning on the switch). You'd set LUN0's preferred path through vmhba1, then LUN1's preferred path through vmhba2, and so on. This way you're load balancing, albeit manually, between the targets on the SVC.
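The alternation described above can be sketched as a small script that generates one command pair per LUN. The path names (vmhba1:0:N, etc.) and flag spellings are illustrative, from memory of the ESX 3.5 esxcfg-mpath tool; verify the exact syntax with `esxcfg-mpath -h` on your host before running anything.

```python
# Sketch: generate esxcfg-mpath commands that alternate the preferred
# path for each LUN between two HBAs, as described above.
# Path names and flag spellings are illustrative assumptions --
# check `esxcfg-mpath -h` on your ESX 3.5 host.

def preferred_path_commands(lun_count, hbas=("vmhba1", "vmhba2")):
    """Return one (set-policy, set-preferred) command pair per LUN,
    alternating the preferred HBA so load is spread across both targets."""
    cmds = []
    for lun in range(lun_count):
        hba = hbas[lun % len(hbas)]   # even LUNs -> vmhba1, odd -> vmhba2
        path = f"{hba}:0:{lun}"       # target 0 on that HBA (illustrative)
        cmds.append(f"esxcfg-mpath --policy=fixed --lun={path}")
        cmds.append(f"esxcfg-mpath --preferred --path={path} --lun={path}")
    return cmds

for c in preferred_path_commands(4):
    print(c)
```

Print the commands first and eyeball them against `esxcfg-mpath -l` output rather than piping them straight to a shell.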

The NetApp config_mpath makes this process a LOT easier, because it goes and logs into the filer for you and finds out what the preferred path to each LUN is from it directly.


2 Replies
Khue
Enthusiast

I am not sure what you are talking about here, but when I have done mapping between my ESX hosts and my SVC cluster all I have had to do is present the vdisks in the appropriate order to the host devices. The native software on the ESX servers pretty much handles all the multipathing so no 3rd party installs are needed. Here is my exact process:

-Manually set up connections between the ESX host and BOTH fabrics

-Log into the Brocade 2005-B16 switches (or equivalent) and set up zones for BOTH fabrics. The key here is that each port on the host device (the ESX 3.5u4 server) sees two SVC node ports: the one serving fabric A and the one serving fabric B. Again, this is my environment and yours may vary
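As a rough sketch of the zoning step, the single-initiator layout above can be expressed as one zone per host HBA port. The alias and zone names here are made up for illustration, and the generated line only mirrors the Brocade Fabric OS `zonecreate` form; confirm the exact syntax on your own switches.

```python
# Sketch: single-initiator zoning, one zone per host HBA port, as
# described above. Alias/zone names are hypothetical; the output
# imitates Brocade Fabric OS `zonecreate` syntax -- verify on-switch.

def single_initiator_zones(host, hba_aliases, svc_alias):
    """One zone per host HBA alias, each pairing that initiator with
    the SVC node ports visible on the same fabric."""
    return [
        f'zonecreate "{host}_{hba}_svc", "{hba}; {svc_alias}"'
        for hba in hba_aliases
    ]

# Each fabric gets its own zone, e.g.:
print(single_initiator_zones("esx01", ["hba1"], "svc_fabA")[0])
print(single_initiator_zones("esx01", ["hba2"], "svc_fabB")[0])
```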

-Once the Zoning is done log into the SVC system and go into the "work with hosts" subsection within the cluster. Click on hosts. Bypass the filter.

-At the drop down control box select "Create a Host" and click "Go."

-The WWPNs of the host should appear automatically as new, unallocated WWPNs. Assign these. If you do not see the WWPNs in this section, your zoning is wrong and must be corrected first. Do not manually input the WWPNs for the HBAs.

-Finish the UI setup of the hosts, then go to "Work with virtual disks." Bypass the filter (or filter if you know the name of the vdisk you want to map), select the appropriate vdisk, and select "Map vdisk to host." Select the host in the UI and remember to check the "Force..." checkbox, as you will have multiple connections to the same vdisk if you are using DRS/HA. IMPORTANT: map vdisks to the host servers in the appropriate order, one at a time, so the identifiers end up identical across the environment. If they do not match, the setup is wrong.
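The identical-identifiers warning above can be sanity-checked with a small script. The host and vdisk names and the mapping data structure here are hypothetical; on the SVC you would gather the real mappings from the `svcinfo lshostvdiskmap` CLI output.

```python
# Sketch: verify each vdisk got the same SCSI ID on every host, per the
# warning above. Input data is hypothetical; on the SVC, pull real
# mappings from `svcinfo lshostvdiskmap`.

def consistent_scsi_ids(maps):
    """maps: {host: {vdisk_name: scsi_id}}. Return the sorted list of
    vdisks whose SCSI ID differs between hosts (should be empty)."""
    seen = {}
    bad = set()
    for host, vdisks in maps.items():
        for vdisk, scsi_id in vdisks.items():
            if seen.setdefault(vdisk, scsi_id) != scsi_id:
                bad.add(vdisk)
    return sorted(bad)

maps = {
    "esx01": {"vmfs_gold": 0, "vmfs_silver": 1},
    "esx02": {"vmfs_gold": 0, "vmfs_silver": 2},  # mismatch on vmfs_silver
}
print(consistent_scsi_ids(maps))  # -> ['vmfs_silver']
```

An empty result means every host sees each vdisk at the same SCSI ID, which is what you want before putting VMFS volumes into production.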

-Once the first vdisk is mapped, log into the VIC and connect to the server.

-Go to configuration tab

-Go to storage adapters

-Select "Rescan..." and scan only for new VMFS volumes. This will save you some time, since scanning for VMFS volumes alone is routinely faster than scanning for both VMFS volumes and new disk volumes.

-The UI will discover your new vmfs volume

-Once the new VMFS volume is discovered, you will not be able to see the pathing info until the volume is actively in use. To deal with this I usually move my service virtual machine (a Win2k3 utility server for resizing vmdks and such) onto that particular storage volume using SVMotion. You should then see the active path and the available paths.

-When you have verified that you are comfortable with this setup you may add another vdisk in the same manner.

Hope this helps. Again, using the SVC and VMware's out-of-the-box functionality, I did not have to add any special tools to the ESX servers to make this work. I am using native multipathing, but remember that the SVC abstracts a large portion of the multipathing, so while you may only see one active path at a time, both paths are present. I have observed this during service windows when I had to take down one fabric at a time; failover was seamless.
