Gidday folks and thanks in advance for anyone with tips to offer.
I'm having "fun" trying to determine the best way to configure the iSCSI connections from a host server running ESXi 5.5U2 connected via dual DirectAttach cables to a dual controller HP MSA2040 SAN.
The SAN has 2 large vdisks running RAID10, each vdisk being a single datastore with one owned by Controller A (with B as a fallback) and the other owned by Controller B (with A as a fallback).
I have a pair of SFP+ ports on the server connected to the SAN using 10Gb DirectAttach cables with SFP+ modules at each end, and both the server and SAN seem happy with that arrangement. At the SAN end, one DA cable is connected to Port A-1 (first port on Controller A) and the other to Port B-1 (first port on Controller B). The idea is that datastoreA would normally be accessed via DAcableA on Port A-1 and datastoreB via DAcableB on Port B-1, with either port able to pick up the other vdisk/datastore if its controller failed.
From what I can find online, VMware seems to favour NIC binding as the best practice for redundancy over multiple-port iSCSI connections: a single NIC as the main iSCSI connection, with the other NIC as a standby failover path. But if I did that, then whichever iSCSI connection was chosen as the active one, the controller connected to the standby path would not be accessible, so that controller's vdisk would fail over to the SAN's other disk controller. One controller would then be serving both vdisks, killing the benefit of dual controllers sharing the workload.
How can I have ESXi 5.5U2 setup so that access to one datastore/vdisk is normally through one iSCSI port/cable and access to the other vdisk/datastore is normally through the other iSCSI port/cable with each one being the failover backup for the other?
I've never had to set up a SAN in this fashion before, so I'm very green at this. Any help or suggestions would be most appreciated. The HP best practices guide shows the host server cabled up with dual iSCSI cables, one into Controller A and the other into Controller B, but it doesn't show how to configure vSphere to make that work in practice. Also, if we add a second server with the same setup, except using the second port on each of the SAN controllers, is it configured the same way?
Thanks and remember some fairly plain English responses would be most appreciated. Apologies if I missed any typos!
Hey Steve,
I haven't worked with the MSA 2040 myself, but I've used EVAs and P2000s. As far as I can gather, the 2040 is ALUA aware, so while the controllers will essentially own their respective LUNs, vSphere will be able to see and access all LUNs over all paths and will automatically select the optimized path (i.e. the path to the owning controller) for each. If you Manage Paths for each datastore, you should see that the SATP that claimed the array is VMW_SATP_ALUA (or an array-specific ALUA SATP if one exists). If that's the case, then should you lose a controller or a cable, the remaining path will become 'optimized' and vSphere will simply use that path instead. Check the PSP too: the default is probably VMW_PSP_MRU, which always sticks with the most recently used optimized path unless and until a better path becomes available. I'd suggest using this PSP if it's not already set. So check that your array is ALUA enabled (it might need a firmware upgrade or similar) and, if so, vSphere should take care of this bit for you. See this doc for what's probably a more coherent way of describing it.
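If it helps, you can check the SATP and PSP from the ESXi shell instead of the vSphere client. These are standard esxcli commands on 5.5; the naa. device ID below is a placeholder for whatever your LUNs actually show as:

```shell
# List the NMP configuration for every device - look for
# "Storage Array Type: VMW_SATP_ALUA" and the Path Selection Policy
esxcli storage nmp device list

# If a LUN isn't using MRU, set it explicitly (substitute the
# real naa ID from the listing above for the placeholder)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_MRU
```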
As for the port binding, the way I've always done this is to create a separate VMkernel port for each HBA/NIC port and override the teaming and failover settings on each so that only one physical port is active and the other is unused (and vice versa for the other VMkernel port). So vmk1, say, would have vmnic1 active and vmnic2 unused, while vmk2 would have vmnic1 unused and vmnic2 active. Then bind each VMkernel port to the iSCSI adapter individually. Again, see here for a better description!
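For what it's worth, the same binding can be done from the ESXi shell. This is just a sketch assuming a standard vSwitch, the software iSCSI adapter at vmhba33, vmk1/vmk2 as the two iSCSI VMkernel ports and iSCSI-1/iSCSI-2 as their portgroup names; substitute your own:

```shell
# Pin each iSCSI portgroup to a single active uplink
# (portgroup names here are hypothetical - use yours)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-1 --active-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-2 --active-uplinks vmnic2

# Bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Confirm both bindings are listed
esxcli iscsi networkportal list --adapter vmhba33
```

Port binding will only accept a VMkernel port whose portgroup has exactly one active uplink, which is exactly the one-active/one-unused layout described above.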
Set up like this, vSphere should always use the optimized path, that is, the path via the physical NIC port that's directly attached to the controller owning the destination LUN. If that path is unavailable for whatever reason, it'll use the other path, which can still reach those LUNs thanks to ALUA (if you lose a cable) or the array's controller failover (if you lose a controller).
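To sanity-check the result, assuming the ALUA SATP claimed the array, you can list the paths per device; each datastore's LUN should show its working I/O on the expected vmhba/controller port:

```shell
# Show every path with its ALUA group state - expect "active" on the
# optimized path to the owning controller and "active unoptimized"
# on the path through the other controller
esxcli storage nmp path list

# A more detailed per-path view, including target/transport details
esxcli storage core path list
```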
I hope this all makes sense and helps somewhat, good luck!
