VMware Cloud Community
timjwatts
Enthusiast

Dell MD3860i PowerVault iSCSI SAN + ESXi 6

I'm asking this question both here and on the Dell forums. The MD3860i has dual active controllers, each with two 10GbE iSCSI links.

Question: how does multipathing in ESXi 6(.5) work with this? Are any special setup steps or best practices required?

Reason: Each RAID disk group, and each LUN within it, is preferentially owned by one of the two controllers. Each of the four iSCSI ports has its own IP, so multipathing needs to be smart enough to choose the two correct IPs on a per-LUN basis (those on the LUN's owning controller). Choosing the wrong IP (the other controller) still works (via an internal pathway, I guess) but at something like a 7:1 performance penalty according to fio tests I've been running.

I do not see any special MPIO driver pack for the MD3860i SAN on Dell's website (and I have raised this with Dell), unlike the EqualLogics, which did have a special MPIO driver.

Is this something that ESXi figures out, or is it helped by the PV MD vCenter plugin?

As an aside, I just noticed the vCenter plugin is only available for ESXi 6.0 - it's not showing for 6.5. I wondered if anyone else had noticed that? (Also raised with Dell.)

Cheers - Tim

PS: for our needs, the MD3860i is otherwise a good choice. Populated with 44 mixed-use 960GB SSDs arranged in two RAID6 disk groups and accessed by 4 hosts, I am seeing performance figures of up to 19k IOPS (4k blocks, totalled over all 4 hosts) and the ability to saturate all four 10G links in an ideal test using random reads/writes with 66% reads (which reflects our workload). For a small unit it is very nippy.

4 Replies
daphnissov
Immortal

There is no vendor-provided MPIO plug-in needed for this storage. vSphere's native NMP recognizes and supports it. The preferred multipathing policy is Round Robin.
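If it helps, this is roughly how that looks from the command line (a sketch, assuming ESXi 6.x esxcli syntax; the naa. device ID below is a placeholder):

```shell
# See which SATP/PSP has claimed each device
esxcli storage nmp device list

# Set Round Robin on a single device (device ID is a placeholder)
esxcli storage nmp device set --device naa.600a098000xxxxxx --psp VMW_PSP_RR

# Or make Round Robin the default PSP for everything the ALUA SATP claims
esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_ALUA
```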

VMware Compatibility Guide - Storage/SAN Search

timjwatts
Enthusiast

Thank you.

Do you know how to ensure that round robin doesn't send I/O to the wrong controller, while still allowing it as a fallback should one controller fail? (What happens then is that all LUNs have their ownership transferred to the remaining controller.)

Can you set a metric on each path that RR respects?

Sorry if this is a dumb question - I don't have an ESXi 6 setup yet to check the options, and the last setup I did was 4.1 with an EqualLogic - so I'm in the planning and soak-testing phase right now.

daphnissov
Immortal

Do you know how to ensure that round robin doesn't send I/O to the wrong controller, while still allowing it as a fallback should one controller fail? (What happens then is that all LUNs have their ownership transferred to the remaining controller.)

The PSP takes care of all of this automatically. All you focus on is configuring and cabling the array properly.
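To sketch what that means in practice (assuming ESXi 6.x esxcli; the device ID is a placeholder): under ALUA, each path carries a group state, and Round Robin only rotates I/O across the active (optimized) paths. The other controller's paths sit at "active unoptimized" and are used only if the optimized group disappears, e.g. after a controller failover.

```shell
# Per-path ALUA state for one LUN (device ID is a placeholder)
esxcli storage nmp path list --device naa.600a098000xxxxxx

# In the output, paths via the owning controller report:
#   Group State: active
# and paths via the non-owning controller report:
#   Group State: active unoptimized
# Round Robin schedules I/O only over the "active" group.
```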

timjwatts
Enthusiast

Thanks ever so much - your replies are very much appreciated.

I did find something concerning "Asymmetric Logical Unit Access" (ALUA), which the MD38xxi series apparently supports (optimal path selection for active-active arrays).

Checking again on the HCL, the MD3860i is listed as:

Firmware Version: 08.10
Test Configuration: SW iSCSI
MPP Plugin: NMP
SATP Plugin: VMW_SATP_ALUA
PSP Plugin: VMW_PSP_RR

So I presume the ALUA SATP handles the magic? This is new to me - I'm just convincing myself that I've covered all the bases for the best setup :)
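For anyone else checking the same thing: once the LUNs are presented, you can sanity-check that claiming happened as the HCL describes (a sketch, assuming ESXi 6.x; the grep patterns just pick out the relevant output fields):

```shell
# Confirm each MD3860i device was claimed by VMW_SATP_ALUA / VMW_PSP_RR
esxcli storage nmp device list | grep -i -e "storage array type:" -e "path selection policy:"
```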

(Edit)

In fact this has led me back to the MD3860i manual, to a place I would not have known to look before:

VMware ESXi 5.x does not have Storage Array Type Plug-in (SATP) claim rules automatically set to support ALUA on the MD Series storage arrays. To enable ALUA, you must manually add the claim rule.

and

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V DELL -M array_PID -c tpgs_on

and

esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_ALUA

(or --satp VMW_SATP_LSI, depending on which SATP claims the array)

================================

Nothing about ESXi 6/6.5 - but at least I know what to check for now.
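A note for anyone following along: before adding the claim rule by hand on 6.x, it's worth listing the built-in rules to see whether an entry for the MD series already exists (a sketch; the grep terms are just illustrative):

```shell
# List SATP claim rules and look for an existing Dell/ALUA entry
esxcli storage nmp satp rule list | grep -i -e dell -e alua
```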
