willc2004
Contributor

MSA 2040 Direct attached iSCSI

We are setting up a vSphere 5.5 environment with three hosts directly attached to an MSA 2040 via iSCSI. Is directly attaching the hosts to the MSA an issue, or should we be using switches? Our hardware vendor is saying:

 

"in order for VMWare volumes to have access through shared storage it has to through a switch. You can run into issues where if they are directly attached then the VMDK’s on one host would not be seen or accessed on the other hosts (high availability or Vmotion)."

 

Also

 

"Support of direct attach to the msa via iSCSI is OS dependant.  Some OS'S support it others don't. MSA doesn't support direct connect to Vmware via iSCSI. "

I've talked to HP directly, and they state that direct attach through iSCSI is supported with the MSA 2040 and VMware.

Is anyone running this setup who can put my mind at ease?

Thanks.

 

1 Solution

Accepted Solutions
DigitallyAccura
Enthusiast

Hello,

Yes, you can use an MSA 2040 directly attached.

I am using this configuration myself with 10Gb DAC cables, and have also sold this solution to clients.

Please note that each host should have two connections to the SAN (if the SAN has dual controllers): one connection to each controller.

Four hosts directly attached is supported (each controller has four ports).

Multiple subnets need to be used and configured to allow for proper path discovery. Also, a BIG note to emphasize: DO NOT USE iSCSI PORT MAPPING!

Make sure to enable round robin. Note that you will only see one active path performing I/O per datastore (since only a single controller owns a given volume); however, in the event of a cable or controller failure, it will fail over to the other controller.
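
If it helps, round robin can be enabled from the ESXi shell. A sketch only, assuming the MSA volumes claim under the ALUA SATP (the device identifier below is a placeholder, not a real ID):

```shell
# Make VMW_PSP_RR (round robin) the default path policy for ALUA-claimed
# arrays such as the MSA 2040, so newly presented volumes pick it up:
esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA

# Or set it per device; "naa.600c0ff000..." is a placeholder for your
# volume's actual device identifier:
esxcli storage nmp device set --device=naa.600c0ff000... --psp=VMW_PSP_RR

# Verify the path selection policy and the claiming SATP per device:
esxcli storage nmp device list
```

Setting the default per SATP saves you from repeating the per-device command every time you present a new volume.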

I've been using this setup for a couple years and absolutely love it!

9 Replies
landouk
Contributor

If you have enough ports on the back of the MSA to attach 3 hosts and still have controller redundancy, then it'll be ok. But I suspect each controller only has 2 ports, so that means if you want redundancy, you can only attach 2 hosts to each controller.

willc2004
Contributor

Thank you for the responses.

I've been using the MSA 2040 directly attached to three hosts for a couple of weeks without a problem. HA and vMotion have been working flawlessly.

RichardM1974
Contributor

How did you handle IP addresses in this scenario? A different subnet per SAN controller?

i.e.:

Controller A - 172.16.80.0/24

Controller B - 172.16.90.0/24

and then each of the dual ports on the server adapters configured for the subnet it is directly connected to?

benjaminsmithhp
Contributor

*Re-Posted using my HPE rather than HP account*

Hi there,

I just stumbled across this post and I must inform you that this statement is incorrect. I'm not sure who the original poster spoke to at HP (now HPE), but this has never been certified and is therefore definitely not supported by us (HPE), which, as the storage vendor's position, is the most important view.

Whether something works or not does not equate to support, and given the number of views this thread has, and that there are recent comments, I felt it necessary to step in. If you're prepared to take the risk of running an unsupported configuration, then that is, of course, your choice - I just want you to be aware.

In terms of what is supported, you must refer to HPE SPOCK (Single Point of Connectivity Knowledge), which is the last word on these matters; it even supersedes the VMware HCL (which also says to check with the storage vendor), though they should of course ideally be in agreement: HPE Storage Single Point of Connectivity Knowledge - SPOCK

If you're buying from an HPE certified partner, they are able to validate this information on your behalf.  Partners also have access to HPE solution architects to verify and validate information where they are unsure.

If you look within the Compatibility Tool of SPOCK for the combination of VMware vSphere 2013 (ESXi 5.5) and the MSA 2040/2042, you will see this caveat:

iSCSI Initiator Notes
1) All standard ProLiant NICs are supported in conjunction with the OS iSCSI Initiator. Direct connect is not supported.

Furthermore, if using a CNA (hardware initiator), there is a column for whether direct connect is supported, and there is an 'N' denoting 'No' for all supported adapters.

Regarding the recent question about IP subnetting, the best practice would be that ports A1, B1, A3, and B3 are on one subnet, and A2, B2, A4, and B4 on another. Mixing this around is acceptable; however, for redundancy you would not place Controller A's ports on one subnet and Controller B's on another.
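
As a sketch of that layout for one host - the addresses, vmk interface names, and adapter name below are hypothetical, so adjust them to your environment:

```shell
# Host 1, NIC 1 -> a controller port on subnet 1 (e.g. 172.16.80.0/24):
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=172.16.80.101 --netmask=255.255.255.0 --type=static

# Host 1, NIC 2 -> a controller port on subnet 2 (e.g. 172.16.90.0/24):
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=172.16.90.101 --netmask=255.255.255.0 --type=static

# Bind both vmkernel ports to the software iSCSI adapter (name varies per host):
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

With direct attach, each host NIC cables straight to one controller port, so host 1 might cable to A1 (subnet 1) and B2 (subnet 2), giving it a path to each controller on separate subnets.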

If you're wondering where I am getting my information, I'm the Technical Presales lead for the MSA (and all of Entry Storage) for HPE.

*Update*

To be clear, it is VMware themselves that omit the documented evidence to indicate support for direct-attached iSCSI SAN arrays.  This restriction is therefore imposed not by HPE but rather VMware and is common across all vendors.  For FC, direct connection is specifically mentioned and supported with specific combinations of Array and HBA.

Kind regards,

Benjamin Smith
Entry Storage Lead
TecHub EMEA – Prague

Hewlett-Packard Enterprise

benjamin.smith@hpe.com

jsprinkleISG
Enthusiast

benjaminsmithhp wrote: "To be clear, it is VMware themselves that omit the documented evidence to indicate support for direct-attached iSCSI SAN arrays.  This restriction is therefore imposed not by HPE but rather VMware and is common across all vendors."

Regarding this statement, I'd like to point out that other storage vendors do indeed support a direct-attached iSCSI configuration, e.g., Dell EMC Unity arrays. See their document Configuring Hosts to Access VMware Datastores. In Chapter 3, entitled Setting up a host to use Unity VMware VMFS iSCSI datastores, there's a note that says, "Directly attaching an ESX host to a Unity system is supported."

So, benjaminsmithhp, wouldn't it make sense for HPE to test this configuration and change their policy to support direct-attached iSCSI with the MSA? That is, unless there is actually a technical reason not to.

Consider these points:

  • People report that they do successfully use this configuration with MSA
  • VMware merely "omit the documented evidence" - they don't specifically prohibit it
  • The storage vendor has the last word on supportability
  • By not supporting it, HPE puts their storage solutions at a competitive disadvantage
_Vicente
Contributor

Opinion: VMware is not transparent about iSCSI direct-attached SANs.

Evidence: the VMware Compatibility Guide Storage/SAN search does not show compatibility for any array with iSCSI direct attach, but it does show information about FC and SAS connections (direct or switched). You can select these under "Array Test Configuration".

benjaminsmith_h
Contributor

Hi James,

Apologies for the delay, I have been off work for the past two weeks.

This is speculation, but it is most likely the fact that Dell EMC and VMware are part of the same company that has led them to offer support. It is of course not without irony, however, as this would mean that whilst the potential problems of offering support in lieu of the OS vendor are, in their view, navigable, creating the documentation set in the first place is not.

It is indeed the storage vendor's prerogative to offer support where the OS vendor does not, but first let's be clear: VMware do not support this. The absence of a statement to the contrary does not equate to a support statement; if a configuration is not explicitly in the HCL, then it is not supported. I should also mention that the vast majority of our customers are not satisfied with support being listed only in our support streams within SPOCK; rather, they insist on seeing a corresponding statement in the OS vendor's streams, in this case the HCL.

As an example: for Fibre Channel, VMware provide us with a bench test for which we return the logs; within these tests are options for both direct-connect and switched environments. If the logs contain what they want to see, then a given configuration will be added to the VMware HCL and thus considered supported. As close to in unison as possible, we then update our support streams as the de facto support statement from the storage vendor. Note, then, that the direct-connection options for these tests are absent for iSCSI. Without a set of VMware-specified tests, the storage vendor would have to take the risk of offering support.

Now to the point of why HPE will not make this leap. Firstly, I should mention that these are discussions that, in various forms, I have had and continue to have within engineering, and it's not an unknown gap. Nevertheless, the outcome is always the same: we will not support something which the OS vendor themselves do not.

Q: Do customers do it anyway? A: Yes, some do. It is usually those who have either not sought the correct input from HPE or our partners, or who have received incorrect information; for example, the incorrectly accepted answer to this thread by 'DigitallyAccurateInc'.

Q: Will it cause issues during a support call? A: It depends.  If the problem is in any way related to storage, then whether the configuration is supported should always come into question.  I would think that some support cases ignore this, and others do not.  It certainly creates an element of risk, which is a bad word in business.

I hear you regarding the competitive disadvantage, but I would say it is not significant, given that iSCSI is the least-used protocol in direct-connect environments; I fully agree, though, that this is less than ideal. In fact, I would very much like to see this supported, but I can only suggest that customers lobby VMware to make the necessary changes so that we can offer official support. Until then, we will not change it.

I encourage everyone to speak to the storage vendor when in doubt. Forums are not the best places to get answers to this kind of thing; here, for example, the wrong answer came before the right one. That said, I do see the value in a discussion of the 'why not', as that gives HPE, in this case, a chance to explain our position. Of course it's imperative that we're in on the discussion, so perhaps the real problem is that this thread is on the VMware site and not HPE's...not to say I haven't seen wrong answers there too from time to time.

Kind regards,

Ben

jsprinkleISG
Enthusiast

Thanks, benjaminsmith_hpe, for your insight into some of the reasoning behind HPE's policy on this. You make some good points. I went ahead and submitted a feature request to VMware to support iSCSI Direct Attach.

I will, however, point out that Dell EMC are not the only major storage vendor to support iSCSI Direct Attach to VMware hosts. NetApp also support it with their E-Series arrays, according to their support matrix.
