habibalby
Hot Shot

Best Practice DS3512 iSCSI Dual CTRL with vSphere ESX 5.0

Hello,
My DS3512 is ready to host three ESXi 5.0.0 (build 623860) servers in iSCSI mode.

What is the best practice to configure iSCSI and ESXi? Is it possible to use Round Robin multipathing with this kind of storage?

What is the supported and recommended approach from VMware/IBM?

Thanks,

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
logiboy123
Expert

Have you tried the redbook from IBM?

http://www.redbooks.ibm.com/abstracts/sg247914.html?Open

habibalby
Hot Shot

Hello,

Yes, I have gone through this book, but it doesn't mention how to utilize all eight iSCSI ports on the DS3500. In the VMware support documentation, I have read that it supports only MRU.

Thanks,

Hussain

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
w1ll1ng
Enthusiast

Habibalby,

Any update on what you ended up doing for iSCSI best practice with the 3512? It seems to need a different approach, because I don't believe you can aggregate ports, and you should use different IPs for each interface. Also, any confirmation on the PSP?

habibalby
Hot Shot

Hello,
I ended up using this storage as a repository SAN for my backups. All the documentation says it supports MPIO with RR, but if one NIC fails you will get I/O errors and drops in the data transfer; the LUN, however, will not disappear from vSphere.

If you disable jumbo frames (back to the normal MTU of 1500) and set the LUN to MRU, data will continue to transfer, but very slowly. I don't know the reason, and support doesn't know about this issue either.
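
For anyone trying to reproduce this, the changes were along these lines (vSwitch1, vmk1, and the naa.xxxxxxxx device ID are placeholders; substitute your own names):

    # Drop jumbo frames back to the standard MTU on the vSwitch and the VMkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 1500
    esxcli network ip interface set -i vmk1 -m 1500

    # Set the LUN's path selection policy back to Most Recently Used
    esxcli storage nmp device set -d naa.xxxxxxxx --psp VMW_PSP_MRU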

I have a bunch of emails going back and forth, the latest one yesterday, in which they claimed that jumbo frames and MPIO were configured with non-IBM-supported switches. In fact, the IBM SAN and the EMC AX SAN are connected to the same switches, and I've never faced this issue with the EMC storage.

I have blogged about it; you can have a look, and I'm ready to do any testing for you: http://dailyvmtech.wordpress.com

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
w1ll1ng
Enthusiast

Hi,

It does work well with firmware above 7.22.xx: MPIO with RR, and no errors with dead NICs. However, there is no sense in using all eight NICs; use a number that matches what is coming from the hosts, split across the two controllers. It's a true active/active, true ALUA environment, which is great from such a basic SAN.

habibalby
Hot Shot

Hi,

I'm curious to know how you have configured it: how many pNICs do you have on your host, and how have you managed the vSwitch configuration?

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
w1ll1ng
Enthusiast

Hey,

Each host had 2 pNICs, hence I used 2 NICs per controller head. The vSwitch is configured with the pNICs in the recommended active/unused fashion; this approach is in most documentation at the moment. There is no LACP or EtherChannel on the pSwitch side, so redundancy is provided by VMware's native MPIO software. Each NIC on the same controller has to be in a different subnet, so group NICs in the same subnet from each controller; each pNIC would then be on these respective subnets. The ALUA SATP and PSP are supported from the firmware versions mentioned. I also used the RR algorithm to share the load and changed the IOPS threshold from 1000 to 8 (this can be changed on the fly); there's a rough sketch of the commands below. You should use two switches for network redundancy as well. Hope this helps...
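
As a rough sketch, setting Round Robin and the IOPS threshold looked something like this (the naa.xxxxxxxx device ID is a placeholder; list your own devices with esxcli storage nmp device list):

    # Set the path selection policy for the LUN to Round Robin
    esxcli storage nmp device set -d naa.xxxxxxxx --psp VMW_PSP_RR

    # Lower the Round Robin IOPS threshold from the default 1000 to 8,
    # so the path rotates after every 8 I/Os instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxx --type iops --iops 8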

habibalby
Hot Shot

Thanks for your response. Have you tried benchmarking the disk speed with the default MRU and with Round Robin?

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
habibalby
Hot Shot

How many paths do you see in total? I have just done the same as what you described:

CTRL-A

• Port-3 10.10.30.1
• Port-4 10.10.30.2

CTRL-B

• Port-3 10.10.40.1
• Port-4 10.10.40.2

In ESXi, only one vSwitch is connected to the two pNICs, vmnic3 and vmnic9. I have created two VMkernel port groups for iSCSI, iSCSI-01 and iSCSI-02: for iSCSI-01, vmnic3 is active and vmnic9 is unused; for iSCSI-02, vmnic9 is active and vmnic3 is unused (a sketch of the commands is below).
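
Roughly, the uplink overrides and port binding were done like this (the vmhba33 adapter name and the vmk1/vmk2 interface names are from my setup; yours may differ):

    # Pin one active uplink per iSCSI port group; the unlisted uplink stays unused
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-01 -a vmnic3
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-02 -a vmnic9

    # Bind both VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2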

I did the VMkernel port binding, added the two iSCSI ports, and changed the datastore path policy to Round Robin, but all I can see is 4 paths:

10.10.40.1 Active(I/O)

10.10.40.2 Active(I/O)

10.10.30.1 Standby

10.10.30.2 Standby
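
For reference, this is how I listed the paths and their states (placeholder device ID again):

    # List every path to the LUN along with its group state
    esxcli storage nmp path list -d naa.xxxxxxxx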

Best Regards, Hussain Al Sayed. Consider awarding points for "correct" or "helpful".
w1ll1ng
Enthusiast

Hi,

Seeing Standby clearly indicates this is not using the desired MPIO. You should see four paths to each device, all Active, but only two Active (I/O), and those would be the two to the owning head. The other two go to the other head, where NVRAM comes into play if there is a failure on the first head, so there is no transfer of the device to a new owning head (the old approach). You have to make sure you have the new storage firmware that supports this feature; you will know by your SATP detection, which you can check as sketched below. After the storage upgrade, you may have to enable this function on the devices' host groups on the storage before reconnecting. Hope this helps.
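
If it helps, one way to confirm which SATP claimed the device (placeholder device ID; with the ALUA-capable firmware I would expect VMW_SATP_ALUA rather than the older VMW_SATP_LSI):

    # Show the SATP and PSP currently claiming the LUN
    esxcli storage nmp device list -d naa.xxxxxxxx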
