VMware Cloud Community
NuggetGTR
VMware Employee

Issues with HP storage in ESXi 5

Hi all,

I'm not a storage guy, but I've got a bit of a problem that has fallen my way, and I just wanted to see if others have seen this.

We recently upgraded to ESXi 5 across the board. We have EMC storage, both CLARiiON and Symmetrix, and everything is working perfectly on that front.

Now we are moving over to an HP 9500 array with virtual LUNs, and this is where the problem arises.

Apparently the storage is presented to all the hosts, but only about a quarter of them can see it, and in most cases only one HBA can see it.

The hosts that can see it pick up 8 targets on the HBA; all the rest can only see 7.
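For reference, this is roughly how I'm comparing what each HBA sees on each host (just a quick sketch that reuses the lpfc820 proc nodes I've pasted further down plus the standard esxcfg tools; it assumes busybox grep/awk/sort are enough):

# per-HBA target count straight from the Emulex proc nodes
grep "Discovered Nodes" /proc/scsi/lpfc820/*

# unique adapter:channel:target combos from the path list, with the LUN path count for each
# (assumes the first column of esxcfg-mpath -L is the runtime path name, e.g. vmhba1:C0:T7:L50)
esxcfg-mpath -L | awk -F: '{print $1 ":" $2 ":" $3}' | sort | uniq -c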

The logs on the hosts that can see it show:

2012-02-22T03:50:51.100Z cpu8:4256)ScsiScan: 1098: Path 'vmhba1:C0:T7:L50': Vendor: 'HP      '  Model: 'OPEN-V          '  Rev: '5001'
2012-02-22T03:50:51.100Z cpu8:4256)ScsiScan: 1101: Path 'vmhba1:C0:T7:L50': Type: 0x0, ANSI rev: 2, TPGS: 0 (none)
2012-02-22T03:50:51.102Z cpu4:4256)ScsiScan: 1582: Add path: vmhba1:C0:T7:L50
2012-02-22T03:50:51.104Z cpu9:499134)ScsiPath: 4541: Plugin 'NMP' claimed path 'vmhba1:C0:T7:L50'
2012-02-22T03:50:51.105Z cpu9:499134)ScsiCore: 1455: Power-on Reset occurred on vmhba1:C0:T7:L50
2012-02-22T03:50:51.112Z cpu8:499134)vmw_psp_fixed: psp_fixedSelectPathToActivateInt:479: Changing active path from NONE to vmhba1:C0:T7:L50 for device "Unregistered Device".
2012-02-22T03:50:51.114Z cpu8:499134)VMWARE SCSI Id: Id for vmhba1:C0:T7:L50
0x60 0x06 0x0e 0x80 0x16 0x00 0x6b 0x00 0x00 0x01 0x00 0x6b 0x00 0x00 0x90 0x04 0x4f 0x50 0x45 0x4e 0x2d 0x56
2012-02-22T03:50:51.117Z cpu10:499134)ScsiDeviceIO: 5837: QErr is correctly set to 0x0 for device naa.60060e8016006b000001006b00009004.
2012-02-22T03:50:51.117Z cpu10:499134)ScsiDeviceIO: 6333: Could not detect setting of sitpua for device naa.60060e8016006b000001006b00009004. Error Not supported.
2012-02-22T03:50:51.120Z cpu9:499134)ScsiDevice: 3121: Successfully registered device "naa.60060e8016006b000001006b00009004" from plugin "NMP" of type 0

It looks like it has some issues, but it does add the device, and I can then create a datastore etc. from that LUN.
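For what it's worth, this is how I've been double-checking the device after a rescan (sketch only; the device ID is taken from the log above):

# confirm the HP LUN registered and list its paths
esxcli storage core path list -d naa.60060e8016006b000001006b00009004

# check which SATP/PSP NMP picked for it
esxcli storage nmp device list -d naa.60060e8016006b000001006b00009004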

The HBAs being used may be a little old, but the latest Emulex driver seems to be working fine for the hundreds of other datastores.
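In case it matters, this is what I'm using to confirm the driver bits on each host (rough sketch; it just greps for anything lpfc-related rather than assuming an exact VIB name):

# installed Emulex driver VIB(s)
esxcli software vib list | grep -i lpfc

# loaded lpfc module(s)
esxcli system module list | grep -i lpfc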

Here are the /proc readouts for the two HBAs:

Emulex LightPulse FC SCSI 8.2.2.105.36

HP BLc Emulex LPe1105-HP FC Mezz Option Kit on PCI bus 0000:12 device 01 irq 145 port 1

BoardNum: 1

ESX Adapter: vmhba2

Firmware Version: 2.80A4 (ZS2.80A4)

Portname: 50:06:0b:00:00:c3:52:46   Nodename: 50:06:0b:00:00:c3:52:47

SLI Rev: 3

   MQ: Unavailable

   NPIV Supported: VPIs max 127  VPIs used 0

   RPIs max 512  RPIs used 11   IOCBs inuse -2  IOCB max -1   txq cnt 0  txq max 0  txcmplq 0

   Vport List:

Link Up - Ready:

   PortID 0x6f8e00

   Fabric

   Current speed 4G

Port Discovered Nodes: Count 7

t0005 DID 6f001e WWPN 50:06:04:84:52:aa:3d:09 WWNN 50:06:04:84:52:aa:3d:09 qdepth 8192 max 3 active 0 busy 0

t0003 DID 6f02ef WWPN 50:06:01:6c:3b:20:2a:42 WWNN 50:06:01:60:bb:20:2a:42 qdepth 8192 max 81 active 0 busy 0

t0000 DID 6f06ef WWPN 50:06:01:65:3b:20:2a:42 WWNN 50:06:01:60:bb:20:2a:42 qdepth 8192 max 97 active 0 busy 0

t0004 DID 6f63ef WWPN 50:06:01:66:3b:20:4c:9f WWNN 50:06:01:60:bb:20:4c:9f qdepth 8192 max 54 active 0 busy 0

t0006 DID 6f66ef WWPN 50:06:01:6f:3b:20:4c:9f WWNN 50:06:01:60:bb:20:4c:9f qdepth 8192 max 60 active 0 busy 0

t0002 DID 6fffea WWPN 50:06:01:6a:3c:a0:21:9f WWNN 50:06:01:60:bc:a0:21:9f qdepth 8192 max 1 active 0 busy 0

t0001 DID 6fffee WWPN 50:06:01:63:3c:a0:21:9f WWNN 50:06:01:60:bc:a0:21:9f qdepth 8192 max 1 active 0 busy 0

/proc/scsi/lpfc820 # cat 3

Emulex LightPulse FC SCSI 8.2.2.105.36

HP BLc Emulex LPe1105-HP FC Mezz Option Kit on PCI bus 0000:12 device 00 irq 137 port 0

BoardNum: 0

ESX Adapter: vmhba1

Firmware Version: 2.80A4 (ZS2.80A4)

Portname: 50:06:0b:00:00:c3:52:44   Nodename: 50:06:0b:00:00:c3:52:45

SLI Rev: 3

   MQ: Unavailable

   NPIV Supported: VPIs max 127  VPIs used 0

   RPIs max 512  RPIs used 12   IOCBs inuse -1  IOCB max 10   txq cnt 0  txq max 0  txcmplq 0

   Vport List:

Link Up - Ready:

   PortID 0xb1e00

   Fabric

   Current speed 4G

Port Discovered Nodes: Count 8

t0005 DID 0b0023 WWPN 50:06:04:84:52:aa:3d:06 WWNN 50:06:04:84:52:aa:3d:06 qdepth 8192 max 2 active 0 busy 0

t0000 DID 0b03ef WWPN 50:06:01:64:3b:20:2a:42 WWNN 50:06:01:60:bb:20:2a:42 qdepth 8192 max 118 active 0 busy 0

t0003 DID 0b09ef WWPN 50:06:01:6d:3b:20:2a:42 WWNN 50:06:01:60:bb:20:2a:42 qdepth 8192 max 57 active 0 busy 0

t0006 DID 0b63ef WWPN 50:06:01:6e:3b:20:4c:9f WWNN 50:06:01:60:bb:20:4c:9f qdepth 8192 max 34 active 0 busy 0

t0001 DID 0b66ef WWPN 50:06:01:67:3b:20:4c:9f WWNN 50:06:01:60:bb:20:4c:9f qdepth 8192 max 37 active 0 busy 0

t0004 DID 0bffea WWPN 50:06:01:6b:3c:a0:21:9f WWNN 50:06:01:60:bc:a0:21:9f qdepth 8192 max 82 active 0 busy 0

t0002 DID 0bffee WWPN 50:06:01:62:3c:a0:21:9f WWNN 50:06:01:60:bc:a0:21:9f qdepth 8192 max 99 active 0 busy 0

t0007 DID 33000c WWPN 50:06:0e:80:16:00:6b:03 WWNN 50:06:0e:80:16:00:6b:03 qdepth 8192 max 2 active 0 busy 0

Any ideas why the EMC storage is fine but the HP isn't?

I'm guessing that because I can't see the targets on all hosts, there has to be a zoning or masking issue on the switch?
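This is roughly what I was planning to hand to the SAN guys to check from the ESXi side (sketch only; if this build doesn't have the "storage san" esxcli namespace, the Portname/Nodename lines in the proc output above give the same WWPNs):

# local HBA WWPNs to cross-check against the fabric zones
esxcli storage san fc list

# does the HP target port (t0007 above) show up on any path at all?
# (esxcli prints target identifiers without colons, e.g. fc.<wwnn>:<wwpn>)
esxcli storage core path list | grep -i 50060e8016006b03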

I have a bad feeling these HBAs aren't on the HCL for ESXi 5, but if it was a huge issue I thought I would have seen some errors with the other storage while testing before the upgrade. There were none, and there still aren't any, except for the HP storage, and only on the hosts that can see it; the hosts that can't see the target don't log any errors because there's nothing visible to error on. :)

Any ideas?

Cheers

________________________________________ Blog: http://virtualiseme.net.au VCDX #201 Author of Mastering vRealize Operations Manager
2 Replies
mcowger
Immortal

*hundreds* of other LUNs?

How many? There is a limit of 256 LUNs per host - are you sure you are not exceeding that?
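If you want a quick sanity check on one host (rough one-liner; it counts the header line too, so knock one off):

# compact device listing, one line per LUN plus a header
esxcfg-scsidevs -c | wc -l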

--Matt VCDX #52 blog.cowger.us
NuggetGTR
VMware Employee

Yeah, no, we haven't exceeded that.

There would be maybe 500 LUNs in total, but there are 6 clusters, and no single cluster is close to the 256 limit.

________________________________________ Blog: http://virtualiseme.net.au VCDX #201 Author of Mastering vRealize Operations Manager