VMware Cloud Community
TimMcGee
Contributor

0 Gbps - ESXi 6.5 update 3 host not connecting to storage

Hello,

We have ESXi 6.5 Update 3 running on Cisco B200 blades behind 6248 Fabric Interconnects.

We have purchased new 6454 Fabric Interconnects, and I have moved one chassis and one blade over to the new FIs for testing.

The blade/ESXi host connects to the network and pings fine.

However, if I try to rescan storage, it takes 30 minutes and fails. If I try to reboot, it takes over an hour and doesn't attach all of the storage successfully.

If I unmap all storage, the host reboots and rescans as normal, and it sees the array just fine.

The zoning shows the links at 8 Gbps, and the FIs also report 8 Gbps.

However, VMware says the speed is 0 Gbps. See the output below.

I have checked the compatibility lists, and our fnic, enic, VMware, and UCS versions all appear to be compatible.
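For reference, these are roughly the commands I used on the host for that check (a sketch only; it assumes the fnic/enic module names, which would be nfnic/nenic with the newer async drivers):

[root@kcesx9:~] esxcli software vib list | grep -iE 'fnic|enic'   # installed driver VIB versions
[root@kcesx9:~] vmkload_mod -s fnic | grep -i version             # version of the loaded fnic module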

Has anyone seen this?

Any help is much appreciated!

[root@kcesx9:~] esxcli storage san fc list

   Adapter: vmhba1

   Port ID: 0A3A03

   Node Name: 20:00:00:25:b5:a5:02:df

   Port Name: 20:00:00:25:b5:a1:02:df

   Speed: 0 Gbps

   Port Type: NPort

   Port State: ONLINE

   Model Description:

   Hardware Version:

   OptionROM Version:

   Firmware Version:

   Driver Name:

   DriverVersion:

   Adapter: vmhba2

   Port ID: 0B1203

   Node Name: 20:00:00:25:b5:a5:02:df

   Port Name: 20:00:00:25:b5:b1:01:df

   Speed: 0 Gbps

   Port Type: NPort

   Port State: ONLINE

   Model Description:

   Hardware Version:

   OptionROM Version:

   Firmware Version:

   Driver Name:

   DriverVersion:

==================

These are some of the NAA IDs that came up with connectivity issues.

naa.6001738c7c80534900000000000134a8

naa.6001738c7c8053490000000000021977

naa.6001738c7c805349000000000002204c

naa.6001738c7c805349000000000001349f

naa.6001738c7c8053490000000000013499

    

In the logs:

2020-07-06T19:19:51.900Z cpu11:65844)WARNING: HBX: 2580: Failed to cleanup VMFS heartbeat on volume5e604295-151f2b8e-871a-0025b500a516: No connection

2020-07-06T19:19:51.900Z cpu23:66148)ScsiDeviceIO: 3015: Cmd(0x439590d03b40) 0x28, CmdSN 0xa from world 74231 to dev "naa.6001738c7c805349000000000002204c" failed H:0x8 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.

2020-07-06T19:19:51.900Z cpu23:74231)LVM: 5953: PE grafting failed for device naa.6001738c7c805349000000000002204c:1, vol 5e6079a4-96282218-633b-0025b500a516/7688724183283841769: Timeout

2020-07-06T19:19:51.900Z cpu11:65844)WARNING: HBX: 2580: Failed to cleanup VMFS heartbeat on volume5e604295-151f2b8e-871a-0025b500a516: No connection

2020-07-06T19:19:51.900Z cpu11:65844)Vol3: 3073: Error closing the volume: No connection. Eviction fails.
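For completeness, this is roughly how I pulled those entries out of the host log (assuming the default vmkernel log location):

[root@kcesx9:~] grep -E 'naa.6001738c7c805349|No connection' /var/log/vmkernel.log   # failed I/O and heartbeat errors around the rescan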

4 Replies
scott28tt
VMware Employee

When you say VMware you mean ESXi.

That's like saying Microsoft when you could be talking about an Xbox, Azure, a Surface Pro or Word.


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
TimMcGee
Contributor

Yes, you are correct. It's been a long week. 🙂

The ESXi host is not connecting to storage in the new UCS environment.

It shows 0 Gbps for the speed.

Thanks!

abhilashhb
VMware Employee

Is this ESXi host part of a cluster or just a standalone host?

Looking at this KB on SCSI sense codes (VMware Knowledge Base) and the status in your log snippet, the error points to VMK_SCSI_HOST_RESET = 0x8. This status is returned when the HBA driver aborts the I/O; it also occurs if the HBA resets the target.

This looks like either a driver mismatch or some kind of issue with the HBA card.

Can you check the driver version on the UCS hardware and its compatibility with ESXi 6.5?
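Something along these lines should show what the host is actually running; just a rough sketch, assuming a Cisco VIC adapter using the fnic FC driver:

vmkchdev -l | grep vmhba             # PCI IDs (VID:DID SVID:SSID) to match against the VMware HCL
esxcli storage core adapter list     # which driver each vmhba is bound to, plus link state

You can then compare those against the UCS firmware bundle and the interoperability matrix.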

Abhilash B
LinkedIn : https://www.linkedin.com/in/abhilashhb/

TimMcGee
Contributor

Hello,

Thank you for the reply.

It is part of a cluster. I have tried this with the host both in and out of a cluster.

I can map the host to storage, and in VCSA under Storage Adapters I can see all of the LUNs.

However, when I click on Datastores, I see only some of them, or none at all. It seems to be an issue at the file system level. I can't add a new datastore either.
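For anyone following along, these are roughly the checks I ran at the file system level to see whether the volumes are detected but just not mounting (standard ESXi commands, nothing environment-specific assumed):

[root@kcesx9:~] esxcli storage filesystem list   # which VMFS volumes are actually mounted
[root@kcesx9:~] esxcfg-volume -l                 # volumes detected as snapshots / unresolved
[root@kcesx9:~] vmkfstools -V                    # force the host to re-read its VMFS volumes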

I am still seeing 0 Gbps when running esxcli storage san fc list.

The port state is online also.

It seems like it has to be a driver issue, but the enic, fnic, and UCS/HBA versions are all at compatible levels.
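One more thing I double-checked was that the loaded modules match the installed VIBs, in case the host hadn't picked up a driver update; a rough sketch, again assuming the fnic/enic module names:

[root@kcesx9:~] esxcli system module list | grep -iE 'fnic|enic'   # modules actually loaded and enabled
[root@kcesx9:~] esxcli system module get -m fnic                   # load and enablement state for fnic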
