VMware Cloud Community
mnaitvpro
Enthusiast

ESXi Lost access to volume


Hello Gurus,

Of late I have noticed an event log entry in the vSphere Client, twice within a span of less than a day, related to local storage, as follows:

Lost access to volume 4f4c8bc0-4d13eab8-c8fc-5cf3fc09c3fa (vms-1) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.
info
8/18/2013 2:01:22 AM
vms-1

and immediately

Successfully restored access to volume 4f4c8bc0-4d13eab8-c8fc-5cf3fc09c3fa (vms-1) following connectivity issues.
info
8/18/2013 2:01:22 AM
nesxi1.x.x

The event details themselves recommend the "Ask VMware" link, which leads to VMware KB: Host Connectivity Degraded and VMware KB: Host Connectivity Restored.

As per those KBs, VMware is referring to a SAN LUN, but in our case it is local storage. Kindly shed some light on why local storage would lose its connectivity.

Note: all the local disks are on RAID-10.
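For anyone who wants to dig further, a rough starting point from the ESXi shell would be something like the commands below. This is only a sketch assuming the default 5.x log locations; the UUID is simply the one from the events above.

# Map the VMFS volume to the local device backing it
esxcli storage vmfs extent list
esxcfg-scsidevs -m

# Pull the connectivity/heartbeat events for that volume out of the logs
grep -i "4f4c8bc0-4d13eab8-c8fc-5cf3fc09c3fa" /var/log/vmkernel.log
grep -i heartbeat /var/log/vobd.log

The vmkernel entries around the same timestamp may show whether the underlying device (i.e. the RAID controller path) reported the delay behind the lost-access event.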

thanks

25 Replies
EliassenO
Contributor

Hi, any updates on this issue?

malabelle
Enthusiast

Hi,

We installed the following driver and it has been stable for the past couple of weeks.

VMW-ESX-5.5.0-qlnativefc-1.1.20.0-1604804

We use it on 5.5 and 6.0u1 hosts.
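In case it helps, the install sequence for that bundle would look roughly like the following. The datastore path and bundle file name are just placeholders; adjust them to wherever you uploaded the offline bundle.

# Check which qlnativefc VIB is currently installed
esxcli software vib list | grep qlnativefc

# Enter maintenance mode, install the offline bundle, then reboot
esxcli system maintenanceMode set --enable true
esxcli software vib install -d /vmfs/volumes/datastore1/VMW-ESX-5.5.0-qlnativefc-1.1.20.0-1604804.zip
reboot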

vExpert '16, VCAP-DCA, VCAP-DCD
EstherP
Enthusiast

Hello,

I suffered this same problem on HP G7 servers, and after some investigation, the problem turned out to be related to the hpsa driver.

The only hpsa driver version that does NOT cause this behavior is: scsi-hpsa 5.5.0.60-1OEM.550.0.0.1331820

If you have upgraded your ESXi from version 5.1 to version 5.5 or 6.0, you will run into the same problem if you have G7 servers.

What I did was downgrade the driver on the G7 servers running ESXi 5.5 and 6.0, and they are now working properly, without disconnections.

If you have local storage, this can be a big problem; for example, on virtual filers I could see that this problem caused a loss of 2 or 3 pings every time it occurred.

Version without problems: scsi-hpsa   5.5.0.60-1OEM.550.0.0.1331820

Versions with problems: 5.5.0.74,  5.5.0.84 and  6.0.0.114.

If you upgrade ESXi again later on, you will have to repeat the same downgrade operation.
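For anyone who needs to do the downgrade, a rough sketch of the steps is below. The datastore path and VIB file name are only placeholders, and because this is a downgrade the install typically needs --force.

# Check which hpsa VIB is currently installed
esxcli software vib list | grep hpsa

# Enter maintenance mode, force-install the older VIB, then reboot
esxcli system maintenanceMode set --enable true
esxcli software vib install -v /vmfs/volumes/datastore1/scsi-hpsa-5.5.0.60-1OEM.550.0.0.1331820.vib --force
reboot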

Medo060
Contributor

Dear malabelle,

Is the issue solved for you? I have the same hardware and OS version.

With best regards,

malabelle
Enthusiast

Yes, after the driver installation it is good.

vExpert '16, VCAP-DCA, VCAP-DCD
FloRod621
Contributor

I have the same issue with just one UCS blade. We have a total of 6 on the same chassis, all running 5.5.

  • Updated FNIC and ENIC drivers on the host itself.
  • Moved blade to a different slot on the chassis to rule that out.
  • Checked the logs on the host itself and see the following:

No correlator for vob.vmfs.heartbeat.timedout
[vmfsCorrelator] 264959970134us: [esx.problem.vmfs.heartbeat.timedout] 54daccec-748bb3a6-2ac0-0025b500000f Datastore
No correlator for vob.vmfs.heartbeat.timedout
[vmfsCorrelator] 264959970322us: [esx.problem.vmfs.heartbeat.timedout] 54dacc8a-028436f2-3cec-0025b500000f Datastore
No correlator for vob.vmfs.heartbeat.timedout
[vmfsCorrelator] 264960581844us: [esx.problem.vmfs.heartbeat.timedout] 54dacccf-7fd2fbca-b8ac-0025b500000f Datastore
No correlator for vob.vmfs.heartbeat.timedout
[vmfsCorrelator] 264962970092us: [esx.problem.vmfs.heartbeat.timedout] 54dacc75-b097216f-18a6-0025b500000f Datastore
No correlator for vob.vmfs.heartbeat.recovered
[vmfsCorrelator] 264964877110us: [esx.problem.vmfs.heartbeat.recovered] 54daccec-748bb3a6-2ac0-0025b500000 Datastore

  • Working with Cisco support; they confirmed no errors in the NIC logs of the affected host.

I believe it's an issue with the blade itself. I have uploaded tons of logs to Cisco support and am waiting to hear back from them.
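While waiting on Cisco, a quick way to double-check what the blade itself reports (driver versions and the VOB heartbeat events) would be something like the following; the log location is just the 5.5 default.

# Confirm the fnic/enic VIB versions actually installed on this host
esxcli software vib list | grep -i fnic
esxcli software vib list | grep -i enic

# Which driver each NIC and storage adapter is using
esxcli network nic list
esxcli storage core adapter list

# The heartbeat timedout/recovered events from the VOB daemon log
grep -i heartbeat /var/log/vobd.log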
