VMware Cloud Community
ezequielcarson
Enthusiast

Reclaimed heartbeat for volume

Hello,

I'm getting these messages on many hosts of the vSAN cluster.

2014-11-29T01:14:48.834Z cpu13:32888)HBX: 258: Reclaimed heartbeat for volume 54718a58-1fe911f6-9b43-002590f9c358 (588a7154-7605-87d3-9eed-002590f9c358): [Timeout] Offset 3387392

I can ping all the cluster hosts, and the disks are healthy.

Any idea how I can solve this?

Thanks

Ezequiel

3 Replies
CHogan
VMware Employee

Is it always the same host?

Is it always the same volume? 54718a58-1fe911f6-9b43-002590f9c358

You can try examining the stats of the various disks to see whether one of the error counters is incrementing: esxcli storage core device stats get
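A minimal sketch of that check, looping over every device and filtering the error-related counters. Only esxcli storage core device list and esxcli storage core device stats get are the real commands; the loop and grep filter are illustrative:

```shell
# Dump the error counters for every SCSI device on the host. Run it twice
# a few minutes apart and compare - a climbing "Failed" counter points at
# a sick disk or path.
list_devices() {
  # Device IDs are the unindented naa.* lines in "device list" output.
  grep '^naa\.'
}

if command -v esxcli >/dev/null 2>&1; then   # only meaningful on an ESXi host
  for dev in $(esxcli storage core device list | list_devices); do
    echo "== $dev =="
    esxcli storage core device stats get -d "$dev" | grep -i 'failed'
  done
fi
```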

Have you verified that all the hardware components (controller and SSD/flash device) are on the HCL/VCG? Have you checked the driver and firmware levels?

Do you run any other features on the controller, e.g. HP Smart Path or similar? Can you turn these off?

Any significant load when these errors occur? Backup for example?

HTH

Cormac

http://cormachogan.com
ezequielcarson
Enthusiast

This is happening on every host. When it occurs, all vSAN hosts start losing management connectivity to vCenter.

You can log into a host over SSH, but neither esxcli nor vCenter works.

I checked for network issues but found no errors.
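For anyone hitting the same thing: the vSAN network can be spot-checked from each host with vmkping against the other hosts' vSAN vmkernel IPs. A minimal sketch, where the vmk2 interface name and peer IPs are placeholders, not values from this thread:

```shell
# Ping each peer's vSAN vmkernel IP from this host's vSAN vmkernel
# interface. vmk2 and the peer IPs are placeholders - substitute the
# values from your own cluster.
PEERS="10.0.0.11 10.0.0.12 10.0.0.13"
if command -v vmkping >/dev/null 2>&1; then   # only meaningful on an ESXi host
  for peer in $PEERS; do
    vmkping -I vmk2 -c 3 "$peer" || echo "vSAN ping to $peer FAILED"
  done
fi
```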

I will check the features on the controller.

virtualworld199
Contributor

This event indicates that the ESX host's connectivity to the volume (for which this event was generated) has degraded, because the host was unable to renew its heartbeat for a period of approximately 16 seconds (the VMFS lock-breaking lease timeout). After the periodic heartbeat renewal fails, VMFS declares that the heartbeat to the volume has timed out and suspends all I/O activity on the device until connectivity is restored or the device is declared inoperable.

There are two components to this:

  • Heartbeat Interval = 3 Seconds

  • Heartbeat lease wait timeout = 16 Seconds
A host indicates its liveness by periodically (every 3 seconds) performing I/O to its heartbeat on a given volume. Therefore, if no activity is seen on the host's heartbeat slot for a period of time, then we can conclude that the host has lost connectivity to the volume. This wait time is a little over 5 heartbeat intervals or 16 seconds to be precise.
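The relationship between the two constants above can be sketched as simple shell arithmetic (illustrative only):

```shell
# Illustrative arithmetic only: the two timing constants described above.
HB_INTERVAL=3    # seconds between heartbeat renewals
HB_TIMEOUT=16    # lease timeout before the heartbeat is declared lost
# Number of whole renewal intervals that fit inside the lease - the
# "a little over 5 heartbeat intervals" from the text.
echo $(( HB_TIMEOUT / HB_INTERVAL ))
```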

Example

If an ESX host has mounted a volume san-lun-100 from device naa.60060160b4111600826120bae2e3dd11:1 and loses connectivity (due to a cable pull, disk array failure, and so on) to the device for a period exceeding 16 seconds, the following error message appears:

Lost access to volume 496befed-1c79c817-6beb-001ec9b60619 (san-lun-100) due to connectivity issues.  Recovery attempt is in progress and outcome will be reported shortly.

Impact

All I/O and metadata operations to the specific volume, whether from the COS, the user interface (vSphere Client), or virtual machines, are internally queued and retried for a period of time. If connectivity to the volume or storage device is not restored within that period, those operations fail. This can affect already running virtual machines as well as new virtual machine power-on operations.

 

Solution

To resolve this issue:

  1. Connect to the vCenter Server using vSphere Client.
  2. Select the Storage View tab to map the HBA (Host Bus Adapter) associated with the affected VMFS volume.

  3. Follow the steps provided in Troubleshooting fibre channel storage connectivity (1003680) to identify and resolve the path inconsistencies to the LUN.

  4. If connections are restored, VMFS automatically recovers the heartbeat on the volume and continues the operation.

To resolve this issue using the service console:

  1. Connect to the ESX host’s service console.
  2. Run the following commands:
    1. Query VMFS datastore properties. 

      Example:

      # vmkfstools -P san-lun-100
      File system label (if any): san-lun-100
      Mode: public
      Capacity 80262201344 (76544 file blocks * 1048576), 36768317440 (35065 blocks) avail
      UUID: 49767b15-1f252bd1-1e57-00215aaf0626
      Partitions spanned (on "lvm"): naa.60060160b4111600826120bae2e3dd11:1
    2. Use esxcfg-mpath with the NAA ID of the LUN (Logical Unit Number) from the output above to identify the state of all paths to the affected LUN.

      Example:

      # esxcfg-mpath -b -d naa.60060160b4111600826120bae2e3dd11
      naa.60060160b4111600826120bae2e3dd11 : DGC Fibre Channel Disk (naa.60060160b4111600826120bae2e3dd11) vmhba0:C0:T0:L0 LUN:0 state:active fc Adapter:
      WWNN: 20:00:00:00:c9:7d:6c:e0 WWPN: 10:00:00:00:c9:7d:6c:e0  Target: WWNN: 50:06:01:60:b0:22:1f:dd WWPN: 50:06:01:60:30:22:1f:dd vmhba0:C0:T1:L0 LUN:0 state:standby fc Adapter:
      WWNN: 20:00:00:00:c9:7d:6c:e0 WWPN: 10:00:00:00:c9:7d:6c:e0  Target: WWNN: 50:06:01:60:b0:22:1f:dd WWPN: 50:06:01:68:30:22:1f:dd
  3. Follow the steps provided in Troubleshooting fibre channel storage connectivity (1003680) to identify and resolve the path inconsistencies to the LUN.
  4. If connections are restored, VMFS automatically recovers the heartbeat on the volume and continues the operation.
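The two service-console steps above can be chained into one sketch. Only vmkfstools -P and esxcfg-mpath -b -d are the real commands; the extract_naa helper and the volume name san-lun-100 (from the example above) are illustrative:

```shell
# Map a VMFS volume label to its backing device, then show the path states.
extract_naa() {
  # Pull the naa.<id> out of the "Partitions spanned" line of vmkfstools -P.
  sed -n 's/.*\(naa\.[0-9a-f]*\):[0-9][0-9]*.*/\1/p'
}

if command -v vmkfstools >/dev/null 2>&1; then   # ESX/ESXi host only
  DEV=$(vmkfstools -P /vmfs/volumes/san-lun-100 | extract_naa)
  echo "Backing device: $DEV"
  esxcfg-mpath -b -d "$DEV"
fi
```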

Note: For additional information, see Troubleshooting LUN connectivity issues on ESXi/ESX hosts (1003

