VMware Cloud Community
iliketea
Contributor

datastores not consumed

Hi,

Seeing a bit of a strange one - we have 3PAR LUNs presented to an ESXi 6.5 HPE blade cluster via FC, and each datastore is presented to all hosts.

They all work fine; however, after a host reboot some of the datastores show 'Not Consumed' as their operational state in Configure > Storage > Storage Devices. This hasn't affected operations so far, as most of them show up connected after reboot and DRS presumably only migrates workloads back onto datastores that are connected.

We only noticed it after bringing hosts back into use; a storage re-scan brings the missing datastores back instantly.

Has anyone seen 'Not Consumed' before, or have any pointers?

I will try to re-create the behaviour in dev today.

Thank you

Update 23/04/2019:

Rebooting a host re-created the issue. Some LUNs have an operational state of 'Attached' but are 'Not Consumed' and do not show up in the list of datastores.

[Screenshot attachment: datastores.png]

Here are the details of one that this happened to:
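(For reference: details in this format can be pulled from the ESXi shell, assuming shell access is enabled, with the NMP device listing, which is presumably where the output below came from:

   # list NMP info for a single device by its NAA ID
   esxcli storage nmp device list -d naa.60002ac00000000000000a100001bdc7

)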

naa.60002ac00000000000000a100001bdc7

   Device Display Name: 3PARdata Fibre Channel Disk (naa.60002ac00000000000000a100001bdc7)

   Storage Array Type: VMW_SATP_ALUA

   Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=257,TPG_state=AO}{TPG_id=258,TPG_state=STBY}}

   Path Selection Policy: VMW_PSP_RR

   Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0; lastPathIndex=16: NumIOsPending=0,numBytesPending=0}

   Path Selection Policy Device Custom Config:

   Working Paths: vmhba1:C0:T12:L35, vmhba1:C0:T8:L35, vmhba1:C0:T9:L35, vmhba1:C0:T13:L35, vmhba1:C0:T11:L35, vmhba1:C0:T15:L35, vmhba1:C0:T14:L35, vmhba1:C0:T10:L35, vmhba0:C0:T8:L35, vmhba0:C0:T12:L35, vmhba0:C0:T9:L35, vmhba0:C0:T13:L35, vmhba0:C0:T11:L35, vmhba0:C0:T14:L35, vmhba0:C0:T15:L35, vmhba0:C0:T10:L35

   Is USB: false

And here's one that is working, from the same host:

naa.60002ac000000000000003240001bdc6

   Device Display Name: 3PARdata Fibre Channel Disk (naa.60002ac000000000000003240001bdc6)

   Storage Array Type: VMW_SATP_ALUA

   Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=258,TPG_state=STBY}{TPG_id=257,TPG_state=AO}}

   Path Selection Policy: VMW_PSP_RR

   Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0; lastPathIndex=3: NumIOsPending=0,numBytesPending=0}

   Path Selection Policy Device Custom Config:

   Working Paths: vmhba1:C0:T0:L23, vmhba1:C0:T7:L23, vmhba0:C0:T0:L23, vmhba1:C0:T4:L23, vmhba1:C0:T1:L23, vmhba1:C0:T2:L23, vmhba1:C0:T6:L23, vmhba1:C0:T3:L23, vmhba1:C0:T5:L23, vmhba0:C0:T1:L23, vmhba0:C0:T5:L23, vmhba0:C0:T4:L23, vmhba0:C0:T7:L23, vmhba0:C0:T2:L23, vmhba0:C0:T6:L23, vmhba0:C0:T3:L23

   Is USB: false

A storage re-scan on the host brings them back.
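If anyone wants to script the workaround, the UI re-scan is roughly equivalent to the following from the ESXi shell (assuming shell access is enabled):

   # rescan all HBAs for new/changed devices
   esxcli storage core adapter rescan --all

   # rescan the discovered devices for VMFS volumes
   vmkfstools -V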

Has anyone got any pointers on what to look into?

3 Replies
Amir49
Contributor

Hi,

I had this issue and finally found that all VMs located on the 'Not Consumed' datastore should be relocated to other datastores; after that, ESXi will automatically remount the datastore.

You will find that ESXi detects the datastores, but because something prevents it from remounting them (sometimes a snapshot, or the datastore coming back with a different LUN number), it shows 'Not Consumed'.

With this command: ls -al /vmfs/volumes

you can see that the symbolic links are broken, but you cannot remove a link manually because the datastore is locked by the VMs on it.
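If the LUN-number/snapshot angle applies, a quick way to check (a sketch, assuming the volumes are being held back as unresolved snapshot copies) is:

   # list VMFS volumes the host refuses to auto-mount (detected as snapshots / changed LUN IDs)
   esxcli storage vmfs snapshot list

   # force-mount one by its datastore label, keeping the existing signature
   esxcli storage vmfs snapshot mount -l <datastore_label>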

Regards,

Amir

johanyung
Contributor

Hi,

I think I have the same problem, but I don't understand how to solve it.

What did you do?

Amir49
Contributor

Hi,

Try relocating all VMs inside the broken datastores to healthy ones; then you will see the ESXi host itself detect the broken symbolic link and repair it. Using the command I mentioned in my last post, you can see which link is broken (red) and which is healthy (blue).

Usually, when the datastore is busy, the ESXi host cannot repair the broken symlink.
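To illustrate, a healthy entry in ls -al /vmfs/volumes is a symlink from the datastore name to its UUID directory (the name and UUID below are made up):

   lrwxr-xr-x    1 root     root    35 Apr 23 2019 Datastore01 -> 58f1c2aa-1b2c3d4e-5f6a-0123456789ab

A broken one points at a UUID directory that no longer resolves, which is what the red colouring indicates.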
