ManivelR
Hot Shot

VSAN data store capacity

Hi Team,

I have a doubt about vSAN datastore free space.

My setup is as follows:

6 ESXi hosts with all-flash vSAN ---> each host has 2 * 1 TB (cache disks) and 6 * 2 TB (capacity disks).

Total ---> 12 * 1 TB cache disks (across all 6 ESXi hosts) ----> 12 TB

Total ---> 36 * 2 TB capacity disks (across all 6 ESXi hosts) ----> 72 TB

Enabled dedupe and compression.

Storage policy is RAID-6 with FTT=2.

One VCSA appliance (small) is running with 350 GB of disk on this vSAN datastore. No other VMs are running.

I don't know why the used space is showing as 6.18 TB. Could someone please clarify?

pastedImage_1.png

Does any other rule need to be added here?

pastedImage_2.png

Thanks,

Manivel R

14 Replies
TheBobkin
Champion

Hello Manivel,

You have the answer highlighted in your screenshot: it is system overhead using that space, aside from VM usage - likely the fsOverhead for dedupe, which is all allocated up-front based on the datastore capacity. This will of course become a lower proportion (though not a lower amount) relative to VM usage as the datastore fills. How this is allocated changed in the latest patches (6.7 U1/6.5 P03):

VMware Knowledge Base
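
As a rough sanity check on the scale (using only the numbers in this thread - the exact overhead percentage varies by build and configuration, so treat this as illustrative):

6.18 TB used / ~72 TB raw capacity ≈ 8.6% of the datastore consumed before any meaningful VM data lands on it.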

Also, where you are looking is not the best place for monitoring the vsanDatastore usage breakdown; that would be:

Cluster > Monitor > vSAN > Capacity

Monitor vSAN Capacity

Additionally, just because you only have one VM registered/running doesn't mean that is the only VM data stored on this datastore; it is pretty clear from your screenshot that there are multiple TB of data in use. This can be checked easily via the datastore browser or RVC (e.g. vsan.obj_status_report -t).
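
For reference, a minimal RVC session for this check might look like the following (a sketch - the datacenter and cluster names are placeholders for your own):

# rvc administrator@vsphere.local@localhost
> cd /localhost/YourDatacenter/computers/YourCluster
> vsan.obj_status_report -t .

This lists every object on the vsanDatastore grouped by VM, plus any unassociated objects, so you can see exactly what is consuming space.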

Bob

ManivelR
Hot Shot

Hi Bob,

Thanks for your response. I set up this environment only today and currently have only one VM.

I'm using a VCSA, which is also thin provisioned. The VCSA size is around 35 GB as of now.

I just checked through the HTML5 client and got the details.

pastedImage_0.png

pastedImage_1.png

pastedImage_2.png

I'm using vSAN version 6.7.0 Update 1.

I have only one final doubt. I have created a storage policy without "Number of disk stripes per object". Please suggest whether I need to include this or not.

pastedImage_3.png

Thanks,

Manivel R

TheBobkin
Champion

Hello Manivel,

Please confirm the exact build version of the vCSA (e.g. the 6.7 U1 vCSA is build 10244745) and of the hosts in the cluster (check them all, not just one) - potentially fsOverhead pre-allocated more in previous builds, and/or it is just going to take a relatively large chunk initially.

Sorry, but it says you would need 7.62 TB without dedupe (2.64 TB used + 4.98 TB savings), so either you have something misconfigured and it is reporting incorrectly, or you actually have multiple TB of data used here. This might not be VMs - things like ISOs count too - so please check this properly via the methods I described above.
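
For scale, a rough back-of-the-envelope (assuming the vCSA really has only ~35 GB written, and that a RAID-6 policy stores roughly 1.5x the written data as raw capacity):

35 GB written x 1.5 (RAID-6 overhead) = ~52.5 GB raw

That is nowhere near multiple TB, even with a second copy of the VM and dedupe metadata on top.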

It looks like you are still viewing this from the page with less information; read what I said above and the linked documentation.

"I have created a storage policy without "Number of disks stripes per object". Please suggest whether i need to include this or not ?"

Whether you want to stripe sub-components more than the default (SW=1) is your choice as administrator of the environment; striping will increase the number of (sub-)components but can potentially increase throughput.
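
As a rough illustration of the component math (assuming standard vSAN RAID-6 erasure coding, i.e. 4 data + 2 parity):

SW=1: each object is laid out as 6 components (4 data + 2 parity).
SW=2: each of those is split across two stripes, so roughly 12 components per object.

More stripes can spread I/O across more capacity devices, at the cost of a higher component count against the per-host limits.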

Bob

ManivelR
Hot Shot

Hi Bob,

My VCSA version is:

pastedImage_0.png

ESXi version is:

pastedImage_1.png

RVC output:

pastedImage_0.png

Thanks,

Manivel R

TheBobkin
Champion

Hello Manivel,

Versions are fine and compatible.

However, you clearly have more than just one vCSA on this vsanDatastore, as you have 30 objects there. With the same RVC command, use the -t switch to tell what the objects are, as I said above. If a lot of them are unassociated then you will need to query them individually, e.g. >vsan.object_info <pathToCluster> <ObjectUUID>, or via the CLI on a host with # esxcli vsan debug object list
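
For a quick count from any host's CLI, something like this should work (illustrative - the exact label in the output may differ slightly by build):

# esxcli vsan debug object list | grep -c "Object UUID"

That counts one line per object, which you can then compare against the 30 objects RVC reported.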

Bob

ManivelR
Hot Shot

Hi Bob,

Thank you. This is the output. It indicates one VCSA clone is there (offline). I forgot to tell you about this clone earlier; sorry for that.

Total v1 objects: 0
Total v2 objects: 0
Total v2.5 objects: 0
Total v3 objects: 0
Total v5 objects: 0
Total v6 objects: 0
Total v7 objects: 30

+-----------------------------------------------------------------------------------------------+---------+---------------------------+
| VM/Object                                                                                     | objects | num healthy / total comps |
+-----------------------------------------------------------------------------------------------+---------+---------------------------+
| VC-01                                                                       | 15      |                           |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01.vmx           |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01.vmdk          |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_1.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_2.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_3.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_4.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_5.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_6.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_7.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_8.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_9.vmdk        |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_10.vmdk       |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_11.vmdk       |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01_12.vmdk       |         | 6/6                       |
|    [vsanDatastore] 65ee6f5c-ff22-3854-43d3-90b11c5954de/VC-01-70385002.vswp |         | 6/6                       |
| vc clone-feb 22                                                                               | 14      |                           |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22.vmx                   |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22.vmdk                  |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_1.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_2.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_3.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_4.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_5.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_6.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_7.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_8.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_9.vmdk                |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_10.vmdk               |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_11.vmdk               |         | 6/6                       |
|    [vsanDatastore] 700a705c-c75b-094a-1760-d4ae528879d5/vc clone-feb 22_12.vmdk               |         | 6/6                       |
+-----------------------------------------------------------------------------------------------+---------+---------------------------+
| Unassociated objects                                                                          |         |                           |
|    a0f26f5c-d4cb-65f6-fb9d-d4ae52886942                                                       |         | 3/3                       |
+-----------------------------------------------------------------------------------------------+---------+---------------------------+

+------------------------------------------------------------------+
| Legend: * = all unhealthy comps were deleted (disks present)     |
|         - = some unhealthy comps deleted, some not or can't tell |
|         no symbol = We cannot conclude any comps were deleted    |
+------------------------------------------------------------------+

Thanks,

Manivel R

TheBobkin
Champion

Hello Manivel,

Even with a clone of the vCSA, the VM usage doesn't add up to anywhere near what is showing there (provided they are actually only ~350 GB each).

Can you check the following, connected via SSH to any host in the cluster:

# cd /vmfs/volumes/vsanDatastore (assuming 'vsanDatastore', if you have left the default name)

# du -hd 2 (per-namespace usage, two directory levels deep)

# df -h (overall datastore capacity and usage)

By any chance, was the vSAN cluster created while the hosts were on 6.7 U1 but vCenter was on 6.7 GA (or any pre-U1 build)? I ask as I have seen this cause many weird behaviours.

Can you attach (or send me, if you don't want it public) the output from # esxcli vsan debug object list?

Bob

ManivelR
Hot Shot

Hi Bobkin,

I sent it to you. Please check and share your inputs.

Thank you,

Manivel RR

TheBobkin
Champion

Hello Manivel,

All looks normal there, e.g. no unaccounted-for namespaces (just the vC, the vC-clone, and vsan.stats) and none containing anything unexpected.

Was vSAN ever installed on these nodes previously? The only other thing I can think to check would be whether there are leftover LSOM-Objects (e.g. from performance or other test Objects) or node/datastore entries that are being referenced incorrectly. This should give an indication of that (e.g. if there are far more LSOM-Objects than would make sense from the Objects in inventory):

# cmmds-tool find -f python | grep "\"type\"" | sort | uniq -c
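
On a healthy cluster that one-liner just prints a count per CMMDS entry type, e.g. (made-up counts, and the exact formatting may vary by build):

      6 "type": "HOSTNAME",
     33 "type": "DOM_OBJECT",
    720 "type": "LSOM_OBJECT",

Each DOM_OBJECT roughly corresponds to one vSAN object, and each is backed by multiple LSOM_OBJECT components on the capacity tier, so an LSOM count wildly out of proportion to the DOM count would be suspicious.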

If that doesn't give more insight then you should open an SR with GSS for further analysis.

Bob

ManivelR
Hot Shot

Hi Bob,

Thanks for the information. I will raise a case with VMware on this.

In December 2018, we first tested vSAN using three ESXi servers, and after testing we introduced another three ESXi servers.

i.e. our prod setup will be 6 ESXi servers for vSAN.

Before implementing vSAN, I completely wiped out the vSAN data (on the first 3 ESXi servers) and then set up vSAN with 6 ESXi servers.

The below values are the same on all 6 ESXi servers.

pastedImage_0.png

Thanks,

Manivel R

TheBobkin
Champion

Hello Manivel,

Interesting - there are 33 DOM_OBJECTs listed there (vs. the 30 seen in RVC). Can you post the output from any node of:

# cmmds-tool find -t DOM_OBJECT -f json
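
If you want to narrow down the three extras, one approach (a sketch - treat the grep pattern as illustrative, since the JSON field layout can vary by build) is to pull out just the object UUIDs and compare them against what RVC shows:

# cmmds-tool find -t DOM_OBJECT -f json | grep -o '"uuid": "[^"]*"'

Any UUID present in CMMDS but absent from the RVC report is a candidate leftover object.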

Bob

ManivelR
Hot Shot

Hi Bob,

Please find the output of DOM objects.

Thanks,

Manivel RR

ManivelR
Hot Shot

Hi Bob,

It's surprising: in RVC, it is showing different values. I logged a case with VMware last Monday and am waiting for a response.

type: vsan
url: ds:///vmfs/volumes/vsan:523a90f10336da7b-df2f275d8f869fd4/
multipleHostAccess: true
capacity: 73719.24GB
free space: 66769.88GB

Thanks,

Manivel RR

ManivelR
Hot Shot

Hi Bob,

I got an update from the VMware support engineer.

First, they informed me that this is a known issue. Our version is vSphere 6.7.0 Update 1.

Basically, what is happening is that for some reason, when vSAN detects it is barely used or empty, it shows a mismatch in the space used. It could show (like in your case) that a lot of space in the datastore is being used even if it is empty or has just 6-7 VMs.

But that does not mean you actually have an issue, because you do have all the space available; it is just the web client showing the wrong information.

The solution is just to keep filling the space in the cluster and eventually the utilization graph will start to level out and reflect the percentages more accurately.

Thanks,

Manivel R
