VMware Cloud Community
JDMils_Interact
Enthusiast

Expand vSAN datastore

I have three hosts and they are set up as below. My vSAN datastore is showing 10GB free, so I increased the size of the LUNs presented to the hosts, hoping that the vSAN datastore would increase as well, but it hasn't so far and I cannot figure out why not.

Host01 (VMware ESXi, 6.5.0, 9298722)

-- mpx.vmhba1:C0:T0:L0,disk,10.00 GB,HDD (Boot disk)

-- mpx.vmhba1:C0:T3:L0,disk,40.00 GB,Attached,HDD (was 20GB and I increased it to 40GB)

-- mpx.vmhba1:C0:T2:L0,disk,20.00 GB,Attached,HDD

-- mpx.vmhba1:C0:T1:L0,disk,5.00 GB,Attached,Flash (cache disk)

Host02 (VMware ESXi, 6.5.0, 9298722)

-- mpx.vmhba1:C0:T0:L0,disk,10.00 GB,HDD (Boot disk)

-- mpx.vmhba1:C0:T3:L0,disk,40.00 GB,Attached,HDD (was 20GB and I increased it to 40GB)

-- mpx.vmhba1:C0:T2:L0,disk,20.00 GB,Attached,HDD

-- mpx.vmhba1:C0:T1:L0,disk,5.00 GB,Attached,Flash (cache disk)

Host03 (VMware ESXi, 6.5.0, 9298722)

-- mpx.vmhba1:C0:T0:L0,disk,10.00 GB,HDD (Boot disk)

-- mpx.vmhba1:C0:T3:L0,disk,40.00 GB,Attached,HDD (was 20GB and I increased it to 40GB)

-- mpx.vmhba1:C0:T2:L0,disk,20.00 GB,Attached,HDD

-- mpx.vmhba1:C0:T1:L0,disk,5.00 GB,Attached,Flash (cache disk)

vSAN Disk Management shows the disks and the correct sizing:

Disk Model/Serial Number,Disk Tier,Total Capacity,vSAN Health Status,Disk Distribution/Host

"VMware   Virtual disk    , 5.00 GB disks",Cache,15.00 GB,Healthy,1 disk on 3 hosts

                 Local VMware Disk (mpx.vmhba0:C0:T1:L0),Cache,5.00 GB,Healthy,Host03

                 Local VMware Disk (mpx.vmhba0:C0:T1:L0),Cache,5.00 GB,Healthy,Host02

                 Local VMware Disk (mpx.vmhba1:C0:T1:L0),Cache,5.00 GB,Healthy,Host01

"VMware   Virtual disk    , 40.00 GB disks",Capacity,120.00 GB,Healthy,1 disk on 3 hosts

                 Local VMware Disk (mpx.vmhba0:C0:T3:L0),Capacity,40.00 GB,Healthy,Host03

                 Local VMware Disk (mpx.vmhba0:C0:T3:L0),Capacity,40.00 GB,Healthy,Host02

                 Local VMware Disk (mpx.vmhba1:C0:T3:L0),Capacity,40.00 GB,Healthy,Host01

"VMware   Virtual disk    , 20.00 GB disks",Capacity,60.00 GB,Healthy,1 disk on 3 hosts

                 Local VMware Disk (mpx.vmhba0:C0:T2:L0),Capacity,20.00 GB,Healthy,Host03

                 Local VMware Disk (mpx.vmhba0:C0:T2:L0),Capacity,20.00 GB,Healthy,Host02

                 Local VMware Disk (mpx.vmhba1:C0:T2:L0),Capacity,20.00 GB,Healthy,Host01

However the vSAN disk capacity has not changed:

Capacity: 82.45GB

Used: 66.41GB

Free: 16.04GB

If I understand correctly, my vSAN datastore should have the sum of all local disks on all hosts, which should now equate to 15GB + 120GB + 60GB = 195GB? How do I get the rest of the disk space to show in the vSAN?


5 Replies
vpradeep01
VMware Employee

Hello JDMils_Interactive

"My vSAN datastore is showing 10GB free, so I increased the size of the LUNs presented to the hosts, hoping that the vSAN datastore would increase as well."

This will not work. Once a disk is claimed by vSAN under Disk Management, further expansion of the underlying disk may not take effect - I believe the virsto FS partition may not allow this for a drive in the capacity tier.
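If you want to see this from the shell, roughly the following should show it (a rough sketch - the device path is just the re-sized capacity disk from your listing, and the exact output varies by build):

# Show the partition table of the re-sized capacity device; once vSAN has
# claimed a disk it lays down its own partitions, and these do not grow
# when the underlying LUN/virtual disk is expanded.
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1:C0:T3:L0

# List the devices vSAN has claimed on this host and their reported size.
esxcli vsan storage list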

I assume this is your nested environment. If yes, can you add a new LUN/HD instead?

TheBobkin
Champion (Accepted Solution)

Hello JDMils_Interactive

Welcome to Communities.

"If I understand correctly, my vSAN datstore should have the sum of all local disks on all hosts which should now equate to 15GB + 120GB + 60GB = 195GB? "

Cache-tier doesn't add to capacity, so subtract 15GB. You are then still missing roughly a third of your capacity, so probably one of your hosts (or some disks) is not contributing - check that the cluster is fully formed, that no hosts are in Maintenance Mode (nor in vSAN decom state 6), and that all disks are in CMMDS (start with #localcli vsan cluster get).
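Roughly the per-host checks I would run from the ESXi shell (a sketch - field names can differ slightly between builds):

# Cluster membership: the node should be enabled, in state MASTER/BACKUP/AGENT,
# and the Sub-Cluster Member Count should show all 3 hosts.
esxcli vsan cluster get

# Claimed devices: every capacity disk should be listed and show "In CMMDS: true".
esxcli vsan storage list

# Quick overview of all local devices and whether they are in use by /
# eligible for vSAN.
vdq -q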

"How do I get the rest of the disk space to show in the vSAN?"

Interestingly, if you expand a device in a nested vSAN cluster (testing in VMware Workstation here), it will show the increased size in Disk Management but it doesn't increase the size of the vsanDatastore - this obviously wouldn't apply to a real cluster, since whole unpartitioned devices have to be available for vSAN use and real physical devices can't be expanded.

To use the extended space all you have to do is remove the re-sized disks ('Evacuate all data' or 'Ensure data accessibility') and add them back to the disk-groups.
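If you prefer the command line over the Disk Management UI, something along these lines should do it, one host at a time (a sketch - the device names are the Host01 ones from your listing, and the evacuation-mode flag and its values are worth confirming against esxcli vsan storage remove --help on your build):

# Remove the re-sized 40GB capacity device from its disk-group,
# evacuating the data off it first.
esxcli vsan storage remove -d mpx.vmhba1:C0:T3:L0 -m evacuateAllData

# Add it back to the disk-group fronted by the 5GB cache device;
# vSAN re-partitions it and picks up the full 40GB.
esxcli vsan storage add -s mpx.vmhba1:C0:T1:L0 -d mpx.vmhba1:C0:T3:L0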

Bob

JDMils_Interact
Enthusiast
Enthusiast
Jump to solution

Hi,

You guys are right, I'm running a test environment at home on my Brix GB-Bri5H-8250. I've set up 3 nested hosts and it's not the fastest infrastructure base, but with 32GB RAM & 1TB HDD, it's an economical test bed. All hosts are out of maintenance mode and working OK.

Thanks for the help, I will evacuate the disks from the vSAN cluster one at a time and add them back in. I thus assume that the individual disks from each host are treated like RAIDed storage and when one is removed, the others take the load. I've noticed that when I put one of the hosts in maintenance mode it takes FOREVER, and I can see a lot of disk activity on the vSAN datastore - that must be what is happening there.

Thanks again.

JDMils_Interact
Enthusiast

"this obviously wouldn't apply to a real cluster, since whole unpartitioned devices have to be available for vSAN use and real physical devices can't be expanded."

I'm interested in this quote! What's the difference between a "real" cluster and a nested cluster? I would have thought they'd be the same thing - how would the hosts in a nested environment know they are virtual, just like the virtual Windows & Linux machines which we normally run on VMware?

In my previous work environment we only used fibre-connected LUNs from an EMC SAN; the LUNs could easily be expanded, and then in VMware we were able to expand the datastore using the underlying free space. In my new job they have vSAN in some clusters, and I am trying to get up to speed with the setup via my home lab.

TheBobkin
Champion

Hello JDMils_Interactive,

"I've setup 3 nested hosts and it's not the fastest infrastructure base, but with 32GB RAM & 1TB HDD, it's an economical test bed"

Consider some (relatively small) SSD/M.2 NVMe devices - the difference between these and a HDD in a nested lab is ridiculous.

"All hosts are out of maintenance mode and working OK."

Please do check the vSAN decom state, as it is technically feasible to get a host out of ESXi MM but not out of vSAN MM (esxcli vsan cluster get will tell you this info in later versions; otherwise use the cmmds-tool NODE_DECOM_STATE output).
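On 6.5, where esxcli may not print the decom state yet, something like this should show it per node (a sketch - check the available output format options with cmmds-tool on your build):

# Dump the per-node decommission state from CMMDS; decomState 0 means not
# decommissioned, 6 means the node is still in vSAN maintenance mode.
cmmds-tool find -t NODE_DECOM_STATE -f json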

"I thus assume that the individual disks from each host are treated like RAIDed storage and when one is removed, the others take the load."

Yes, with the default Storage Policy data-components are stored as RAID1, with 2 copies of the data residing on separate Fault Domains (hosts, here) - thus if you remove a disk with the 'Ensure Data Accessibility' option, it basically just checks that the other copy of what is on that disk is available elsewhere (and current) and runs off that single data-replica until you rebuild the second copy of the data. It will work like this for all the disks on one host, but when you start doing the same on the next host, where at least some of the data is now the only copy still available, vSAN will have to rebuild that data somewhere else before it can remove the disk. When you remove a disk with 'Full Data Evacuation' it doesn't just run off one copy of the data - it actually starts cloning a 3rd copy of the data (assuming FTT=1) to replace the 2nd copy that you are about to decommission.

Similar goes for placing hosts in MM with the different available options.
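One practical note on doing this disk-by-disk: before you pull the next disk (or put the next host in MM), let the rebuilds from the previous step finish. Something along these lines should show it on 6.5 (a sketch - the exact sub-commands are worth double-checking with esxcli vsan debug on your build):

# Components still resyncing/rebuilding - wait for this to drain to zero
# before decommissioning the next disk or host.
esxcli vsan debug resync summary get

# Overall object health - objects should be back to healthy (no reduced
# availability) before the next removal.
esxcli vsan debug object health summary get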

"I'm interested in this quote! What's the difference between a "real" cluster and a nested cluster?"

Basically, there are things you can do in a nested lab environment that wouldn't be possible in real clusters (such as extending the size of a physical disk), and these can result in disparities in what the GUI tells you - because they are not possible in the real world, ESXi/vCenter is not coded to expect them.

"I would have thought they be the same thing as, how would the hosts in a nested environment know they are virtual just like the virtual Windows & Linux machines which we normally run on VMware?"

They are directly exposed to the underlying hardware, e.g. your nested ESXi knows that its processor is whatever is in your physical box, same goes for (non-RAID) disks.

"With my previous-work environment, we only used fibre connected LUNs from an EMC SAN and the LUNs can easily be expanded and then in VMware we were able to expand the datastore using the underlying free space"

Sure, you can carve more space for a LUN, but a LUN is not backed by a single device - it is a RAID across many devices. vSAN cache and capacity-tier devices must be presented as single, entire devices, which then cumulatively make up the vsanDatastore.

"but in my new job, they have vSAN in some clusters and I am trying to get up-to-speed with the setup via my home lab."

If your homelab is slow and/or you just want to get frisky making and breaking stuff, then HOL (VMware Hands-on Labs) is the best place for this in my opinion.

Bob