vSAN 6.5 configuration (this is an environment for validation):
3 ESXi hosts with 64 GB RAM each, 3 all-flash disk groups:
Group 1:
- 372,61 GB cache drive
- 1,82 TB capacity drive
- 1,82 TB capacity drive
Group 2:
- 1,82 TB cache drive
- 419,19 GB capacity drive
- 419,19 GB capacity drive
Group 3:
- 372,61 GB cache drive
- 349,32 GB capacity drive
- 349,32 GB capacity drive
Which gives me a 5,09 TB vsanDatastore.
I am adding a new hard disk to the VM (I have only one VM).
Size is 1 TB.
Virtual SAN storage consumption: 2TB disk size on datastore
VM storage policy: Virtual SAN Default Storage Policy (number of disk stripes per object: 2)
Location is vsanDatastore.
Disk Provisioning: Thick, eager zeroed
Disk Mode: Independent Persistent
When I try to add it I get "Out of resources". Why? I thought a 5,09 TB datastore would be enough.
I am new to vSAN, so any help is appreciated.
Yes, as the error says, you need to add more capacity on the nodes:
2017-08-17T12:13:56.707Z 72374 CLOM_Diagnose: There are currently 4 usable disks for the operation. This operation requires 3 more disks with at least 219906310144 bytes free s
Unlike normal storage allocation, vSAN divides the object into multiple components. Each component can grow up to 255 GB, and the underlying capacity disk must have enough free space to accommodate that component. If no disk has enough free space for a component, you will not be able to create the disk and it will throw an error; the total free space across the datastore doesn't matter.
In your case, you are trying to create a 1 TB disk, which is split into roughly four or five components of up to 255 GB each, so you need at least that many capacity disks with around 255 GB of free space. If you do not have enough disks with that much free space, it will throw an error.
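As a sanity check on the arithmetic, here is a small Python sketch (the 255 GB maximum component size is the vSAN limit mentioned above; the rest is plain division). A 1 TB (1024 GB) object splits into ceil(1024/255) = 5 components of 204.8 GB each, which roughly matches the 219906310144 bytes (~204.8 GB) of per-disk free space CLOM asks for in the error above:

```python
import math

GB = 2**30
MAX_COMPONENT = 255 * GB  # vSAN maximum component size

def min_components(object_bytes):
    """Minimum number of components a vSAN object is split into."""
    return math.ceil(object_bytes / MAX_COMPONENT)

one_tb = 1024 * GB
n = min_components(one_tb)
per_component_gb = one_tb / n / GB
print(n, per_component_gb)  # → 5 components of 204.8 GB each
```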
To find the actual usage you can use the RVC tool to check the disk size and usage; for the commands, refer to the guide. You may use this command to fetch the details:
vsan.disks_stats -h
The smallest drive I have is 349,32 GB, so it should be enough. All drives are empty.
Is there some option I need to turn on (that is not turned on by default)?
Nothing I can think of as such. Can you check the clomd log to see what causes this error?
Log in to the ESXi host and check /var/log/clomd.log.
Yep, I found:
2017-08-17T12:13:56.707Z 72374 Total nodes:3 Total disks:6 disksUsable:7 phyDisksUsable:4 disksNeeded:10 disks decommisionning:0 disks with Node decommisionning:0 unhealthy dis
2017-08-17T12:13:56.707Z 72374 Total fds:3 fdsUsable:0 fdsNeeded:0 capacityUsable: 0 capacityNeeded: 0
2017-08-17T12:13:56.707Z 72374 CLOM_Diagnose: There are currently 4 usable disks for the operation. This operation requires 3 more disks with at least 219906310144 bytes free s
2017-08-17T12:13:56.707Z 72374 Remaining 2 disks not usable because:
2017-08-17T12:13:56.707Z 72374 2 - Insufficient space for data/cache reservation.
2017-08-17T12:13:56.707Z 72374 0 - Maintenance mode or unhealthy disks.
2017-08-17T12:13:56.707Z 72374 0 - Disk-version or storage-type mismatch.
2017-08-17T12:13:56.707Z 72374 0 - Max component count reached.
2017-08-17T12:13:56.707Z 72374 0 - In unusable fault-domains due to policy constraints.
2017-08-17T12:13:56.707Z 72374 0 - In witness node.
2017-08-17T12:13:56.707Z 72374 Failed to generate configuration: Underlying device has no free space
Does that mean the disk group is not big enough?
Hi Kendzi87,
You state in your post that the VM is configured with the Virtual SAN Default Storage Policy (which is FTT=1 RAID-1, Stripe Width = 1, Object Space Reservation = 0%). However, you also say that you have a stripe width of 2 on the VM. Did you modify the Default Policy to add the stripe width?
When a stripe width is manually specified, vSAN must put the stripes on different physical drives to provide the performance you are looking for in the policy. As the maximum component size is 255GB, your 1TB disk object must be divided into smaller chunks as Sureshkumar mentioned. 4x255GB is not quite 1TB, so you would actually need more, smaller components. But with the addition of your defined stripe width policy, we would need to take 1/2 of those components and ensure that they are split on different physical drives. There will be another complete mirror set on another node, and then finally the witness components will need to go another node.
To satisfy the Failure to Tolerate in a 3-node cluster, one full mirror set must be contained in one fault domain (i.e. host), the second mirror set on a different fault domain, and the witness components on the 3rd fault domain.
Taking your biggest disk group (Group 1), you have enough capacity to create the 512 GB of components (stripe 1 of your 1 TB disk) on one of the capacity drives, and the other drive will accommodate the other stripe, leaving you roughly 1.3 TB of capacity free on each drive. However, the second mirror set required to give you FTT=1 must now be created on another host. Given that your next biggest host/disk group only has 419.19 GB capacity disks, there is nowhere large enough on that disk group to create the required 2x512 GB component sets.
I hope that makes sense.
In fact, regardless of the defined stripe width, neither of the remaining 2 nodes has sufficient capacity to create the components.
This is one of the reasons we highly recommend balanced disk groups - i.e. each host should have disk groups of the same size backed by the same size cache and capacity devices.
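The placement reasoning above can be sketched as a quick feasibility check. This is a deliberately simplified model, assuming each of the two 512 GB stripes of one mirror set must land on a different capacity disk within a host; it ignores witness components, metadata overhead, and the further 255 GB component split within each stripe:

```python
def can_place_mirror(object_gb, stripe_width, capacity_disks_gb):
    """Check whether one mirror set of an object fits in a disk group.

    Simplified model: the object is split into stripe_width stripes,
    and each stripe must land on a different capacity disk with
    enough free space for the whole stripe.
    """
    if stripe_width > len(capacity_disks_gb):
        return False
    stripe_gb = object_gb / stripe_width
    # Greedy: try the largest disks first.
    disks = sorted(capacity_disks_gb, reverse=True)
    return all(disks[i] >= stripe_gb for i in range(stripe_width))

# Capacity drives per disk group from the thread (GB, approximate)
groups = {
    "Group 1": [1863, 1863],  # 2 x 1.82 TB
    "Group 2": [419, 419],
    "Group 3": [349, 349],
}

for name, disks in groups.items():
    print(name, can_place_mirror(1024, 2, disks))
# Only Group 1 fits both 512 GB stripes; Groups 2 and 3 do not,
# so the second FTT=1 mirror cannot be placed on either remaining host.
```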
jameseydoyle yes, I modified the default policy. But I see that for my purpose it is not necessary.
Sureshkumar M I modified my disk groups and environment and now the disk is added, thanks for the help.