TheBobkin's Posts

@Gaprofitt17, Perhaps you are confusing this with deduplication? Because that is true for deduplication, but not true for encryption.
@CarlPower, Latest Witness is 7.0 U3l, available here: https://customerconnect.vmware.com/downloads/details?downloadGroup=VC70U3O&productId=974#drivers_tools   Witnesses don't get a new package for every ESXi release, so you will need to patch it with a standard VMware ESXi patch to bring it to a later build than that.
@jynx_km What you referenced there is a component, not an object - did you delete the object with UUID 24381365-729a-538f-f3cf-e43d1a2ad266?   If yes, then it is gone and you will need to restore from backups.
@jynx_km, Is this a homelab? If not, then why are you using unsupported consumer-grade disks?   Can you cat the VMxxxxx.vmdk descriptor file to indicate which object is backing the base-disk?   What was the UUID of the object you deleted, and why did you delete it?
@ca439625 What is the model(s) of SSD you have in these servers?
Maintained version will be available here going forward: https://github.com/vmware-labs/hci-benchmark-appliance
@Peymansh, Why are you using a consumer-grade device (Samsung 980 Pro) that is not on the vSAN HCL (for any vSAN usage type) as cache-tier device here?   This is beyond unsupported, you are potentially looking at data-loss here.   You should replace all unsupported devices here with devices that will actually be expected to work for the intended purpose here and restore any lost data from backup.
@efpbe, Are you sure that is the 'vSAN Default Storage Policy' applied to that object? (I think it is still possible to have multiple of these in the SPBM inventory.)   Asking as that object does not have the 'No preference' rule on it - you should change the policy to 'No preference', apply it to all objects, and validate (from 'esxcli vsan debug object list --all' output) that this has been applied to everything (not just what is registered in inventory!) that has the 'Dedup&Compression' rule. Then you should be able to reformat to Compression only (and change the policy to that also, though this is not mandatory).
@lElOUCHE_79, Objects (e.g. .vmdk, .vswp, namespaces) stored on vsanDatastore with the default FTT=1, RAID1 policy are basically stored as two complete copies (e.g. so that one can be unavailable but the data is still accessible from the other replica).   Sorry, but I am unsure what you mean regarding source etc. - are you referring to what happens when migrating objects to vsanDatastore?   This article outlines the basic concepts well: https://core.vmware.com/blog/vsan-objects-and-components-revisited
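To make the replica idea above concrete, here is a small illustrative sketch (not a VMware API - the function name and layout figures are my own assumptions based on general vSAN documentation) of how a mirrored object maps to components: FTT+1 full replicas plus at least one witness component for quorum.

```python
# Hypothetical sketch (not a VMware API): how a RAID1 object with a given
# FTT commonly maps to vSAN components. Mirroring keeps FTT+1 full data
# replicas plus at least one small witness component as a quorum tie-breaker.
def raid1_component_layout(ftt: int) -> dict:
    replicas = ftt + 1          # complete copies of the data
    witnesses = 1               # at least one tie-breaker component
    return {
        "replicas": replicas,
        "witnesses": witnesses,
        "total_components": replicas + witnesses,
    }

print(raid1_component_layout(ftt=1))
# {'replicas': 2, 'witnesses': 1, 'total_components': 3}
```

Real objects can have more components than this (striping, large components split at 255GB, extra witnesses), so treat the numbers as the minimum case.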
@efpbe, Can you please share the layout/policy of this object '5047d264-da13-bbf6-bb93-84160c080880'? Maybe it has a dedupe&compression-only policy: esxcli vsan debug object list -u 5047d264-da13-bbf6-bb93-84160c080880

2023-10-07T08:26:57.657Z No(29) clomd[38181686]: [Originator@6876] CLOMWhatIfRecordResourceAndReplicas: targetPolicy: UFT1/LFT0, capacity-pref=0, currentPolicy: UFT1/LFT0, capacity-pref=0, item.origObjectAddressSpace=10700718080, targetAddress(totalComponentAddressSpace)=10700718080
2023-10-07T08:26:57.658Z No(29) clomd[38181686]: [Originator@6876] CLOM_CheckClusterResourcesForPolicy: Target datastore space efficiency policy is not compatible with current cluster configuration. DedupAndCompression:1, CompressionOnly:0, None:0
2023-10-07T08:26:57.658Z No(29) clomd[38181686]: [Originator@6876] CLOMWhatIfObjectDecom: Couldn't check cluster resources: Failure
2023-10-07T08:26:57.658Z No(29) clomd[38181686]: [Originator@6876] CLOMWhatIfRunDecom: Failed to decommission object 5047d264-da13-bbf6-bb93-84160c080880: Failure, doDeltaDecom = 0.
@MikeSmarz If it is a 3-node cluster and all nodes have Disk-Groups/Storage-Pools then that is sufficient for creating RAID1,FTT=1 objects already and doesn't require any additional node or Witness.
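The host-count claim above can be sketched with a small helper (illustrative only - the function and the figures for erasure coding are assumptions drawn from general vSAN sizing guidance, not queried from a live cluster): mirroring needs 2*FTT+1 hosts, which is why 3 nodes suffice for RAID1,FTT=1.

```python
# Illustrative sketch: minimum host counts commonly cited for vSAN storage
# policies. Mirroring (RAID1) needs 2*FTT+1 hosts so replicas plus witness
# land on distinct hosts; erasure coding needs 4 hosts (RAID5, FTT=1) or
# 6 hosts (RAID6, FTT=2). Figures are assumptions from sizing guidance.
def min_hosts(ftt: int, raid: str) -> int:
    if raid == "RAID1":
        return 2 * ftt + 1      # replicas + witnesses on separate hosts
    if raid == "RAID5":
        return 4                # 3 data + 1 parity, FTT=1 only
    if raid == "RAID6":
        return 6                # 4 data + 2 parity, FTT=2 only
    raise ValueError(f"unknown RAID level: {raid}")

print(min_hosts(1, "RAID1"))  # 3 -> a 3-node cluster suffices for FTT=1 mirroring
```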
@AjayNTalakad "Step 3 rename other 3 HDDs as 2nd HDD as datastore02, 3rd HDD as datastore03, 3rd HDD as datastore04" - what do you mean by "rename"? Are you claiming these as VMFS datastores? If yes, then don't do that: vSAN cannot claim any disk that already has partitions on it (be they VMFS or any other partition type). Deal with the RAID5-on-controller part first - look up the controller model on the vSAN HCL to confirm whether the disks should be presented as passthrough devices or as individual (e.g. 1 physical device per VD) RAID0 VDs.
"1st server 6TBX4 done hardware RAID-5 configuration = got space 16.25TB (installed ESXi 7.0 Update 3)" @AjayNTalakad, never RAID multiple devices on the controller side and present them as one physical disk to be used for vSAN - this is completely unsupported and, used like this, will at some point result in data-loss. Reconfigure these as individual devices immediately (whether as passthrough or as individual RAID0 Virtual Devices depends on which controller is in use and what the vSAN HCL states the disk access mode for it should be).
@IamTHEvilONE, If this is just for a homelab/test and you aren't concerned about the devices being on the HCL, performance testing etc., then drive size or anything like that really doesn't matter. If you just want to test/check basic things, you don't need multiple physical servers - you can instead use a nested ESXi setup (e.g. ESXi running as VMs on any hypervisor, or even on a desktop in VMware Workstation). For even less effort (assuming you don't need specific OSes/images/internet access), you can just spin one up on HOL: https://labs.hol.vmware.com/HOL/catalogs/
@einstein-a-go-g This parameter gets set to 1 (enabled) as part of the vSAN cluster shutdown feature initiated from vCenter - for this to be enabled on nodes basically means it must have been run at some point and either not undone or not undone correctly/fully.   All your objects are healthy and the cluster is functioning normally now, yes?
@einstein-a-go-g DOMPauseAllCCPs should be set to 0 on all nodes; set this on any that are set to 1:   # vsish -e set /config/VSAN/intOpts/DOMPauseAllCCPs 0   After doing that, my bet is everything will be fine here. The Stats primary election state is likely due to the vSAN performance stats object being one of the impacted objects; this should also clear once the above is done.
@TIRJO, If you have a 150GB vmdk stored as FTT=1,RAID1 and fill it completely within the Guest-OS, it will consume 300GB on vsanDatastore.   If you then delete ALL of the data within the Guest-OS, it will still use 300GB on vsanDatastore (you can then fill/empty it any way you like; it obviously can't grow beyond that 300GB usage).   This is because vSAN isn't natively aware of the Guest-OS no longer needing these blocks unless you: 1. Are on a version that can leverage vSAN TRIM/Unmap (added in 6.7 U1, so that looks like a no unless you update - which you really should do anyway), 2. Enable vSAN TRIM/Unmap on the cluster, 3. Power-cycle any VMs that you want to use this feature, and then 4. Run the Guest-OS-specific TRIM/Unmap function within the VM (e.g. fstrim on a Linux OS).   If for whatever reason you cannot/don't want to upgrade to a non-ancient version of ESXi+vSAN and enable and run the above, then you can of course do anything else that would (temporarily) reduce the size of the VM, e.g. SvMotion it off vsanDatastore and back, or move the data to a new vmdk and delete the original.
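The 150GB-becomes-300GB arithmetic above generalises to a simple back-of-envelope formula. A minimal sketch, assuming it ignores witness components, metadata and checksum overhead (the function name and the RAID5/6 multipliers are my own, taken from general vSAN capacity guidance):

```python
# Back-of-envelope sketch of raw vsanDatastore space consumed by a fully
# written vmdk under common policies. Assumption: ignores witness
# components, metadata and checksum overhead.
def raw_consumption_gb(vmdk_gb: float, raid: str, ftt: int = 1) -> float:
    if raid == "RAID1":
        return vmdk_gb * (ftt + 1)      # one full copy per replica
    if raid == "RAID5" and ftt == 1:
        return vmdk_gb * 4 / 3          # 3 data + 1 parity stripe
    if raid == "RAID6" and ftt == 2:
        return vmdk_gb * 1.5            # 4 data + 2 parity stripe
    raise ValueError("unsupported policy combination")

print(raw_consumption_gb(150.0, "RAID1"))  # 300.0 -> matches the example above
```

This is also why moving the same vmdk to a RAID5 policy (on clusters that support it) would drop its footprint to roughly 200GB.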
@ANANDA_M, note that that health check being triggered (https://kb.vmware.com/s/article/2108743) isn't an "error" - it is an informational alert informing you what the cluster's storage utilisation and data-repair state would be if a node were to fail and be unfixable for a prolonged period.   Note that any VMs that are unregistered from vSphere inventory but not deleted still consume space on vsanDatastore - you should start by checking whether there are any/many large VMs that are not registered in inventory, are no longer needed, and can be removed. Another low-hanging fruit for freeing space can be unconsolidated snapshots which have grown large from being left on running VMs for a prolonged period; these should be consolidated ('esxcli vsan debug object list --all' output is useful for identifying them).   Other than these, you should look at what unassociated objects you have in this cluster, but obviously only remove any that you are 100% sure are no longer needed (https://kb.vmware.com/s/article/70726)
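The "what if a node fails" scenario that health check models can be approximated with rough arithmetic. A sketch under stated assumptions (equal-sized hosts, all data needing to fit on the survivors for a full re-protect; the function and the example numbers are hypothetical, not taken from any real cluster):

```python
# Rough what-if sketch of the condition the health check warns about:
# after losing one host, can the surviving hosts hold all consumed data
# (including replicas) so vSAN can fully re-protect? Assumptions:
# equal-sized hosts, no per-host slack reserved beyond raw capacity.
def can_rebuild_after_host_loss(hosts: int, host_capacity_tb: float,
                                used_tb: float) -> bool:
    remaining = (hosts - 1) * host_capacity_tb
    return used_tb <= remaining  # all data must fit on the survivors

# Hypothetical 4-host cluster, 10 TB per host:
print(can_rebuild_after_host_loss(4, 10, 27))  # True  (27 TB fits in 30 TB)
print(can_rebuild_after_host_loss(4, 10, 32))  # False (32 TB exceeds 30 TB)
```

In practice you would also want headroom for slack/rebuild staging space, so real guidance is more conservative than this inequality.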
@MJMVCIX, Those are referencing different ReadyNode performance profile builds e.g. AF-8, AF-6 and AF-4, so yes these are per host.
@miladmeh8 If you only tried via the vSphere UI Disk Management page, then it might be disallowing the remove due to being unable to run the remove precheck on it. If that is the case, I would just remove the remaining CMMDS reference to it via the CLI - get the UUID first (all in-use disks will show an naa/t10/eui reference, but the stale entry will show just a UUID in 8-4-4-4-12 digit format):

# vdq -Hi
# esxcli vsan storage remove -u UUIDhere
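Picking the stale entry out of that output comes down to spotting the one device name that is a bare UUID rather than an naa/t10/eui identifier. A minimal sketch of that filter - note the sample lines below are hypothetical illustrations, not captured from a real host:

```python
import re

# A bare vSAN disk UUID has the 8-4-4-4-12 hex form; in-use disks instead
# show naa./t10./eui. device names. This filter keeps only the bare UUIDs.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def stale_disk_uuids(device_names):
    return [name for name in device_names if UUID_RE.match(name)]

# Hypothetical device names, for illustration only:
sample = [
    "naa.55cd2e404c531234",
    "t10.NVMe____Samsung_SSD",
    "52a1b2c3-d4e5-f607-1829-3a4b5c6d7e8f",   # bare UUID -> stale entry
]
print(stale_disk_uuids(sample))  # ['52a1b2c3-d4e5-f607-1829-3a4b5c6d7e8f']
```

The matched UUID is what you would pass to 'esxcli vsan storage remove -u'.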