I'm having a rather strange component utilisation issue that I can't quite understand.
My component usage is over 70%, yet I have only 660 VMs (spread across various vApps / catalogs).
I can see the chain length is high on some VMs, but does that really impact the component count?
I would expect to be able to run thousands of VMs on this cluster (8 nodes at 9,000 components per host = 72,000 components available in total).
I use FTT=1, the default vSAN policy; that being said, some of my VMs look very strange.
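For context, here is the back-of-the-envelope maths I'm working from (plain Python; everything except the 9,000-components-per-host limit is an illustrative assumption). The key point is that at FTT=1 (RAID-1) every object, including each snapshot/linked-clone delta, is mirrored as two replicas plus a witness, so a long chain multiplies the component count per VM:

```python
# Rough vSAN component estimate. Illustrative assumptions throughout,
# apart from the documented 9,000-components-per-host limit.
HOSTS = 8
PER_HOST_LIMIT = 9000
cluster_budget = HOSTS * PER_HOST_LIMIT        # 72,000

COMPONENTS_PER_OBJECT = 3                      # 2 mirror replicas + 1 witness at FTT=1

def components_per_vm(disks=1, chain_length=1):
    """Namespace + swap + one delta object per link in each disk chain."""
    objects = 1 + 1 + disks * chain_length
    return objects * COMPONENTS_PER_OBJECT

flat = components_per_vm(disks=1, chain_length=1)    # 9
deep = components_per_vm(disks=1, chain_length=25)   # 81

print(f"budget: {cluster_budget}")
print(f"consolidated VM: {flat} components -> fits {cluster_budget // flat} VMs")
print(f"chain of 25:     {deep} components -> fits {cluster_budget // deep} VMs")
```

(Striping, memory snapshots and the splitting of objects over 255 GB would push these numbers higher still, so treat them as a floor.)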
Well, I ended up rebuilding all disk groups, as well as enabling dedupe & compression, which took a LONG time, as expected.
Rebuilding the disk groups reduced component usage considerably, but there is still a mountain of junk to clean out of vSAN.
I'm not sure what's at fault here: VCD not cleaning up after itself, vSAN not cleaning up after itself, poor catalogue design, or a combination of all three.
Chain length on VMs was also high, so I'm consolidating all of those VMs on a vApp-by-vApp basis, which is quite a painful process (see the sketch below for how I'm finding the VMs that need it)! I'm kinda puzzled how component usage got so completely out of hand here.
8 nodes, under 700 VMs, and nearly 58,000 components at one stage? What a mess.
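For what it's worth, this is roughly how I'm finding the VMs that still need attention, a pyVmomi sketch (hostname and credentials are placeholders). Note this only catches vSphere-side leftover deltas flagged by `consolidationNeeded`; the VCD-level linked-clone consolidation still has to be done per vApp through vCloud itself:

```python
# Find and consolidate VMs with orphaned snapshot deltas, one at a time.
# Placeholders for host/user/pwd -- test against a small vApp first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # consolidationNeeded is set when delta files were left behind
        # after a snapshot delete failed to merge them
        if vm.runtime.consolidationNeeded:
            print(f"consolidating {vm.name} ...")
            WaitForTask(vm.ConsolidateVMDisks_Task())
    view.Destroy()
finally:
    Disconnect(si)
```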
Hi there, yes, there certainly were, and currently are, issues. What is unclear to me is how so many linked clones are left behind and not cleaned up once a vApp is deleted via vCloud. I've exported a complete list of all objects / VMDKs, and I can see thousands of old objects that are certainly no longer in use; I will have to clean all of those up manually.
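My triage so far is just a diff of that exported object list against the VMDK paths that registered VMs actually reference. A minimal sketch, assuming both inputs are plain text files with one datastore path per line (the filenames are made up):

```python
# Hypothetical cleanup triage: anything in the vSAN export that no
# registered VM references is a candidate orphan. Verify by hand before
# deleting anything.
in_use = set()
with open("vmdks_in_use.txt") as f:          # e.g. dumped from vm.layoutEx
    for line in f:
        in_use.add(line.strip())

orphans = []
with open("vsan_objects_export.txt") as f:   # the exported object list
    for line in f:
        path = line.strip()
        if path and path not in in_use:
            orphans.append(path)

print(f"{len(orphans)} objects not referenced by any registered VM")
for p in orphans[:20]:
    print("  ", p)
```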
My catalog design from day 1 was likely not ideal; I believe all VMs inside a vApp should be consolidated **before** the vApp is added to the catalog; otherwise, chain length can spiral out of control.
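If anyone wants to audit this on their own catalogs, the per-disk backing chain is visible in `layoutEx.disk[].chain` via pyVmomi. A rough sketch (placeholder credentials again) that lists the 20 worst offenders:

```python
# List the VMs with the deepest disk chains, worst first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

def max_chain_length(vm):
    """Longest backing chain across the VM's virtual disks."""
    if vm.layoutEx is None:
        return 0
    return max((len(d.chain) for d in vm.layoutEx.disk), default=0)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    worst = sorted(view.view, key=max_chain_length, reverse=True)[:20]
    for vm in worst:
        print(f"{max_chain_length(vm):3d}  {vm.name}")
    view.Destroy()
finally:
    Disconnect(si)
```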
OK, this is getting a bit strange now.
I have another cluster that is now exhibiting high component usage.
Chain length is low across the VMs in its vApps / catalog.
(There is a resync in progress, which can temporarily inflate the count while replacement components are being built.)
...only 821 VMs.
Component usage is extremely high.
I am beginning to think vSAN does not clean up after itself when vApps are deleted.
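To test that theory, my plan is to estimate how many components the registered VMs alone should account for at FTT=1 and compare it with what the vSAN UI reports; a big gap would point at orphaned objects. A sketch along the same lines as my earlier snippets, ignoring swap objects, striping and >255 GB splits, so the estimate is a floor:

```python
# Expected-vs-observed component sanity check. Placeholder credentials;
# compare the printed estimate with the component count the vSAN UI shows.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vms = list(view.view)
    view.Destroy()

    expected = 0
    for vm in vms:
        disks = vm.layoutEx.disk if vm.layoutEx else []
        objects = 1 + sum(len(d.chain) for d in disks)  # namespace + disk chains
        expected += objects * 3                         # 2 replicas + 1 witness at FTT=1
    print(f"~{expected} components expected from {len(vms)} registered VMs")
finally:
    Disconnect(si)
```

If the UI shows tens of thousands more components than this estimate, the remainder has to be living in objects no registered VM owns.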