LordofVxRail
Enthusiast

High vSAN component usage

Hello All,

 

I'm having a rather strange component utilisation issue that I can't quite understand.

My component utilisation is over 70%, yet I have only 660 VMs (spread across various vApps / catalogs).

I can see that the snapshot chain length is high on some VMs, but does that really drive up the component count?

I would expect to be able to run thousands of VMs on my cluster (8 nodes, 72,000 components available in total).

I use FTT=1, the default vSAN policy. That being said, some of my VMs look very strange:

 

(screenshot attached: LordofVxRail_0-1620300747160.png)

 

any thoughts? 
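For anyone wondering how snapshot chains could do this, the back-of-envelope arithmetic can be sketched as below. This is only a rough model under stated assumptions: FTT=1 with RAID-1 mirroring costs roughly 3 components per object (two mirrors plus a witness), each VM is assumed to have a home namespace object, a swap object, and one object per disk in its snapshot chain, and striping of large objects into extra components is ignored.

```python
# Back-of-envelope vSAN component estimate -- a sketch, not an official formula.
# Assumption: FTT=1 / RAID-1, so each object = 2 mirror components + 1 witness.
COMPONENTS_PER_OBJECT = 3

def components_per_vm(disks: int, chain_length: int) -> int:
    # namespace object + swap object + one object per disk in the chain
    objects = 2 + disks * chain_length
    return objects * COMPONENTS_PER_OBJECT

# 660 single-disk VMs with no snapshot chain: 660 * 9 = 5,940 components
print(660 * components_per_vm(disks=1, chain_length=1))

# The same VMs with a chain 20 deep: 660 * 66 = 43,560 components,
# i.e. over 60% of an 8-node / 72,000-component budget
print(660 * components_per_vm(disks=1, chain_length=20))
```

Under those assumptions, chain length dominates the count: the same 660 VMs can swing from under 10% to well over half the cluster's component budget.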

 

5 Replies
LordofVxRail
Enthusiast

Well, I ended up rebuilding all disk groups (DGs), as well as enabling dedupe & compression, which took a LONG time, as expected.

Rebuilding the DGs reduced component usage considerably, but there is still a mountain of junk to clean out of vSAN.

I'm not sure what's at fault here: VCD not cleaning up after itself, vSAN not cleaning up after itself, poor catalogue design, or a combination of all three.

Chain length on the VMs was also high, so I'm consolidating all of those VMs on a vApp-by-vApp basis, which is quite a painful process! I'm still puzzled how the component count got so completely out of hand here.

8 nodes, under 700 VMs, and nearly 58,000 components at one stage? What a mess.
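Working backwards from those figures gives a feel for the damage. A rough sanity check, assuming roughly 3 components per object at FTT=1 (two mirrors plus a witness):

```python
# Reverse the arithmetic: how many objects per VM do 58,000 components imply?
components, vms = 58_000, 700

per_vm = components / vms       # ~83 components per VM
objects_per_vm = per_vm / 3     # ~28 objects per VM at 3 components each

print(round(per_vm), round(objects_per_vm))
```

Subtracting a namespace and swap object per VM still leaves on the order of 25 disk objects per VM, which is consistent with very long snapshot chains on single-disk linked clones.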

 

 

sunvmware1
Enthusiast

Hi,

Are there any issues or challenges you are facing?

 

 

 

LordofVxRail
Enthusiast

Hi there, yes, there certainly were, and currently are, issues. What is unclear to me is how so many linked clones were left behind and not cleaned up once their vApp was deleted via vCloud. I've exported a complete list of all objects / VMDKs, and I can see thousands of old objects that are certainly no longer in use; I will have to clean all of those up manually.
My catalog design from day 1 was likely not ideal; I believe all VMs inside a vApp should be consolidated **before** adding it to the catalog, otherwise chain length can spiral out of control.
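Reconciling that exported object list against what is actually in use can at least be scripted. A minimal sketch: the file paths and the `uuid` column name are hypothetical placeholders for whatever your export actually contains.

```python
import csv

def find_orphans(all_objects_csv: str, in_use_csv: str) -> set[str]:
    """Return object UUIDs present in the full vSAN export but not
    referenced by any in-use VM disk. Column name 'uuid' is a
    placeholder -- adjust to match the real export format."""
    with open(all_objects_csv, newline="") as f:
        all_uuids = {row["uuid"] for row in csv.DictReader(f)}
    with open(in_use_csv, newline="") as f:
        in_use = {row["uuid"] for row in csv.DictReader(f)}
    return all_uuids - in_use
```

Needless to say, spot-check a sample of the candidate orphans by hand before deleting anything.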

LordofVxRail
Enthusiast

OK, this is getting a bit strange now.

 

I have another cluster that is now exhibiting high component usage.

Chain length is low across the VMs in vApps / the catalog.

(there is a resync in progress)

(screenshot attached: LordofVxRail_0-1627302001122.png)

...only 821 VMs.

 

(screenshot attached: LordofVxRail_1-1627302080148.png)

component usage is extremely high. 

I am beginning to think vSAN does not clean up after itself when vApps are deleted.

LordofVxRail
Enthusiast

Consolidated the chain lengths and tidied up all vSAN policies, reducing component usage by over 25%.

 

...still a mountain of garbage to manually clean up. 
