The Deployment and Configuration Guide recommends 800GB of storage for the Analytics VM and 100GB for the UI VM for a small deployment.
These capacity figures seem extremely high.
Has anyone who has deployed this in production come anywhere near using this amount of storage?
We have an environment with 17 ESXi hosts and approximately 400 VMs, and I am trying to avoid wasting space where possible (isn't everyone?).
In a design document from VMware Professional Services I came across the following formula:
(#Metrics x 12 collections per hour x 4320 hours x 16 bytes)/(1024^3) = Storage (GB)
Based on the following assumptions:
Metric collection frequency (every 5 minutes)
Metric retention period (default of 6 months)
Metric storage requirement (16 bytes per metric)
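For anyone who wants to sanity-check the numbers, the formula above is easy to sketch in Python (the defaults below are the assumptions listed above; the function name is just for illustration):

```python
# Sketch of the Professional Services sizing formula:
# (#Metrics x collections/hour x retention hours x bytes/metric) / 1024^3

def metric_storage_gib(num_metrics: int,
                       collections_per_hour: int = 12,  # one collection every 5 minutes
                       retention_hours: int = 4320,     # ~6 months (180 days x 24 hours)
                       bytes_per_metric: int = 16) -> float:
    """Estimated metric storage in GiB (bytes divided by 1024^3)."""
    total_bytes = (num_metrics * collections_per_hour
                   * retention_hours * bytes_per_metric)
    return total_bytes / (1024 ** 3)

# Small deployment from the guide: 600,000 metrics
print(round(metric_storage_gib(600_000), 1))  # ~463.5 GiB (~498 GB if you divide by 10^9)
```

Note the result depends on whether you divide by 1024^3 (GiB) or 10^9 (vendor-style GB); either way it is hundreds of gigabytes, not tens.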
If I plug in 600,000 for #Metrics (the small deployment from the Deployment and Configuration Guide), I get a storage total of:
(600,000 x 12 x 4320 x 16)/(1024^3) = 28GB
Can anyone tell me if this looks correct, and can you recommend (based on experience) a good starting size for the VMs (assuming I can always increase disk space later on)?
When I run that calculation I get 497GB for the 1500-VM configuration. I have seen customers who used the small configuration with 1000 VMs run into disk storage issues, so I wouldn't recommend cutting it close.
For your config, the data storage for the metrics comes out to 120GB. But note that you're running about a 23:1 VM-to-host consolidation ratio, while the formula assumes a "typical" environment (i.e. a 10:1 ratio). Also keep in mind that a high number of datastores can contribute to larger data requirements.
My lab has been running for over 6 months now, and while it's not the best benchmark in the world, my calculated storage requirement for metrics comes out to around 2.5GB, while my actual storage consumption on the /data partition is 6.5GB.