Hi folks, hoping someone can answer this simple question, as I've gained some great recommendations from this community that have allowed me to really tune our vCOps environment.
Basically, I have a couple of VMs that have several TB of storage used. The other 100 or so VMs average about 10GB of storage usage. Storage is metered by demand, not allocation. These two VMs are massively skewing my average VM profile, so the average disk space usage comes out to 150GB. How can I exclude these two VMs from my average to make the capacity planning more realistic? Ideally, I would like to continue monitoring all other metrics (oversized/undersized, etc.) for these two large VMs.
Maybe you could create a new group in your environment and add those VMs to that group. After that, create a new policy, configure the settings you need, and attach that policy to the group.
I'm not sure this is possible. We're not talking about an intelligent capacity factor; this is just a straight average VM size. An average is simply an average: it will always be total disk space / # of VMs. What you could look at instead is the distribution of the population into small / medium / large classifications. Some views leverage this concept to address the different configs of the population. Unfortunately, this is different from the "average VM size" that is applied to many views. I'm thinking you want to adjust this to correct the "# of VMs remaining" views/reports, right? I would say that this, at least, is calculated based on your view: if you look at a part of your environment that excludes those large VMs, your average VM size will be what you want it to be. Average VM size aside, you still have the capacity remaining numbers and the what-if scenarios to forecast with instead of relying on "average VM size".
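To make the skew concrete, here is a minimal sketch of the arithmetic described above. The counts and sizes are assumptions chosen to match the scenario in the question (100 VMs at ~10GB plus two at roughly 7TB each, which reproduces the ~150GB average), not real data from the environment:

```python
# Assumed illustrative numbers: 100 small VMs at ~10 GB each,
# plus two large outlier VMs at ~7 TB (7168 GB) each.
small_vms = [10] * 100          # GB of demand per small VM
large_vms = [7168, 7168]        # the two multi-TB VMs

all_vms = small_vms + large_vms

# Straight average over everything: total disk space / # of VMs.
mean_all = sum(all_vms) / len(all_vms)

# Same average with the two outliers excluded from the population.
mean_excluded = sum(small_vms) / len(small_vms)

print(f"Average including outliers: {mean_all:.1f} GB")    # ~150 GB
print(f"Average excluding outliers: {mean_excluded:.1f} GB")  # 10.0 GB
```

This is why scoping the view to a group that excludes the two large VMs fixes the "average VM size" figure: the metric is a plain mean over whatever population the view covers, so changing the population is the only lever.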
Yes, it's the "VMs remaining" views that I'm trying to correct, as they aren't representative of our true capacity. Ideally, we want the big number we see on the dashboard for our world to be accurate; otherwise the dashboard is pretty worthless.