Hi all,
I noticed something strange: when you create new VMs in ESX 3.5 Update 1, their resources are set to Normal, with greyed-out values of CPU = 4000 shares (the same for 1, 2 or 4 vCPUs!) and memory = 655360 shares (which appears to be 10 times the maximum amount of memory).
Now in ESX 3.5 Update 2 it is VERY different:
1vCPU: shares=1000
2vCPU: shares=2000
4vCPU: shares=4000
and the memory shares are 10 times the configured amount of memory in MB.
I find this very confusing. The change in vCPU priority I understand; it used to work the same way in ESX 3.0. I figured the shares were being corrected behind the scenes in ESX 3.5, although with Update 2 it seems to have changed back.
Memory is even weirder: a machine with more memory configured gets more shares. That is kind of logical when you compare a VM with 64 MB to one with 64 GB: if both VMs need more memory, the one with 64 MB is probably happy with 10 MB, while the big one wants, say, 10 GB... But where is the official explanation? Even worse, is it now a best practice to set these memory options manually? And does that also apply to ESX 3.0 / ESX 3.5 Update 1, or not?
And also: are the shares now suddenly back to the original behavior, or was this a bug in Update 1? In other words, do I have to tell customers to set 2- and 4-vCPU VMs to twice or four times the shares on Update 1, or not? And what about upgrading from Update 1 to Update 2?
All very confusing. Anyone know more about this?
They have actually corrected a bug that has been in VI3 since 3.0; this behavior goes back to the way it worked in ESX 2.x. For the preset levels Low, Normal and High, CPU shares are set to 500, 1000 and 2000 per vCPU respectively. Similarly, memory shares are 5 x the configured MB for Low, 10 x for Normal and 20 x for High.
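For what it's worth, here is a minimal sketch of that rule in Python, just to make the arithmetic explicit; the constants come from the numbers in this thread, and the function name and structure are my own, not anything from VMware:

```python
# Sketch of the corrected default share calculation described above
# (ESX 2.x / ESX 3.5 Update 2 behavior). Constants taken from the
# numbers quoted in this thread; the function name is my own.

CPU_SHARES_PER_VCPU = {"low": 500, "normal": 1000, "high": 2000}
MEM_SHARES_PER_MB = {"low": 5, "normal": 10, "high": 20}

def default_shares(level, num_vcpus, mem_mb):
    """Return (cpu_shares, mem_shares) for a preset share level."""
    level = level.lower()
    cpu_shares = CPU_SHARES_PER_VCPU[level] * num_vcpus
    mem_shares = MEM_SHARES_PER_MB[level] * mem_mb
    return cpu_shares, mem_shares

# Example: a 2-vCPU VM with 4096 MB at Normal gets 2000 CPU shares
# and 40960 memory shares, matching what the original poster saw.
print(default_shares("normal", 2, 4096))  # -> (2000, 40960)
```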
Hi,
Thanks for your explanation... I noticed it DOES work that way... but what about pre-Update 2 installs? Would it then be best practice on all pre-Update 2 installs to set the resources to Custom and manually feed these shares (depending on the number of vCPUs and the amount of memory) to each VM?
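To make that concrete, here is a rough sketch of the values one would enter by hand, assuming the corrected Normal defaults from the previous reply are the target; the inventory and VM names are made up for illustration, and the actual change would still have to be applied per VM in the VI Client or via the VI API:

```python
# Hypothetical VM inventory: (name, vCPUs, memory in MB). These entries
# are made-up examples, not real VMs from this thread.
inventory = [
    ("vm-small", 1, 512),
    ("vm-web", 2, 4096),
    ("vm-db", 4, 16384),
]

# Share values to enter manually (Custom level) on a pre-Update 2 host
# so they match the corrected Normal defaults: 1000 CPU shares per vCPU
# and 10 memory shares per configured MB.
for name, vcpus, mem_mb in inventory:
    cpu_shares = 1000 * vcpus
    mem_shares = 10 * mem_mb
    print(f"{name}: CPU shares = {cpu_shares}, memory shares = {mem_shares}")
```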
