I need to create a sheet that collects specific characteristics, such as the maximum number of I/Os or the maximum number of vCPUs for a virtual machine before performance decreases. I know every system should be analyzed in detail, but I need a rough collection of values, because we have a huge number of systems that are probably ready to be virtualized.
Back to topic: for example, a native dual-core system usually has higher performance than a single-core system (if the applications use the second CPU). At the very least, you can generally say that a dual-core system is certainly not slower than a single-core system.
But in a virtual environment, the 2nd, 3rd, 4th, ... vCPU has to be simulated, which causes a certain loss of performance. So you can't generally say that a customer who wants a dual-core machine gets a faster machine compared with a single-core system. On the other hand, if the application can use two or more CPUs, it might be better to give the VM several vCPUs.
What do you think is the peak number of vCPUs, i.e. the point at which there is neither a loss nor a gain of performance? Background: there are machines with up to 8 physical CPUs ... would it be a good idea to convert them into eight-vCPU machines?
As mentioned, I need to create a list of the peak values for I/Os, vCPUs, memory, etc., along the lines of the example above.
Thank you in advance for your help.
Kind regards, technical_man
Here are some of the VM recommendations that I work with, maybe this will help you.
-Recommended limit of 16 ESX servers per VMFS volume, based on limitations of a VirtualCenter-managed ESX setup
-Recommended maximum of 32 IO-intensive VMs sharing a VMFS volume
-Up to 100 non-IO-intensive VMs can share a single VMFS volume with acceptable performance
-No more than 255 files per VMFS partition
-Up to 2 TB limit per physical extent of a VMFS volume
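As a quick sanity check, the limits above can be encoded in a small script. This is a minimal sketch: the limit values are the ones from the list above, and the example volume at the bottom is a made-up layout, not a real environment.

```python
# Sketch: validate a proposed VMFS volume against the rule-of-thumb
# limits listed above. The example layout is hypothetical.

VMFS_LIMITS = {
    "esx_hosts_per_volume": 16,   # VirtualCenter-managed ESX setup
    "io_intensive_vms": 32,       # IO-intensive VMs per volume
    "non_io_intensive_vms": 100,  # non-IO-intensive VMs per volume
    "files_per_partition": 255,
    "extent_size_tb": 2,          # per physical extent
}

def check_volume(esx_hosts, io_vms, non_io_vms, files, extent_tb):
    """Return a list of limit violations for one VMFS volume."""
    problems = []
    if esx_hosts > VMFS_LIMITS["esx_hosts_per_volume"]:
        problems.append("too many ESX hosts sharing the volume")
    if io_vms > VMFS_LIMITS["io_intensive_vms"]:
        problems.append("too many IO-intensive VMs")
    if non_io_vms > VMFS_LIMITS["non_io_intensive_vms"]:
        problems.append("too many non-IO-intensive VMs")
    if files > VMFS_LIMITS["files_per_partition"]:
        problems.append("too many files on the partition")
    if extent_tb > VMFS_LIMITS["extent_size_tb"]:
        problems.append("physical extent larger than 2 TB")
    return problems

# Hypothetical volume: 10 hosts and 40 IO-intensive VMs
print(check_volume(esx_hosts=10, io_vms=40, non_io_vms=0,
                   files=120, extent_tb=1))
# → ['too many IO-intensive VMs']
```

Handy when you are sizing many volumes at once, as in the original poster's situation.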
Another general rule is to allocate four virtual processors per physical processor. ESX has a hard-coded limit of 80 virtual processors per host. With larger systems, such as 8- or 16-way hosts, these limits should be considered during the design process.
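That sizing rule (four vCPUs per physical processor, capped by the 80-vCPU hard limit per host) boils down to a quick capacity calculation. A minimal sketch, with hypothetical host sizes:

```python
# Sketch: vCPU budget per host under the 4:1 vCPU-to-physical-processor
# rule and the 80-vCPU hard limit mentioned above.
# Host sizes in the loop are hypothetical examples.

VCPUS_PER_PCPU = 4     # general allocation rule from the post
HOST_VCPU_LIMIT = 80   # hard-coded ESX per-host limit

def vcpu_capacity(physical_cpus):
    """Usable vCPU budget for a host with the given physical CPU count."""
    return min(physical_cpus * VCPUS_PER_PCPU, HOST_VCPU_LIMIT)

for pcpus in (4, 8, 16, 32):
    print(f"{pcpus}-way host: up to {vcpu_capacity(pcpus)} vCPUs")
```

Note that a 32-way host already hits the 80-vCPU hard limit before the 4:1 rule does, which is why the limit matters mainly for the larger hosts.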
thanks a lot!
Your general rule about the number of virtual processors per physical CPU is new to me. So you think it doesn't matter whether you configure four single-vCPU VMs on a physical CPU or build just one VM with four vCPUs?
I create a dedicated VMFS volume for each new virtual machine, so I don't have to worry about several VMs accessing a single VMFS volume simultaneously. I'm mainly interested in overall SAN performance.
Kind regards, the technical man.
Here is a decent doc on some configuration maximums for VMs, ESX hosts and VC: http://www.vmware.com/pdf/vi3_301_201_config_max.pdf
I'm not sure where the 4 VMs per physical processor figure comes from, but the general rule is more like 4-6 per core. In our dev area we are actually up around 8 per core; it all depends on load.
I would not create a VMFS volume per VM. As with memory and CPU, you create a bigger pool and share it between your hosts. There are not many good reasons for dedicated VMFS stores.