VMware Cloud Community
itcomms
Contributor

How Many VMs

Hi,

We are running Dell PowerEdge R410 servers with dual hex-core Xeon L5640 CPUs, so VMware shows a total capacity of 12 x 2.26 GHz.

Each CPU has 6 cores with Hyper-Threading enabled, so VMware reports 24 logical processors.

The storage is RAID 10 (4 x 2TB disks), and the host has 32GB of RAM.

If we allocated only 1 CPU core, 2GB RAM and 100GB of disk space per VM, what would be the maximum number of VMs before performance degrades?

8 Replies
itcomms
Contributor

Sorry, I meant to say:

If we had a policy of 1 vCPU, 2GB RAM and a 100GB disk per VM, what is the maximum number of VMs the server can support before performance degrades?

a_p_
Leadership

Hard to say without knowing the guest operating systems' requirements regarding CPU, memory, disk I/O and throughput. With the 2TB disks you mentioned - which I assume are SATA disks - I'd guess that the limiting factor will be disk throughput. Disk capacity is easy to calculate (keep in mind that there's some overhead), and you can run up to about 30-35 VMs. For memory, subtract ~4GB for the ESXi host, so without overcommitting memory you can run at least 14 VMs. The CPU performance will most likely be sufficient for the number of VMs you can run within the other limits I mentioned.

André

DavidPasek
Enthusiast

I agree with André.

Disk I/O will probably be the most limiting factor. Since we don't know your expected workload, we can only estimate.

You have 4 SATA disks in RAID 10, so roughly 320 IOPS available for all VMs. That figure assumes a 100% read workload; each front-end write consumes two back-end I/Os because of the RAID 10 write penalty. If a typical VM averages 30 IOPS, you can host around 10 VMs.
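The arithmetic can be sketched quickly. This is a rough model only; the ~80 IOPS per 7.2K RPM SATA spindle, the read/write mix, and the 30 IOPS per VM are assumptions, not measurements:

```python
def effective_iops(spindles, iops_per_disk, write_fraction, write_penalty=2):
    """Front-end IOPS a RAID 10 set can sustain for a given read/write mix.

    Each front-end write costs `write_penalty` back-end I/Os on RAID 10.
    """
    raw = spindles * iops_per_disk
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

read_only = effective_iops(4, 80, write_fraction=0.0)   # 320 IOPS, best case
mixed = effective_iops(4, 80, write_fraction=0.5)       # ~213 IOPS at a 50/50 mix
vms_at_30_iops = int(read_only // 30)                   # ~10 VMs
```

Note how even a 50/50 read/write mix cuts the usable IOPS by a third, so the 10-VM figure is optimistic for write-heavy guests.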

You have 32GB of RAM, which is another limiting factor. As André already mentioned, subtract ~4GB consumed by the hypervisor kernel and other processes. At 2GB per VM that means a maximum of 14 VMs, though it can be more if the VMs don't need the full 2GB and ballooning is leveraged.

You have 12 CPU cores / 24 CPU threads. Again, it depends on the real workload demand. A typical rule of thumb is 3 vCPUs per CPU thread, which allows up to 72 VMs with 1 vCPU each. But watch metrics like CPU ready (RDY) and co-stop (CSTP), which indicate CPU contention.
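Putting the three limits together in a minimal sketch, using this thread's numbers (320 IOPS array at 30 IOPS per VM, 32GB RAM minus ~4GB for ESXi at 2GB per VM, 24 threads at the 3:1 vCPU rule of thumb; all estimates, not guarantees):

```python
disk_limit = 320 // 30        # ~10 VMs before the RAID 10 set saturates
ram_limit = (32 - 4) // 2     # 14 VMs at 2GB each after ESXi overhead
cpu_limit = 24 * 3            # 72 single-vCPU VMs by the 3:1 rule of thumb
max_vms = min(disk_limit, ram_limit, cpu_limit)  # the host is disk-bound: 10 VMs
```

The host capacity is the minimum of the three, which is why disk I/O is called out as the limiting factor here.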

Hope this helps.

-- The devil is in the detail.
itcomms
Contributor

Hi,

The VMs will be running either Windows 7 or Windows 8 Pro with a VoIP application.

So if we budget for 10-12 VMs per host, we should be OK? I'm installing vCenter to hopefully get a better overview of the usage.

Additional memory for the host, say 64GB, is not an issue.

Attached is a live system with 6 VMs.

DavidPasek
Enthusiast

I don't think anybody on this forum can give you absolutely precise hardware sizing for your production infrastructure.

That's your business 🙂

VoIP applications are usually very sensitive to latency, but it really depends on the particular VoIP application and the expected load.

When I worked for Cisco Advanced Services, we had strict infrastructure requirements for running Unified Communications software components on top of virtual infrastructure.

Sorry to give you such a general answer.

-- The devil is in the detail.
weinstein5
Immortal

As others have indicated, no one can give you a precise answer, but in rough terms, with lightly utilized VMs you can run 8-10 vCPUs per core/hyper-thread - so 10-12 VMs per host will definitely work.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
itcomms
Contributor

Following the comments here, we have now upgraded the hosts to 64GB of RAM and set up HP SATA SAN storage connected via Fibre Channel over Ethernet. Hopefully this will improve disk I/O.

We're just waiting for the cables to be connected to the servers. In the meantime, running the VMware I/O Analyzer against our standard RAID 10 over a 120-second test gave a value of 58185.13 - about 581.72 disk I/Os per second. I'll post the results once we're on the SAN.

With 64GB of RAM, increasing the RAM per VM to 3GB and budgeting a maximum of 40 I/Os per second per VM, that means a maximum of 14 VMs - or 20 if the SAN can deliver more than 581 I/Os per second?
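Redoing the earlier estimate with the upgraded numbers (a sketch; it assumes ~4GB of ESXi overhead and treats the measured ~581 IOPS as the array's ceiling):

```python
ram_limit = (64 - 4) // 3             # 20 VMs at 3GB each after ~4GB ESXi overhead
disk_limit = 581 // 40                # ~14 VMs at a 40 IOPS per-VM budget
max_vms = min(ram_limit, disk_limit)  # 14 today; RAM allows 20 if the SAN delivers more
```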

AlexanderGorshu
Contributor

What type of disk will you use, thin or thick?

I think that for a production environment you should use thick-provisioned disks for your VMs.

RAID 10 with 4 x 2TB disks gives you 4TB of usable space (half of the 8TB raw capacity).

If you give each VM 3GB of RAM, each VM will also create a 3GB swap file (assuming no memory reservation).

Will you use snapshots? They need storage space too.

For example, how long you keep snapshots influences the VMFS space needed per VM:

Short retention: add ~10% of the VM's disk space extra.

Medium retention: add ~30% of the VM's disk space extra.

Long retention: add up to 200% of the VM's disk space extra.
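Those overheads can be folded into a per-VM storage estimate. A sketch, assuming thick-provisioned 100GB disks, no memory reservation (so the swap file equals the configured RAM), and the retention percentages above:

```python
def storage_per_vm_gb(disk_gb, ram_gb, snapshot_pct):
    """Thick disk + swap file (= configured RAM with no reservation)
    + snapshot headroom as a percentage of the virtual disk."""
    return disk_gb + ram_gb + disk_gb * snapshot_pct / 100

short_term = storage_per_vm_gb(100, 3, 10)    # 113.0 GB per VM
medium_term = storage_per_vm_gb(100, 3, 30)   # 133.0 GB per VM
long_term = storage_per_vm_gb(100, 3, 200)    # 303.0 GB per VM
```

Dividing the usable datastore capacity by these figures gives a quick upper bound on how many VMs fit per retention policy.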

If I misunderstood your question ... sorry!
