VMware Cloud Community
ANorton
Enthusiast

Hardware to VM matrix

Does anyone know if there is a document that can give you a rough idea of how many VMs you can fit on certain hardware configurations?

I am looking at purchasing new servers for hosts, and with all the new multi-core processors and socket counts per server, I would be interested in people's thoughts on what the best hardware configurations are these days.

We run SQL Server and Exchange as VMs, just to give you an idea, but some servers need little processing power, such as miscellaneous application servers, etc.

Thanks

Aaron

eeg3
Commander

Have you taken a look at VMware Capacity Planner?

Blog: http://blog.eeg3.net
golddiggie
Champion

It all depends on what the servers will be doing: how large your Exchange environment is, how heavily the SQL server is being hit, how many of each type you have, how many servers are hit hard, and how old the current server hardware is. If you're running on four-year-old hardware (or older) and it's doing the job fairly well, then you'll be able to get a decent density on the vSphere host servers.

I prefer the Dell R710 as a good baseline server. If you can afford the R810 or R910, you'll get a much higher VM-to-physical-server density.

If you don't already have good shared storage (a SAN), you'll want to get that too. I've had great results with the EqualLogic iSCSI SANs; you can pick models with high-speed SAS, SSD, or even a combination of both.

Depending on how many total servers you have to convert, you could be looking at placing them all onto two or three new host servers. I would recommend three even if two would more than do the job; that gives you HA and protects you against overtaxing a host when one goes down (even for maintenance). I've had 50+ VMs running on three host servers without the hosts even noticing.
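The two-vs-three-host reasoning can be sketched with some quick arithmetic. The load figures below are made up for illustration, not taken from the thread:

```python
# Hypothetical illustration of why three hosts beat two for HA headroom.
def utilization_after_failure(hosts, load_per_host_pct):
    """Per-host load after one host fails and its VMs restart on the survivors."""
    total_load = hosts * load_per_host_pct
    return total_load / (hosts - 1)

# Two hosts at 60% each: losing one pushes the survivor to 120% (overcommitted).
print(utilization_after_failure(2, 60))   # 120.0
# Three hosts at 60% each: losing one leaves each survivor at 90% (tight but OK).
print(utilization_after_failure(3, 60))   # 90.0
```

With two hosts, any failure or maintenance window leaves a single host carrying everything, which is exactly the overtaxing scenario described above.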

One thing you'll notice pretty early is that you'll start to run short on system memory (on the hosts) long before you run low on processor power. For example, in a three-host R710 cluster (48GB RAM each, running dual E5620 Xeons) we were at 70% memory used while processor usage typically sat in the 3-5% range. Storage on the SAN wasn't an issue, since we had a 16TB array (10.4TB usable under RAID 50) for LUNs.
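A back-of-envelope check using the cluster figures above (three R710s, 48GB each, ~70% memory used, ~50 VMs) shows why memory is the constraint:

```python
# Capacity arithmetic from the example cluster above (figures from the post).
hosts = 3
ram_per_host_gb = 48          # R710 with 48GB each
mem_used_pct = 0.70           # observed memory utilization
vms = 50                      # roughly 50 VMs on the cluster

total_ram_gb = hosts * ram_per_host_gb
used_ram_gb = total_ram_gb * mem_used_pct
free_ram_gb = total_ram_gb - used_ram_gb

print(f"Total RAM: {total_ram_gb} GB, used: {used_ram_gb:.1f} GB, "
      f"free: {free_ram_gb:.1f} GB")
print(f"Average RAM footprint per VM: {used_ram_gb / vms:.2f} GB")
# Memory sits at 70% while CPU sits at 3-5%: RAM runs out first.
```

At roughly 2GB of RAM per VM on average, every additional handful of VMs eats visibly into the remaining headroom, while the CPUs barely notice.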

Make sure the network segment is also properly configured for the environment. Make sure you have enough NICs to do the job (with iSCSI, I wouldn't go with fewer than 8 network ports per host). I would also go with dual switches to ensure there's no single point of failure (you can use interconnects between the switches to keep traffic speeds high).

So it really does depend on what hosts, storage, and networking you get before you can have any idea of how many VMs you'll be able to fit into the environment. You could easily see 15 VMs per host, if not more, but maybe less.

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

ANorton
Enthusiast

Thanks for the answers.

I currently have some R710 servers with 48GB of memory and dual X5570 processors, which you can't get anymore. Basically I am trying to figure out whether it is worth going to the R810 with the faster processors and more cores.

Has anyone noticed a performance hit after passing a certain memory point? It has been a little while, but I think the rule of thumb was 4-6GB per core?
