Why does VMware still have a 2-socket minimum when a single-socket quad core system more than meets the technical requirements?
I have two scenarios in my environment that are challenging me.
Scenario 1 - Branch Office:
I have a branch office with 4 VMs on local disk. The actual server requirements are light, and a single-socket, quad-core CPU is more than enough to meet the demand. I could save money by going with a single CPU, but VMware only supports multi-socket installs. Why? I can understand a two-core minimum, but not a two-socket minimum.
Scenario 2 - Data Center:
I currently have three dual-socket, dual-core systems, each with 8GB of RAM. CPU capacity is sufficient, but memory is limiting what I can do with virtualization. I have some Cognos BI servers that I would love to virtualize, but they need 4GB of RAM each. Based on VMware's current support requirements, it would be very expensive to virtualize them.
In an ideal world I would buy single-socket, quad-core systems with 16GB of RAM. I could go from 3 servers to 6 and from 24GB to 96GB of RAM with no change in the number of CPU sockets.
A quick check at Dell shows that I can buy a single-socket, quad-core system with 16GB of RAM for $5,700, whereas a dual-socket, quad-core system with 32GB of RAM would cost $29,500. In this case the cost benefit of virtualization disappears, and I am back to managing physical hardware again.
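To make the gap concrete, here is a quick back-of-the-envelope comparison using the Dell prices quoted above (illustrative figures from this post only, not a pricing reference):

```python
# Cost-per-GB comparison for the two quoted Dell configurations.
configs = {
    "single-socket, quad-core, 16GB": {"price": 5700, "ram_gb": 16},
    "dual-socket, quad-core, 32GB": {"price": 29500, "ram_gb": 32},
}

for name, cfg in configs.items():
    per_gb = cfg["price"] / cfg["ram_gb"]
    print(f"{name}: ${per_gb:.2f} per GB of RAM")

# Two single-socket boxes give the same 32GB of RAM for $11,400 --
# well under half the price of the one dual-socket system.
print(f"2x single-socket total: ${2 * 5700}")
```

By this rough measure the single-socket box costs about $356 per GB of RAM versus roughly $922 per GB for the dual-socket system, which is the whole economic argument for allowing single-socket hosts.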
In such a competitive virtualization marketplace I hope that VMware maintains its technical lead, but more importantly that it stays flexible with its licensing and support models. Competition is coming fast and furious, and I don't need another reason to be forced to shop around.
Will VMware update its support model to address the changing marketplace?
Thanks,
Glen.
Hi, I got the following response from VMware:

"You may run a single CPU, but you will still need one 2-CPU license for that system, since you must have one license per server. It would be better to have 2 dual-cores or 2 quad-cores, since we count physical sockets, not cores, and a single ESX license would license you in either situation."
This is a killer
I've had the same question for our scenario but wasn't sure how clued up our reseller was.
We're going for HP c-Class blades for various tasks:
2 full-height blades for ESX, each with 2 x quad-core CPUs and 24GB RAM (chosen for the quad NICs). Possibly a third later for FT. Realistically we're only going to be running 30-40 VMs; current physical usage is <5% CPU and <0.5Mb/s network during working hours. Memory usage is also minimal, with no bad peaks of utilisation.
6 half-height blades: 4 for Citrix PS4, 1 for Exchange 2003 (possibly 2007 later), and one spare/management blade.
Each half-height blade will be standardised as 1 x quad-core with 8GB RAM (I know we won't utilise all the memory if we go 32-bit Citrix, but a) we've not decided that yet, and b) we feel it's more manageable to have just two server specs).
My thinking was that we could potentially hold some "extra" licenses to bring up the half-height blades as "emergency" ESX hosts if necessary, so I was hoping we could divide CPU license pairs across single-socket servers, but this proves definitively that we can't.
Our site is going a bit ass-about-face with the "planning", though - they've forced us to spec all the servers before trialling/piloting anything on the new hardware. Now that I'm reading into it, I think we'd probably have been better off ESX'ing Citrix too - we currently have 200 users, with a max of 400 if we got shot of ALL the "fat" PCs. I reckon two of these blades would eat that for dinner, perhaps a third at a push for fault tolerance.
As an aside, the only reason we've gone for a physical blade for Exchange is MS's policy of not supporting virtualised mailbox servers. Again, our usage is minimal - currently a 15GB mailstore, with messages/sec in single figures even at "peak" times. I notice many people seem to be virtualising it, though...
Regards,
Paul
