vSphere 5 Licensing

So it's finally here. vSphere 5 has safely landed, and all the rumors have been true regarding Storage DRS, OS X support, etc. However, one thing that is a curve ball for me is the modified licensing model. It's interesting how VMware is learning and evolving along with everyone else. You no longer have the 6- or 12-core limitations per CPU. However, vRAM is introduced, and this is an interesting move in my opinion.

Comments

In the past we've never looked very deeply into VMware's competitors, but thanks to that licensing we've started to build one lab for Xen and one for Windows. I still hope VMware will think about keeping the vSphere 4 type of licensing. We have a 24-socket infrastructure with 128 GB hosts, and for nearly €50K you get lots of additional services from VMware's competition.

Please read my latest post regarding the licensing here. I hope a model such as the one I have discussed will probably be better for both VMware and the customers:

http://www.cloud-buddy.com/?p=413

I think what VMware is doing is a logical conclusion to the ever-increasing number of cores per CPU, especially the new 12-core AMD CPUs. A Dell R815 can have 48 cores with just four VMware licenses under vSphere 4. It's a no-brainer that VMware would at some point change their licensing model.

The only way I see the new model working for them long term is to lower the cost per processor license a little, to get everyone on side.

A four-socket, 12-core Dell R815 can have 512 GB RAM (and 48 cores!), which would mean that to fully utilise it with VMs and memory I would need 16 Enterprise licenses or 11 Enterprise Plus licenses. That's a HUGE increase in costs. The Enterprise license list price is a touch under US$3,000/license. So, worst case, I'd need to buy 12 more licenses = $36,000! Ouch.

You can pool excess vRAM across the whole vCenter server, which will cut down on the number of licenses required overall, but it's still a huge whack for a lot of people, assuming they're maxing out their entire environment (not a wise thing to do, by the way).
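To make the license arithmetic above concrete, here's a minimal sketch, assuming vSphere 5's launch entitlements (32 GB vRAM per Enterprise CPU license, 48 GB per Enterprise Plus) and one license required per socket:

```python
import math

# Assumed vRAM entitlement per CPU license at vSphere 5 launch (GB)
ENTITLEMENT_GB = {"Enterprise": 32, "Enterprise Plus": 48}

def licenses_needed(host_ram_gb, edition, sockets):
    """Licenses to cover one host: at least one per socket, plus
    enough entitlements to cover the host's physical RAM with vRAM."""
    to_cover_ram = math.ceil(host_ram_gb / ENTITLEMENT_GB[edition])
    return max(sockets, to_cover_ram)

# The four-socket, 512 GB Dell R815 from the example above:
print(licenses_needed(512, "Enterprise", 4))       # 16
print(licenses_needed(512, "Enterprise Plus", 4))  # 11
```

Under vSphere 4, the same box needed only its 4 per-socket licenses, which is where the 12-license delta in the example comes from.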

OK, so that's tabloid journalism... now for a real case with Enterprise licensing (probably the most common).

34 host CPUs × 32 GB vRAM = 1,088 GB vRAM.

Current total allocation of vRAM (RAM assigned to each VM) = 564 GB across 250 VMs ≈ 2.3 GB/VM.

So I'm actually at about 50% of my total vRAM quota without needing to purchase any additional licenses. 🙂
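That quota check boils down to a couple of lines; a sketch of the same sum, assuming the 32 GB Enterprise entitlement per licensed CPU:

```python
# Pooled capacity: 34 licensed CPUs x 32 GB Enterprise entitlement
pool_gb = 34 * 32                 # 1088 GB vRAM pool
allocated_gb = 564                # RAM configured across the 250 VMs
usage_pct = allocated_gb / pool_gb * 100
print(f"{pool_gb} GB pool, {usage_pct:.0f}% used")  # 1088 GB pool, 52% used
```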

What are other people's real world figures?

In one of my recent blogs, I discussed the same issue that you have raised: will we put VMware out of business if we continue to multiply core counts, divide the number of hosts, and stuff 'em up with memory? You are right, the push-back is from folks who have oversubscribed. Also, in your case, if you had a total of 1280 GB of pRAM and the licenses only allowed you to use 1088 GB, you might not be a happy camper. I think if VMware revises their vRAM entitlements, this model may not face as much criticism as it has in the past few days. VMware needs to shift to a model such as this, and this needs to be understood. If other tools are not doing that today, don't be in a rush to switch platforms, because it's only a matter of time before they start introducing such licensing as well. So stick with the superior product, which is certainly VMware.

My numbers are very similar.  After you make allowances for HA and DRS so you don't overrun your memory in the event of a failure, there seems to be plenty of vRAM available.  In my case, my production vCenter has 16 hosts and Enterprise Plus licensing for 36 sockets.  That gives me 1,728 GB vRAM, and I have 576 GB allocated to 200 guests.  That's an average of 2.88 GB per guest.  My primary cluster is 8 hosts, 16 sockets, 96 GB per host, with an average of 64% memory utilization per host.

Admittedly, 6 of my other hosts are way underutilized, but I'd have to stuff a bunch of guests on them to triple my utilization and get close to my vRAM limits.

We are running at about 25% of the vRAM capacity of our licensing.  Granted, we just brought another cluster online, but even fully utilized it will only put us at about 40%.

It's not a sticking point for us.

From the contributions above I have a question: how is the total RAM you can allocate calculated? To take a real-world example: we have 24 sockets with Enterprise licenses, managed by one vCenter but split into 3 clusters (10+4+10). Ten of these sockets are for DR. Do I calculate 24 × 32 GB, or 10 × 32 GB in cluster 1 + 4 × 32 GB in cluster 2 + 10 × 32 GB in cluster 3?

Because this would make a difference from my point of view. Not a big difference, because 32 GB per socket would mean raising not only license costs but also power & cooling.

From what I understand, licensing pools across vCenters if the vCenters are in linked mode.  The calculation should be 24 × 32.

That is my understanding as well.  I have 2 vCenters in linked mode, and one of them is DR.  Technically, adding that to my vRAM pool effectively doubles the numbers I posted above.

As for calculating the allocated vRAM, just add up the RAM configured on each guest.

Yeah, that's right. It is: "The vRAM entitlements of vSphere CPU licenses are pooled—that is, aggregated—across all CPU licenses managed by a VMware vCenter instance (or multiple linked VMware vCenter instances) to form a total available vRAM capacity (pooled vRAM capacity)."
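So for the 24-socket (10+4+10) question above, the cluster split is irrelevant; a quick sketch, assuming the Enterprise entitlement of 32 GB per CPU license:

```python
# Entitlements pool across all CPU licenses under one vCenter (or
# linked vCenters), so the cluster layout doesn't matter -- only
# the total number of licensed sockets does.
clusters = [10, 4, 10]       # sockets per cluster, DR included
entitlement_gb = 32          # assumed Enterprise entitlement per license
pool_gb = sum(clusters) * entitlement_gb
print(pool_gb)  # 768
```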

I didn't realise it covered linked-mode instances (we've got two vCenter servers covering two physical datacentres), so that actually increases our pool of licensed CPUs (and the vRAM available for our VMs).

The used vRAM figure I quoted was the RAM allocated when you edit the settings of a VM. I just dumped the list view of VMs in our datacentre into Excel and added up the column to give me a total.

We are not a large shop, but memory over-commitment was one of the major reasons for us to choose VMware as our core infrastructure.

No longer.

We are at 87% memory commitment (pRAM vs. vRAM) and looking to go higher.  We have many 2-CPU servers with 128 or 144 GB RAM, and now we need to buy extra licenses for these??  No.  Not gonna fly.

I agree that over-commitment is now going to be taxed. I had designed our VDI infrastructure around a 25% over-commit, but now it looks like I will have to precisely measure the vRAM allocated to the guests. Trouble is, will we be given the tools to tweak our VMs, or will we have to go third-party?

More importantly, how strict will the licensing be if I am building a dynamic VDI environment? Will that last user not get a VM because the license says I can't add another 768 MB guest when I am at 15.5 GB? Or will there be a grey area for transitional allocation?

I haven't seen anything on VDI licensing.  Is the VDI licensing changing as well?  Right now we are licensed per VM on the VDI side.

vRAM is unlimited in VDI as long as you are using the vSphere Desktop License.

http://blogs.vmware.com/euc/2011/07/vsphere-desktop-licensing-overview.html

@aladd thank you for the link. I was basing my design on old VDI information (this has been a while in the making). I apparently missed the changes to View licensing and it definitely changes my plans for our design.

My non-VDI infrastructure, though, will still need to be micro-managed. I stand by my concern about VMware-supplied tools that will track and recommend memory settings. Just as I would want to push CPU utilization toward 100% (no wasted cycles), memory seems to be next on the list.

@ChaosV, just be careful with driving for high CPU utilisation of the host at the expense of VM performance.

If all or most of your VMs have quite low CPU usage, you may max out the vCPU-to-pCore ratio, and performance will be terrible before you get anywhere near 100% CPU usage on the host.

It takes time (far less now than a few years ago, but it still takes time) to swap a VM in and out of action. If you have too many VMs per pCore, this idle time may make up 30 or 40% of the total time. Your pCore/CPU usage will look like it's quite low (as it's only doing actual work for 60 or 70% of the time), but VM response is terrible because the Host is spending a large chunk of time just swapping VMs in and out of the CPU/Cache space.

We had this problem in our lab environment, where most of the VMs are sitting idle and the environment was appalling to use. CPU was at 25-30% on the hosts and memory was at 60%. We added two more hosts and performance noticeably improved. This was about 5 years ago with the 2nd-gen AMD CPUs, and we had a 5:1 vCPU-to-pCore ratio. The achievable vCPU-to-pCore ratio has improved quite a bit with each CPU generation.
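The ratio being described is easy to sanity-check with a quick calculation (the host counts here are hypothetical, chosen only to illustrate the 5:1 ratio mentioned above):

```python
def vcpu_per_pcore(total_vcpus, hosts, sockets_per_host, cores_per_socket):
    """Consolidation ratio: vCPUs scheduled per physical core.
    High ratios can hurt VM response time even when host CPU
    utilisation looks low, due to scheduling/swap overhead."""
    pcores = hosts * sockets_per_host * cores_per_socket
    return total_vcpus / pcores

# Hypothetical lab: 80 single-vCPU VMs on 4 dual-socket,
# dual-core hosts -> the 5:1 ratio from the anecdote above.
print(vcpu_per_pcore(80, 4, 2, 2))  # 5.0
```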

Also, we used some old, just about to be decom'd hosts, for our DR a few months ago and noticed the performance drop in the VMs once we had about 25 VMs on a 2CPU server (Dell 2950.) Memory was fine, CPU was pretty good too, both with plenty of room to move.

Keep an eye on CPU instruction latencies/queues using PerfMon within the VMs.

Cheers!

"probably be better for both VMware and the customers" ??

Who are you thinking of here?

VMware has always marketed their superior cost of ownership as one of the biggest benefits, along with the flexibility a virtual infrastructure gives.

Up till now, people have implemented this in various ways, but have always gained a cost reduction compared to a physical infrastructure.

Now, this new model seems not to impact large-scale businesses and ASP/cloud providers in a very negative way. Maybe this is because they have more clearly defined vHW vs. cost/charge models, internally or toward the customer, that make this new license model less significant to their budget.

As a small customer, this is a killer for us. And I think that is what this discussion also shows.

We have a small ESX environment with 6-7 old hosts running about 50 VMs of all sizes. To gain further from virtualizing, we plan to reduce this to 3 large hosts with a lot of RAM. Right now this does not affect us directly, but to make it work we depend on keeping things "tight" between the ESX hardware and the VM hardware.

Our biggest gain from virtualizing is the fact that we could take "any host", without needing to care about its specific vHW requirements, and virtualize it, as long as we had the physical HW for it.

If we now have to rethink all this and reconsider whether every server has a virtualization gain or not, depending on its vHW requirements, that is the same for us as reconsidering the whole VMware/ESX environment against the alternatives.

We could now migrate all the big servers back to physical HW and keep all the small ones on Microsoft Virtual Server or something. Or VMware Server. Or Xen. Or whatever; we could even scrap most of them, arguing that a dedicated server is too costly.

So, I see that this isn't a big deal for everyone. Maybe some can even gain from this.

Now I hope those people also see that there is another side to this, and that it's mostly made up of smaller customers who actually have to consider changing platforms, if the smaller licensing models aren't made more beneficial.

As an example, maybe it could include a larger initial portion of vRAM, in addition to being incremented with every CPU, or something.

Br.

Wow, I must say...

Complaining did help.

As a negative bastard ;), I still fear what comes in the next version. But hey, who can predict the future anyway!

And with this change in mind, let's hope that they'll listen to customers next time too 🙂

Br.

Version history
Revision #: 1 of 1
Last update: ‎07-12-2011 09:45 PM