VMware Cloud Community
SuperSpike
Contributor

vSphere 5 Licensing

I took a minute to read the licensing guide for vSphere 5 and I'm still trying to pull my jaw off the floor. VMware has completely screwed their customers this time. Why?

What I used to be able to do with 2 CPU licenses now takes 4. Incredible.

Today

BL460c G7 with 2 sockets and 192GB of memory = 2 vSphere Enterprise Plus licenses
DL585 G7 with 4 sockets and 256GB of memory = 4 vSphere Enterprise Plus licenses

Tomorrow

BL460c G7 with 2 sockets and 192GB of memory = 4 vSphere Enterprise Plus licenses
DL585 G7 with 4 sockets and 256GB of memory = 6 vSphere Enterprise Plus licenses


So it's almost as if VMware is putting a penalty on density and encouraging users to buy hardware with more sockets rather than fewer.

I get that the vRAM entitlements are for what you use, not necessarily what you have, but who buys memory and doesn't use it?

Forget the hoopla about a VM with 1TB of memory. Who in their right mind would deploy that under the new license model? At 48GB of vRAM per license, it would take 22 licenses to accommodate! You could go out and buy the physical box for way less than that today, from any hardware vendor.
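If anyone wants to sanity-check my numbers, here's a quick Python sketch of the math as I read the licensing guide. The 48GB figure is my reading of the Enterprise Plus vRAM entitlement at launch, and the function name is just for illustration:

import math

def licenses_needed(sockets, vram_gb, entitlement_gb=48):
    # You still need at least one license per socket, plus enough
    # licenses that the pooled vRAM entitlement covers the memory
    # you actually assign to VMs.
    return max(sockets, math.ceil(vram_gb / entitlement_gb))

print(licenses_needed(2, 192))   # BL460c G7: 4 (was 2 under vSphere 4.x)
print(licenses_needed(4, 256))   # DL585 G7: 6 (was 4)
print(licenses_needed(2, 1024))  # a 1TB-RAM box: 22 licenses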

Anyone else completely shocked by this move?

@Virtual_EZ
pita_lbc
Contributor

This was a problem with my setup, and you can find the solution a few posts below:

----------------------------------------------------------------------------------------------------------

Just one "funny" setup with an Essentials Plus licence and its 192GB of vRAM:

Our customer has three two-socket servers with 128GB each (memory is cheap, so why not buy it?). It was an ideal customer for VMware Essentials Plus.

Two servers run production in an HA cluster and the third is in a DR site, where no virtual machine is running, just replicas of machines from the primary site.

        primary site                                        DR site

SRV1 <==== HA =====> SRV2   <----- Veeam Replication to DR site ----->   SRV03

And do you know how much memory we can assign to the virtual servers on the primary site? The answer is:

We can assign just 32GB of vRAM

And 32GB is supposed to be enough for a company with one hundred employees: one Exchange server, two SQL servers, and seven more servers for AD, DNS, security, and applications?

Interestingly, if I disable HA, I can use 128GB of vRAM on the primary site.

Thank you very much



GeelongRob
Contributor

I'm no expert, but this may be a config issue.

I am using Essentials Plus.

My primary site is similar: 2 dual-socket hosts:

SRV1 <==== HA =====> SRV2

vSphere HA state is "running" and vMotion Enabled is "yes"

My 'licensing | reporting' tab tells me I am using 32% of my vRAM (64GB of 192GB)

So, use that tab. Be sure to click the vRAM hyperlink in the left column of the Product Chart, then drag open the details pane at the bottom. This may give you an idea of what is going on.

JoshuaAndrewsVM

>pita_lbc

Sorry for your troubles, but something is not configured properly in your environment.  Did you have the DR VM added to the HA cluster?  Does it have 32GB of RAM on it? Did you have admission control and "allow 1 host failure" enabled?

That is the only scenario I can think of that would cause your issue.

Did you call VMware support?

pita_lbc
Contributor

Hello, thank you for responding.

SRV03 is out of the HA cluster because there is no shared storage between the primary site and DR. It is just connected to vCenter for management:

HA_conf.JPG

As I said, all servers have 128GB of physical RAM. There is no problem with the HA setup; the configuration finishes without any errors.

But this is what happens when I try to add memory to a server and power it on:

ha_res.JPG

JoshuaAndrewsVM

Please post a shot of the summary tab of the cluster.

Thanks

rickardnobel
Champion

Josh Andrews wrote:

Please post a shot of the summary tab of the cluster.

And also the output from the Advanced Runtime Info for HA.

My VMware blog: www.rickardnobel.se
pita_lbc
Contributor

OK, I think I see the problem, but why is it so limiting?

HA_summary.JPG

HA_ARI.JPG

JoshuaAndrewsVM

You have at least one VM with 8 vCPUs and one VM with a 12GB reservation. 

Change the admission control to % instead of # of hosts.

HA is ensuring you can power on all of the VMs on one host if that host fails. It does that (in # of hosts mode) by using a "slot" system. A VM slot is the largest CPU reservation of any VM combined with the largest RAM reservation of any VM (unless you change the advanced parameters). That slot size is used to determine how many "slots" the largest host can hold, and that number of slots is reserved across the cluster. Since you have 2 hosts, neither host can use more than 1/2 the slots on it - and per your screenshots the slots are 8 vCPUs x 12GB wide.

The "# of hosts" policy works well when all the VMs and their reservations are about the same size.

Change the admission control to % instead of # of hosts.
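If it helps, here is a rough Python sketch of the slot math as I understand it. This is simplified - real HA also adds per-VM memory overhead and computes a CPU slot from the MHz reservations - and the VM list is made up to roughly match your screenshots:

import math

# Simplified "Host Failures" admission control slot math.
# One big reservation sets the slot size for the whole cluster.
vm_mem_reservations_gb = [12, 0.2, 0, 0, 0]   # hypothetical VM list
hosts_mem_gb = [128, 128]
host_failures_to_tolerate = 1

slot_gb = max(vm_mem_reservations_gb)           # 12GB slot
slots_per_host = [math.floor(m / slot_gb) for m in hosts_mem_gb]
# Reserve the largest host's slots as failover capacity.
reserved = sum(sorted(slots_per_host, reverse=True)[:host_failures_to_tolerate])
usable_slots = sum(slots_per_host) - reserved   # 20 - 10 = 10

# Every powered-on VM consumes at least one whole 12GB-wide slot,
# even the one reserving only 0.2GB.
print(f"slot size: {slot_gb}GB, usable slots: {usable_slots}")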

Rumple
Virtuoso

PS – only use reservations if you really require them; otherwise let the operating system manage the RAM.

aroudnev
Contributor

This is why we did not migrate to VMware 5 and do not plan to do it for (at least) the next 3 years.

JoshuaAndrewsVM

>This is why we did not migrate to VMware 5 and do not plan to do it for (at least) the next 3 years.

Slot sizes are not new, or even changed, for vSphere 5. "Host Failures" is the only admission control policy that did not functionally change during the HA rewrite for 5: "% of resources" was separated into memory and CPU, and "Specify failover hosts" now supports more than one host.

rickardnobel
Champion

aroudnev wrote:

This is why we did not migrate to VMware 5 and do not plan to do it for (at least) the next 3 years.

These specific workings of HA are exactly the same in vSphere 4.x, so a misconfigured HA Admission Control setting is the reason for not migrating to vSphere 5? It might take less than three years to configure it properly. Smiley Happy

My VMware blog: www.rickardnobel.se
bilalhashmi
Expert

aroudnev wrote:

This is why we did not migrate to VMware 5 and do not plan to do it for (at least) the next 3 years.

Like the others have mentioned, this has not changed much. However, with vSphere 5 there have been improvements. In vSphere 4, when using percentages, you were forced to use one percentage for both CPU and MEM. In vSphere 5 you can set separate percentages for the two kinds of resources. This gives you more flexibility.

As others have mentioned, it's important to understand the implications of the choices you make in your design. I wrote a post about this very issue a while back which might help you understand how admission control settings can impact your design: http://www.cloud-buddy.com/?p=1176

Blog: www.Cloud-Buddy.com | Follow me @hashmibilal
GeelongRob
Contributor

... if you require them.

I couldn't agree more. With this very small number of servers (as per the diagram), why complicate things unnecessarily?

Ours is a similar config - same hosts and processors, less memory, a few more VMs (doing much the same thing, judging by the naming convention).

I have never needed to touch reservations, given the resources available - something valuable I learned from the 5-day VMware 5 course.

GeelongRob
Contributor

Are you using Essentials Plus?

If you have a small/moderate number of VMs, I can't imagine why you would hold off.

The ability to live migrate a VM from one host to another (host only, not the datastores) makes it worth it.

Now I can choose a quieter time, migrate the VMs off one host, apply the "reboot-required" VMware patches, migrate all machines the other way, repeat the patches, then rebalance the load - and no one suffers any disruption or downtime.

Our Exchange 2010 server is the only one I actually "take down".

JoshuaAndrewsVM

Note that vMotion became available with Essentials Plus starting with vSphere 4.1 - a $10k option suddenly free!! Whoo hoo!  Smiley Wink

Dracolith
Enthusiast

Sorry, what you are experiencing with your HA error messages is a configuration thing, not a licensing issue; you could be licensed at Enterprise+ and you would still have the same problem. You potentially need to look at your resource pools and virtual machines and massively reduce your largest memory reservations, so the memory slot size shrinks and you can power on more VMs.

With 128GB in each of 2 hosts and a 192GB maximum vRAM entitlement, there is not much memory overcommitting to be done here. Assuming you have multiple VMs and maintain an appropriate cluster balance (some VMs on each host), a host should only reach 95% of 128GB usage after a failover.

Remember: if you have 12GB reserved on a VM running on host X, then for this VM to fail over successfully, there must be 12GB of memory PLUS overhead left UN-reserved on some other host in the cluster. In other words, the cluster must have 24GB of memory in total for each VM with such a reservation. If only 11.9GB of RAM is left un-reserved on the failover host and the primary fails, HA will be unable to power on the failed VM, and the VM will remain down.

So a VM that reserves only 0.2GB of RAM could otherwise prevent a VM with a 12GB reservation from failing over successfully. To track this and make sure it doesn't happen, the total system memory is divided by the largest reservation to calculate a number of "slots", and each VM uses a whole number of slots. Even though the VM with the 0.2GB reservation has a small reservation, it does have a reservation, therefore it requires a minimum of 1 slot.

You can go into the cluster's advanced settings and change your failover settings to a percentage, but this is much less conservative than the slot-based approach: VMs with large memory reservations may be at risk of failing to fail over successfully.
pita_lbc
Contributor

Thanks all, it is OK now.

We have to use the reservation because of Navision and its native database. That system has performance problems when the memory is not preallocated.

Thanks a lot. Smiley Happy

J1mbo
Virtuoso

The reservation only does anything when the host is short of resources. So understand that it can negatively impact everything - most likely including the VMs with the reservation - since it necessarily increases paging load elsewhere and hence increases storage latency.

pita_lbc
Contributor

I am not solving performance problems - you can see our setup several posts above.

The main reason for preallocation is the Navision native database service, which has quite "stupid" memory management. I wouldn't use memory preallocation if I didn't have to.

I was solving the licensing and, in the end, the configuration issues of the customer's setup Smiley Happy

Thanks all for answers.

------------------------------------------------------------------------------------------------------

But still, there is this irritating memory limitation. I don't understand it. I would have supposed that, given the competition, VMware would loosen the RAM/vRAM limits. Instead they just made them tighter Smiley Sad .

Memory is one of the cheapest parts of a server today, and every two-socket server can hold more than 64GB of RAM. Why should I buy Enterprise licences for three servers in the small-business segment, just because of memory??? I would not use the functions of the Enterprise version; I just need memory and HA.

VMware produces such good products that our customers are ready to pay for them, even compared to other "free" products. But it cannot be allowed to kill our budgets. I can push through software at about 10-20% of the hardware price, but not 40-60%!!!

Thank you very much and have a nice day

Petr Janousek
