VMware Cloud Community
PAckermann
Contributor

PAYG with reservation possible?

Hi,

we have the following use-case:

We offer two different models for our vCloud:

Pay as you go -> no guarantees for any resource

Allocation Pool -> pre-reservation for RAM and CPU + able to burst

The problem with the allocation pool is that I have to specify an upper boundary and a percentage that is guaranteed for vRAM. From the perspective of vCenter, this results in a resource pool that has an amount X reserved and an upper boundary of Y. The problem with this is that the upper boundary counts as reserved capacity. To put some real numbers on it, here's an example:

in vCD:

Allocation Pool1: 16GB with 50% guaranteed -> resulting in 8GB guaranteed vRAM + 8GB burstable

Allocation Pool2: 32GB with 50% guaranteed -> resulting in 16GB guaranteed vRAM + 16GB burstable

With these two pools I would reserve 24GB of RAM for my customers, but vCenter sees this as 48GB reserved.
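
Just to lay out the arithmetic behind those numbers, here is a small Python sketch (pure arithmetic with the figures from this post, nothing vCD-specific):

# Arithmetic behind the example above (numbers as quoted, no vCD/vSphere API).
pools = {
    "AllocationPool1": {"allocation_gb": 16, "guarantee_pct": 0.50},
    "AllocationPool2": {"allocation_gb": 32, "guarantee_pct": 0.50},
}

guaranteed = sum(p["allocation_gb"] * p["guarantee_pct"] for p in pools.values())
upper_bounds = sum(p["allocation_gb"] for p in pools.values())

print(f"Sum of pool guarantees (reservations): {guaranteed:.0f} GB")    # 24 GB
print(f"Sum of pool upper boundaries (limits): {upper_bounds:.0f} GB")  # 48 GB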

What we experience with this configuration is that PAYG users can't start their VMs even if there is enough RAM free in the cluster. The problem for the allocation-pool users is that they don't have real burstable resources, as they are limited to a specific value. What we need is some kind of PAYG with reservation, so that a user could get a pre-reservation for their vDC and would still be able to burst resources up to whatever is physically available. Also, in vCenter only the really reserved resources should count towards the reserved capacity of the cluster. Is this possible as of now, and if not, is it on the roadmap?

13 Replies
admin
Immortal

The actual answers here are fairly complex. Frank Denneman and I have been working on a white paper that discusses much of this very issue. The real challenge is that you are not reserving vRAM; you are reserving PHYSICAL memory in the cluster. The 8GB of physical RAM you are reserving with a 16GB allocation means the pool only has 8GB of reservation within it. However, I am not sure why vCenter is showing 48GB reserved; I have not seen that.

The other key thing to remember is that your virtual machines in that model are ALSO set to a 50% reservation PER virtual machine. Each virtual machine also requires reserved overhead in order to power on, and the Allocation Pool model is not set to expandable reservation, so once the reserved space is used up you run into the issue. You cannot change the per-VM settings in this model either. Lastly, admission control is the other factor not allowing these virtual machines to power on based on the reservations.

For your purpose, using Reservation Pool may be the way to go. With that model you set the reservation and the limit of the pool to be equal. You can then do exactly what you want by controlling the per-VM reservation settings within that pool. This allows you to control the reservations and overcommitment of that pool, and you control your own opportunistic/burstable space on a per-VM level. You are correct that PAYG customers will not be able to start VMs: with EITHER Reservation or Allocation, the reservations for those pools are pulled out of the cluster's available resources so only those pools have access to them, which is just how resource pools with non-expandable reservations work.

The other way to go is PAYG with memory reservations, which will assign every VM in that model a per-VM reservation; that pool is set to expandable, so it can use the total resources available in the cluster. I suspect at the moment some of the settings have actually painted you into a corner, in a sense. We are trying to get this paper out soon.
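
To make the admission side of that concrete, here is a minimal sketch (plain Python with a made-up cluster size, not a vSphere API) of why non-expandable pool reservations eat into what is left for PAYG power-ons:

# Minimal sketch with assumed numbers (not a vSphere API) of how non-expandable
# pool reservations reduce what an expandable PAYG pool can draw from the cluster.

cluster_memory_gb = 64.0                        # hypothetical cluster capacity

# Non-expandable pools (Allocation/Reservation) take their reservation up front.
non_expandable_reservations_gb = [8.0, 16.0]    # e.g. 16GB@50% and 32GB@50%
reserved_up_front = sum(non_expandable_reservations_gb)

# What remains for an expandable (PAYG) pool to satisfy per-VM reservations
# plus per-VM overhead at power-on time.
available_for_payg_gb = cluster_memory_gb - reserved_up_front
print(f"Left for PAYG admission: {available_for_payg_gb} GB")   # 40.0 GB here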

PAckermann
Contributor

Now I'm a little bit confused... all settings in vCD state limits for this Org vDC, so I assumed they are valid for the resource pool, but you are saying these limits are per VM? This doesn't really make sense to me. If it works this way, I would never be in control of the physical resources. I want to charge my customers if I pre-reserve physical resources for them, plus charge them if they actually utilise them. As I never know how many VMs they will provision (that is what cloud is about, right?), I am not really able to charge them properly, am I?

admin
Immortal

The terms are confusing indeed. In that model the "Allocation" equates to the limit on the resource pool. The "% Guarantee" is the reservation used on the pool AND on any virtual machine in that vDC. Yes, the Allocation Pool model sets limits AND reservations PER virtual machine, on memory only, based on the vDC's % Guarantee. See the article I wrote showing similar screenshots.

http://www.chriscolotti.us/vmware/vcloud/vcloud-allocation-models/

So in your example of a 16GB allocation and 50% guarantee, you get a pool with a 16GB limit and 8GB reserved. That also means the pool's "Available" reservation is also 8GB. If you deploy a VM with 4GB of memory (its limit), it too will get 50% (or 2GB, specific to that VM) reserved.

This will reduce the available reservation in the pool by 2GB, as you can see in the screenshots.

Once you use up the available reservation powering on virtual machines of, say, 4GB each, they can use the opportunistic space to get the rest of their memory. Once they use that up, swapping and ballooning will occur.
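
To put that drawdown into numbers, here is a quick sketch (plain arithmetic with the 16GB / 50% figures from above; per-VM overhead is left out to keep it readable):

# Drawdown of an Allocation Pool vDC's available reservation as VMs are
# deployed. Uses the 16GB / 50% example; per-VM overhead is ignored here.

pool_limit_gb = 16
guarantee = 0.50
available_gb = pool_limit_gb * guarantee            # 8 GB of pool reservation

for i, vm_memory_gb in enumerate([4, 4, 4, 4], start=1):
    per_vm_reservation = vm_memory_gb * guarantee   # vCD applies 50% per VM too
    available_gb -= per_vm_reservation
    print(f"VM{i}: reserves {per_vm_reservation} GB -> {available_gb} GB left")

# After four 4GB VMs the 8GB of available reservation is spent; memory beyond
# that comes from the opportunistic space up to the 16GB pool limit, and past
# that ballooning/swapping kick in.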

In the case of PAYG, all the VM reservations are pulled from the parent cluster since the pool itself is set to expandable reservation. This is really deep, hard-core knowledge of how resource pools and VMs with reservations and limits work. vCD simply creates the objects with the settings, but vSphere has always worked the same way under the covers.

Hence why Frank and I are 30 pages into the white paper on this topic :)

You are able to charge them properly; the key is creating their Org vDC based on the requirements they think they will need. If they have no idea how many VMs they will use, then PAYG may be best. If they also say they need memory resources pre-allocated, then PAYG with guarantees. You have to work with the consumer to make sure you give them the space that will work for them.

If they just want a lump of resources and want to control the per-VM reservations, Reservation Pool is the best way to go, along with a TOTAL virtual machine cap on the pool to prevent them from completely over-allocating their vDC if they set no per-VM reservations. Then they pay for the lump of reserved resources the same each month regardless of whether they use/allocate them all.

I have always said networking and the allocation models are the key things to really know for a vCloud design :)

PAckermann
Contributor

OK, I think I now understand how PAYG and Allocation Pool work, but I'm still confused about Reservation Pool. What exactly is reserved here? If I configure a RAM allocation of, say, 8GB, is this for every single VM that will be started, or is this the complete reservation for the pool? If it is the latter (I hope so), this would be my requested PAYG with reservation, right? AFAIK I can't change the type of a vDC (from Allocation Pool to Reservation Pool...). How could I then move my vApps from one vDC to another? Would I have to use vCloud Connector for this? Or would it be more like: make a template out of the vApp, delete the vApp and deploy it from the template to the new vDC?

EDIT: Found how to move vApps to another vDC...

admin
Immortal

Reservation Pool is an easy one:

Allocation = 16GB ram

The resource pool gets configured with a 16GB limit AND a 16GB reservation, non-expandable, on the pool. This means the available reservation is in fact 16GB on the pool.

Then the consumer can edit each VM and control the per VM resources themselves.
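
Expressed as numbers (just a sketch of the mapping described here, with hypothetical VMs, not actual vCD or vCenter output):

# Sketch of how a Reservation Pool vDC maps onto the vSphere resource pool,
# per the description above (illustrative values, not real vCD output).

allocation_gb = 16

resource_pool = {
    "limit_gb": allocation_gb,          # limit == allocation
    "reservation_gb": allocation_gb,    # reservation == limit
    "expandable": False,                # non-expandable
}

# Per-VM reservations are NOT set by vCD in this model; the consumer chooses
# them, for example (hypothetical VMs):
vms = [
    {"name": "db01",  "memory_gb": 8, "reservation_gb": 8},   # fully reserved
    {"name": "web01", "memory_gb": 4, "reservation_gb": 0},   # opportunistic
]

available_gb = resource_pool["reservation_gb"] - sum(v["reservation_gb"] for v in vms)
print(f"Available reservation left in the pool: {available_gb} GB")   # 8 GB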

It's all in my article explaining the settings. The pending white paper will go much deeper:

http://www.chriscolotti.us/vmware/vcloud/vcloud-allocation-models/

vApps just need to be powered off, and a new Org vDC created for the org, so they can be moved to the new vDC assigned to that org. However! Moving to a Reservation Pool will MAINTAIN any per-VM settings applied by the PAYG or AP models. I have that pointed out in the article as well.

PAckermann
Contributor

Thanks for the link. So in Reservation Pool the reservations are per vDC (pool), not per VM in that vDC. And as a consumer I am also able to burst beyond these limits if the physical resources are available in the cluster. So for example:

If my Reservation Pool vDC has 8GB vRAM reserved, I am able to provision and start vApps with a total of, say, 64GB as long as the RAM is physically available in my cluster.

Sorry for asking this over and over, but I have to get this 100% clear. We are already in production with our vCloud, but there are not that many customers yet, so right now we have a chance to fix this quite easily. If we have some 100 customers, things can get rather complicated ;)

admin
Immortal

Not quite. The CONSUMER can change the per-VM settings of the virtual machines in that pool, but the vDC settings are set on the pool itself. You can read this as well in the article. This is also something you should configure and see for yourself; it's fairly easy to understand if you set it up in vCD and then look at vCenter. The trick again is that if you MOVE a vApp from PAYG or AP to Reservation Pool, whatever per-VM settings were assigned by those models will carry over to the Reservation Pool on the VM.

PAckermann
Contributor

Again confusing... I played around a little in vCD and created a Reservation Pool vDC with an allocation of 8GB RAM. I then provisioned a VM with 16GB vRAM and could start it without problems... so to me this looks like I have a fixed reservation of 8GB for this vDC but no real limit (except the physical ones) on using more... Or will I have problems when the VM really utilises more than 8GB RAM?

admin
Immortal

It will start because you did not set any per-VM reservations. This means the VM only needed the overhead reservation to boot, and it could consume up to 8GB of actual RAM, then start swapping for the other 8GB. VMs need overhead reservations to boot, and a 16GB VM needs about 512MB of reserved pool memory to start, assuming it is 2 vCPU / 16GB RAM. If nothing calls for the rest of the 16GB, then it will not swap.

You do have an 8GB limit on the pool itself, but the VM is not set to reserve any memory except the required overhead. This means the VM will start and swap to get to 16GB of RAM, BUT you have not reserved any of the 16GB for the VM itself, so it has no guarantee. The pool is limited, but if no VMs have reservations set then yes, you can overcommit; the user controls the overcommitment, however if they do, they will hit your storage for swapping. Booting requires overhead PLUS the VM-specific reservations, which most people forget about.

If using the Reservation Pool, then the consumer... or you... need to set the per-VM reservations. Eventually there will not be enough reserved overhead for more VMs to boot. This is also where adding a limit on the total number of VMs comes in handy, to prevent too many VMs in the pool. "Limit" does not mean the customer is limited in what can power on unless you set reservations on the VMs. Set that 16GB VM to 16GB reserved and it will not power on, as you will exceed the total when you include the reserved overhead.

As Frank always says, this is why limits DO NOT equal reservations; one does not control the other. Yes, it is true that VMs in a Reservation Pool with no per-VM reservations will power on, and not until the total overhead consumes all the available reservation will the next one fail to power on. Using Reservation Pool requires someone to set per-VM reservations through vCD.

I'd suggest you really read some of the deep-dive information on vSphere resource scheduling as well as the vSphere Resource Management Guide. Leave vCD out of it; it is all about how vSphere manages pools, reservations, limits, and overhead reservations. This is pure vSphere resource allocation stuff; you need not use vCD to learn how vSphere reservations work. The nature of vSphere is to always overcommit unless you set reservations on each VM. This is why the other models do that for you, whereas with the Reservation Pool model you need to control it yourself.

Some common sense should dictate that if you have an 8GB pool and you put in a 16GB VM, even with a 7.5GB reservation the VM will start (to account for overhead)... however it will swap, based on the memory limit of the pool, to get up to its 16GB since there is no place else to get physical RAM. Once you do that, no other VMs will start since there is no more available reservation in the pool. The danger with the Reservation Pool is that it can easily be overcommitted, which is why you can set a maximum number of VMs. You should also ensure that your customer does not create things that are bigger than their available pool of resources, or let them know they might see swapping and performance impacts. There is some level of knowledge transfer with the more advanced configurations.
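
Here is a rough sketch of that power-on check (the ~512MB overhead figure is the rough number quoted above; real overhead varies with vCPU count and memory size):

# Rough sketch of the admission check for powering on a VM in a non-expandable
# 8GB reservation pool: the VM's reservation plus its memory overhead must fit
# in the pool's remaining available reservation. Overhead is an approximation.

def can_power_on(vm_reservation_gb, overhead_gb, pool_available_gb):
    return vm_reservation_gb + overhead_gb <= pool_available_gb

pool_available_gb = 8.0      # fresh 8GB reservation pool, nothing deployed yet
overhead_gb = 0.5            # ~512MB for a 2 vCPU / 16GB VM (rough figure)

# 16GB VM, no per-VM reservation: only the overhead must fit -> powers on,
# but anything above the 8GB pool limit is satisfied by swapping under load.
print(can_power_on(0.0, overhead_gb, pool_available_gb))    # True

# Same VM with a 7.5GB reservation: 7.5 + 0.5 = 8.0 still fits -> powers on,
# but the pool's available reservation is now exhausted for further VMs.
print(can_power_on(7.5, overhead_gb, pool_available_gb))    # True

# Same VM fully reserved at 16GB: exceeds the pool -> will not power on.
print(can_power_on(16.0, overhead_gb, pool_available_gb))   # False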

This is why most people do PAYG: because it is easy, and everything gets a set amount per VM. What requirements are driving them to the other models? If it is that they want control, so be it, RP works. Allocation models should be picked based on the consumer's functional requirements for their VMs. If they just don't care about performance or the number of VMs, use PAYG with a 0% guarantee. If they want the best performance and no limit on the number of VMs, use PAYG with a 100% guarantee. There is no cookie-cutter solution for every customer.

PAckermann
Contributor

Thank you again for your detailed answer.

So at the moment my options are quite limited. Let's see if I can get it sorted right:

- If I want to utilize my physical resources properly, allocation pool is not an option for me

- using reservation pool would allow the consumer to overcommit, but it could break my storage because of swapping

- using PAYG with 0% reservation could have the same problems as with reservation pool

- using PAYG with 100% reservation would reserve 100% of the resources of a running VM but won't pre-reserve any resources for the pool

As we don't want to offer our customers any poorly performing VMs (overcommitment, swapping, ...), I can only offer PAYG with 100% reservation at the moment and I have to drop any pre-reserved pools. So my original question is still the same: are there chances that we can get a PAYG model with an additional pool reservation?

admin
Immortal

Allocation Pool with 100% guarantee is effectively what you are asking for: you will have 100% of the pool reserved AND 100% of the per-VM memory reserved for every VM deployed. Configure it that way and you will see you just end up with no burstability space... but why would you need it then, if everything is 100% reserved, right? :) There is NO per-VM CPU setup though... just memory.

No, there is no such thing today as PAYG with pool-level reservations, and unfortunately I would not be able to comment on any future possibilities.

PAckermann
Contributor

We have now done some additional testing with Reservation Pool, and now my confusion is absolutely perfect:

The vDC is configured as a Reservation Pool with 8GB RAM. We have provisioned two Linux VMs, each configured with 16GB RAM and 1 vCPU. We could start both without problems (so far I understand why; they don't use the configured RAM yet). After the VMs came up, we installed "stress" and started it on both with the following parameters: stress -m 4 --vm-bytes 6G

This results in a stress test where 4 processes on each VM start to fill the RAM with 6GB each. We monitored this via htop, which showed us that in both VMs there were about 13GB of RAM in use, each. So a total actual usage of about 26GB RAM + overhead in a reservation pool of 8GB. To make the confusion perfect, both vCD and vCenter show us a usage of 2.31GB! We also checked on our NetApp filer whether there was additional load because of swapping... nothing, the filer is bored: no additional IOPS, no increased latency, no additional reads or writes to disk, so no swapping at all.

We will try this out with an Allocation Pool next week, but could you try to explain the behaviour described above? Somehow it is completely different from what you have written so far.

admin
Immortal

Did you look at the vSphere metrics for ballooning, swapping and sharing? Were there any per-VM reservations on the two VMs? What does the resource tab show for available and used reservations on the pool? What are the charts for ballooning, swapping and SHARED memory showing? vSphere manages memory in multiple ways when there are no reservations set per VM to specifically grant it physical memory.

You are not accounting for all the things vSphere does to handle memory and keep the 8GB pool under wraps. It may still have reserved memory available if there are no per-VM reservation settings. You need to keep playing with it, but this is working as expected, and eventually you will see these things under load AND contention.
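
As a purely back-of-the-envelope illustration (every figure below is an assumption; the real answer is in the shared/ballooned/swapped counters mentioned above): if the stress workers write largely identical page content, transparent page sharing alone could explain why ~26GB of guest "usage" shows up as only a couple of GB of consumed host memory, with no swapping needed.

# Back-of-the-envelope sketch: highly shareable guest pages (e.g. identical
# fill patterns) let transparent page sharing keep consumed host memory far
# below what the guests report. Every number below is an assumption for
# illustration only; check the vSphere shared/ballooned/swapped charts.

guest_touched_gb = 26.0        # roughly what htop reported across both guests
shareable_fraction = 0.92      # assumption: most pages have identical content

shared_savings_gb = guest_touched_gb * shareable_fraction
consumed_gb = guest_touched_gb - shared_savings_gb

print(f"Host memory actually consumed: ~{consumed_gb:.1f} GB")   # ~2.1 GB
# As long as consumed memory stays under the 8GB pool limit, no ballooning or
# swapping is required, which would match the idle storage array observed.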

At this point I think you have more than enough information to keep testing and decide which models work best.
