If I have 2 resource pools, one set to high shares and the other to medium, with no reservations or limits defined, when would they actually be competing for resources? Would it be when the host is running low on resources? If so, what is there to compete for? Whatever the host has left to allocate? I'm trying to get a good understanding of this.
Also, would it make sense to set resource pool limits and reservations in a cluster whose hosts have mixed amounts of CPU and memory? Let's say I have 2 hosts with 64 GB of RAM and (2) 3.2 GHz quad-core processors, and 2 more with the same processors but half the RAM. If I were to lose a 64 GB host, wouldn't I be risking the stability of the resource pools by defining limits and reservations?
The share values only kick in when the host is experiencing resource contention. In that situation, the pools designated high get first priority on the resources they need, while the medium-designated pools fight over whatever is left.
They would be competing if / when the host can no longer provide 100% of the allocated resources.
For simplicity's sake, say you have a host with 100 MHz of CPU capacity (yes, I know that's small, but it makes the math easy). You create 2 VMs, each with no limits, 1 vCPU, and shares as you describe. If the total MHz used by the 2 VMs together doesn't exceed 100 MHz, everything is kosher and no one loses.
However, let's say that now they each want 100% of the CPU. Obviously they can't both have it, and this is where shares kick in. Once they do, it's basically like a rigged lottery, with the portions of the winnings controlled by the shares.
So if you had 1 high and 1 medium, the total shares would be 3000 (using the default values of 2000 and 1000), and so the 'high' VM would get about 67% of the contended CPU (2000/3000 shares) and the medium VM about 33% (1000/3000 shares). The same shares calculation is applied to any resource (CPU, memory, disk) that is under contention.
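To make the lottery concrete, here's a minimal sketch (not VMware code, just the proportional math) of how a contended resource gets divided by share values. The function name and the 2000/1000 default share values for high/medium are the ones discussed above:

```python
def entitlements(shares_by_vm, capacity):
    """Split `capacity` among VMs in proportion to their share values."""
    total = sum(shares_by_vm.values())
    return {vm: capacity * s / total for vm, s in shares_by_vm.items()}

# One 'high' VM (2000 shares) and one 'medium' VM (1000 shares)
# fighting over the 100 MHz host from the example:
print(entitlements({"high_vm": 2000, "medium_vm": 1000}, 100))
# high_vm ends up with ~66.7 MHz, medium_vm with ~33.3 MHz
```

The same split applies to whichever resource is actually under contention; shares do nothing while there's enough to go around.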
As for your second question: mixed clusters do add a wrinkle, because, as you suggest, the shares will result in different amounts of effective CPU/memory/disk resources depending on the host. If you need to guarantee a minimum level of service, that's what a reservation is for.
Reservations should be used carefully.
Having said that... in your case, if there is some application which needs a resource guarantee, then you will have to go for reservations, but you will have to make sure that your HA/DRS constraints are still met. If you lose the 64 GB host then you are at risk, but in that situation you are still providing resources to the top-tier app; the rest of the apps will have to fend for themselves.
So it pretty much comes down to deciding which apps are your high-priority ones.
Basically, when your host runs out of resources the shares kick in and give a guest priority to memory or CPU depending on your resource pool settings. Let's say you have two pools set up with one system in each, one with 8000 CPU shares and one with 4000 CPU shares. Your host is pegged for CPU and the guests are asking for resources. The machine in the pool with more shares would basically get double the CPU time. It gets complicated when you put resource pools inside resource pools, and the more guests you have in a pool, the more thinly those resources are spread.
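A rough sketch of that two-level split (my own simplification, not ESX internals): the host's capacity is first divided between pools by pool shares, then each pool's slice is divided among its guests. The 10,000 MHz host capacity, guest names, and equal per-guest shares within each pool are all assumptions for illustration:

```python
def split(capacity, shares):
    """Divide `capacity` proportionally to the given share values."""
    total = sum(shares.values())
    return {k: capacity * v / total for k, v in shares.items()}

pools = {"pool_a": 8000, "pool_b": 4000}
guests = {"pool_a": ["sql01", "web01"], "pool_b": ["fax01"]}

pool_slices = split(10000, pools)  # hypothetical 10,000 MHz host
for pool, mhz in pool_slices.items():
    # assume equal shares per guest within a pool
    per_guest = split(mhz, {g: 1 for g in guests[pool]})
    print(pool, per_guest)
```

Note how pool_a's two guests each end up with less than pool_b's single guest, even though pool_a has double the shares; that's the "more guests dilute the pool" effect mentioned above.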
Personally, I think it's still a good idea to create resource pools if you have hosts with different amounts of resources. Either way, you still may want to make sure that certain machines get resource priority over others. You may have a SQL server that is business critical and, say, a fax server that is used twice a day. It'd be smart to give that SQL server access to the resources it needs and let the fax server suffer during the outage. Assuming you have HA and DRS set up properly, you can set low-priority guests to shut down in the event of a failure.
It can be quite confusing, and I'm no expert, but that's how I understand it. Duncan Epping has some great articles on resource pools over on Yellow Bricks. Check here or search his site; lots of great info out there.
So, if I understand correctly, a resource contention state occurs when there are no more resources available on the cluster or host. And, instead of trying to share the little available resource left on the cluster/host, it instead takes resources from the VMs on the cluster/host and divvies them up based on the share values?
Yes, the shares go into effect when the host is maxed out on resources. It doesn't really take resources from the guests, but it divides up the resources available on the host according to the shares. The guest must also be located on the host it is getting its resources from.
Keep in mind that if you set up a reservation for a guest, the host will not take from that reserved amount. So if you hard-set a reservation of 1 GB of memory for one of your guests, that guest is guaranteed the full GB.
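In terms of the earlier share math, a reservation acts as a guaranteed floor. Here's a hedged sketch (a simplification of my own, real ESX also weighs actual demand): reserved amounts are carved out first, and only the remainder is divided by shares. The guest names, share values, and the 4 GB host are assumptions for the example:

```python
def allocate(vms, capacity_mb):
    """vms: {name: (reservation_mb, shares)} -> {name: allocation_mb}.
    Reservations are honored first; the leftover is split by shares."""
    reserved = {name: r for name, (r, _) in vms.items()}
    remaining = capacity_mb - sum(reserved.values())
    total_shares = sum(s for _, s in vms.values())
    return {name: reserved[name] + remaining * s / total_shares
            for name, (_, s) in vms.items()}

# A 4 GB host under memory contention: "db" has a hard 1 GB
# reservation plus 2000 shares, "app" has no reservation and 1000.
print(allocate({"db": (1024, 2000), "app": (0, 1000)}, 4096))
```

Whatever the shares say, "db" can never be squeezed below its 1024 MB reservation; that's the guarantee the reservation buys you.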
When you say "when the host is maxed out on resources," do you mean when the host has allocated all of its resources, or when it's actually using them? Because if it's using all of its resources, how can it manage resources it does not have? This is where I'm stuck.
When it's actually using them.
Don't forget that ESX has a scheduler that's deciding what to give each VM every few milliseconds. So, going back to my earlier example, if 2 VMs want *all* the resources, the scheduler decides how to divide them:
So, in the case above, if VM A wants all of the cycles of the host and so does B, ESX will only give A about 67% of the cycles on the host and give B about 33%. That adds up to 100% of the host's resources.
It's not managing resources it doesn't have; it's limiting access to the existing resources in an appropriate way (as deemed by your shares choices) so that every VM gets what it can.
Shares will only be used when you have more demand than you have actual resources available.
VMware will limit the amount of resource presented to VMs that are in the resource pool with fewer shares.
If you think about a normal physical server, it can only deliver as much CPU/memory as it has available. ESX simply limits the amount of resource that it presents to the VM, and the OS is throttled, freeing up resource for other VMs.
Excess memory will be written to disk (which the guest OS will not be aware of, so it continues to operate as usual).
CPU will simply be scheduled on higher-share VMs more regularly, meaning the low-share VM simply has to wait longer for access to a CPU.