VMware Cloud Community
callmeseb
Contributor

[Clustering] Single VM spanning across multiple physical hosts?

Folks,

Is it possible to allocate more resources to a VM than a single ESX host has, in order to build one big VM out of many physical hosts?

For instance, I have two ESX hosts with 2 GHz and 16 GB DRAM each.

Would it be possible to run a SINGLE VM that is granted 4 GHz and 32 GB DRAM? Or maybe 4 GHz and 20 GB DRAM (to "borrow" some resources from the other host)?

I'm reading about the clustering/DRS technology in vSphere, and it isn't stated explicitly that this is possible. The references are always to ESX hosts - dynamically relocating a VM from one host to another, and so on. DRS introduces a "global" scheduler, but I'm not sure whether it works that way.

Any good links are much appreciated!

Sebastian

11 Replies
MKguy
Virtuoso

You can indeed overcommit memory, which means allocating more memory to a single VM, or to all VMs combined, than you actually have physically available on the host. Transparent page sharing, memory compression and ballooning can help achieve this with little or no performance impact.

However, be aware that if the VM actually needs all this memory actively, ESX must resort to swapping to disk, which is a huge performance killer.
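To put rough numbers on that (a purely illustrative Python sketch with made-up figures, not anything VMware-specific):

# Illustrative overcommit arithmetic (hypothetical numbers, no VMware API involved)
host_physical_gb = 16                      # physical RAM in the host
vm_configured_gb = [8, 8, 6]               # RAM configured for each VM on that host

total_configured = sum(vm_configured_gb)   # 22 GB promised to the VMs
overcommit_ratio = total_configured / host_physical_gb
print(f"{total_configured} GB configured on {host_physical_gb} GB physical "
      f"-> overcommit ratio {overcommit_ratio:.2f}")

# If every VM actively touched all of its memory at once, the shortfall would
# have to be covered by page sharing, ballooning, compression and finally swapping:
shortfall = max(0, total_configured - host_physical_gb)
print(f"Worst-case shortfall to reclaim or swap: {shortfall} GB")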

Read this excellent paper on memory management: http://www.vmware.com/resources/techresources/10129

You can't give a VM more CPU GHz than the host actually has, though. Your example said "hosts with 2 GHz" - I suppose you mean that's the clock speed of each individual CPU core and not the total clock sum of all physical cores, right? Each vCPU of a VM runs on a single logical CPU of the host system, so if you have a host with 4 physical cores at 2 GHz each and you want to assign 4 GHz to a VM, you will have to provide it with 2 vCPUs.

Take into consideration, though, that if your application does not take advantage of multithreading, it will of course only really be able to use the 2 GHz of a single core.
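As a quick sanity check of that arithmetic (illustrative Python only, using the numbers from this thread):

import math

core_clock_ghz = 2.0     # clock speed of one physical core in the example
target_ghz = 4.0         # CPU capacity the VM should be able to use

# A vCPU can never run faster than one logical CPU, so reaching the target
# takes several vCPUs - and the guest workload must actually be multithreaded.
vcpus_needed = math.ceil(target_ghz / core_clock_ghz)
print(f"{vcpus_needed} vCPUs needed for {target_ghz} GHz at {core_clock_ghz} GHz per core")  # -> 2

# A single-threaded application is still capped at one core:
print(f"Single-threaded cap: {core_clock_ghz} GHz")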

Read this excellent paper on the ESX CPU scheduler:

http://www.vmware.com/resources/techresources/10131

I highly recommend reading both of these papers thoroughly, if you want to understand how this stuff works.

-- http://alpacapowered.wordpress.com
callmeseb
Contributor

Thanks a lot. These are all good explanations. My context, though, was clustering - therefore I edited the topic.

My concern is how clustering works: whether we can really abstract ourselves from the physical hosts by creating clusters and resource pools. If we really can, then a single VM could be hosted on multiple physical hosts depending on resource availability.

That is why I gave an example with a single VM that "borrows" resources from two hosts simultaneously.

If we really can share physical resources in such a dynamic way, then here is my further question:

- what happens if one of the physical hosts fails - what will happen to the "failed" part of the resources?

Overcommitment on a single physical host is a bit of a different story - off topic at the moment. :)

Sebastian

callmeseb
Contributor

Looks like it's possible. However, it would be good to hear a practitioner's opinion.

Here's what I found:

http://www.vmware.com/pdf/vmware_drs_wp.pdf

"If a cluster is valid, this indicates there are enough resources to meet all reservations and to
support all running virtual machines. In addition, there is at least one host with enough
resources to run each virtual machine assigned to the cluster. If you use a particularly large
virtual machine (for example, a virtual machine with an 8GB reservation), you must have at least
one host with that much memory. It's not enough if two hosts together fulfill the requirement"
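If I read that correctly, the check is done per host, roughly like this (my own little Python sketch with made-up numbers, not actual DRS code):

# The "valid cluster" rule from the paper: every VM reservation must fit on at
# least ONE host; hosts cannot pool their memory for a single VM.
host_memory_gb = [16, 16]       # two hosts, 16 GB each (made-up numbers)
vm_reservation_gb = 20          # one big VM reserving 20 GB

fits_on_some_host = any(vm_reservation_gb <= h for h in host_memory_gb)
combined_enough = sum(host_memory_gb) >= vm_reservation_gb

print(f"Fits on a single host: {fits_on_some_host}")          # False -> not valid for this VM
print(f"Combined capacity would suffice: {combined_enough}")  # True, but that doesn't count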

Sebastian

MKguy
Virtuoso

No, it is simply not possible to span a single VM over multiple hosts. A VM can only run on a single host at any given time and can therefore never utilize more resources than this host can provide.

DRS is only about balancing multiple VMs over multiple hosts, so you don't have one host being hammered by a couple of resource-intensive VMs while other hosts are idling around all day.
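To illustrate the difference, here is a crude sketch of that kind of balancing in Python (made-up numbers, not real DRS logic) - whole VMs get placed, no VM is ever split:

# Each VM lands on exactly one host; balancing only decides WHICH host.
hosts = {"esx01": 0.0, "esx02": 0.0}        # current CPU demand per host (GHz)
vm_demands_ghz = [1.5, 1.2, 0.8, 0.5]       # per-VM demand (hypothetical)

placement = {}
for i, demand in enumerate(sorted(vm_demands_ghz, reverse=True)):
    target = min(hosts, key=hosts.get)      # pick the least-loaded host
    hosts[target] += demand                 # the whole VM goes there
    placement[f"vm{i}"] = target

print(placement)   # {'vm0': 'esx01', 'vm1': 'esx02', 'vm2': 'esx02', 'vm3': 'esx01'}
print(hosts)       # load is evened out, but each VM is still bounded by one host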

-- http://alpacapowered.wordpress.com
ChrisDearden
Expert

A single VM cannot be running simultaneously on more than one host. The closest you will get is FT, where the VM is essentially log-shipped to a VM on another host; if the source VM fails, the ghost one becomes live.

As with Microsoft clustering, you are clustering for availability, not performance.

If this post has been useful, please consider awarding points. @chrisdearden http://jfvi.co.uk http://vsoup.net
bulletprooffool
Champion

The idea is nice - kind of a cloud/grid setup - but unfortunately VMware is not the tool for what you are trying to achieve.

VMware will only ever run the VM on one host (or in theory two, if you are duplicating a VM using FT), but you'll never really be getting resources from two hosts into the VM at once.

The solution, unfortunately, is to either consider grid computing or buy better hardware.

If you are running multiple clusters, you could, for example, get higher-spec hosts (but fewer of them), host the active nodes of the clusters on different ESX hosts, and then place the passive nodes on an ESX host that hosts a different cluster's active node.

If you are running active/active clusters, then the additional resource comes from the fact that you are running multiple VMs (on different ESX hosts), not from running one VM over multiple hosts.

One day I will virtualise myself . . .
VirtualPat
Contributor

Has this situation changed at all since the release of vSphere 5? Can you configure a VM to utilize the hardware resources (RAM and CPU) of more than one physical host at the same time, as the originator of this thread asked? Or is the most RAM and CPU a VM can use still limited to the amount of that hardware installed on the physical host it's currently running on?

Some of their documentation seems to indicate you can over-allocate memory to a VM, i.e. assign more RAM to a VM than is actually installed on a physical host. I thought they somehow clustered hosts together to pool resources for a single VM to facilitate such a scenario (a VM using the RAM from more than one host), so I'm a bit confused here about how hardware resources from more than one host can be used by VMs.

mcowger
Immortal

No - the situation hasn't changed.  I would be surprised if this changes in the next 5 years, honestly.

The comment you read about 'borrowing' is a reference to how the licensing works, not a technical capability.

--Matt VCDX #52 blog.cowger.us
admin
Immortal

4 years later, I am also curious whether there is a product that introduces an adequate abstraction layer over the CPU and memory resources in a cluster (so that a VM would be limited by the cluster's capacity rather than a host's capacity).

mcowger
Immortal

No product I'm aware of. There have been some attempts, but nothing that pans out.

The issue is mostly around latency: latency within a host is on the order of ~1 usec or less, while latency across hosts is > 10 usec with the fastest interconnects (InfiniBand, etc.).

As the world has evolved, applications have changed to simply distribute the work across hosts, so building VMs larger than a host is becoming less needed.
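Rough numbers to make that gap concrete (the figures from this post, in a trivial Python calculation):

intra_host_us = 1.0     # ~1 usec or less within a host
cross_host_us = 10.0    # > 10 usec across hosts on the fastest interconnects

slowdown = cross_host_us / intra_host_us
print(f"Every cross-host access is at least {slowdown:.0f}x slower")   # >= 10x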

--Matt VCDX #52 blog.cowger.us
Stormarov
Contributor

The short answer is no.

VMware is not - or not yet - capable of supporting deployments where a virtual machine is a conglomerate of multiple hosts' resources. If you could do this, you probably wouldn't want to: it would require every host involved to have a perfectly synced clock cycle and to spend a ton of overhead keeping track of what the others are doing and waiting for everyone to catch up. Also, most software that normal users run doesn't support that kind of parallelization anyway. Even Oracle databases have a limit on the number of cores and the amount of RAM they can effectively use in one machine, and that could easily be served by one physical server.

Now, if you are designing something interesting like the Blue Waters supercomputer at the University of Illinois, then you would need a backplane similar to a blade server chassis and a custom OS to take advantage of it. That means custom architecture and custom software - huge $$$ and not something one person can accomplish.

Better forms of this sort of parallelization for the real world involve applications that tie separate computers together to optimize the performance of smaller parts of a larger process. Charm++ is one example. This allows a large parallelized processing system without a custom OS (which doesn't exist as a consumer product anyway), and it can utilize a set of physical hosts that are not (necessarily) under one hypervisor.

Those are really only useful for working on one giant problem space, like UPS or Amazon shipping optimizations that look at every package on every plane, train, ship, or truck, at every location, and take into account the cost of fuel, available personnel and the weather to optimize a global transportation plan (or other similarly large problem sets).

The best option for most use cases is to build applications composed of separate web application and database tiers, each with one or more VMs, to support a load-balanced workload.
