Red Hat KVM versus vSphere Virtualization

Now that the Hyper-V discussion is over, we have a new one.

I think this time, with Red Hat virtualization, it will not be so easy.

Is anyone else doing a comparison?


•Consolidation ratios of more than 400 virtual machines with enterprise workloads running on a single server.

That is equal to the VMmark benchmark for a 4-socket server with Nehalem-EX.

64 vCPUs

Reliability, Availability, and Serviceability (RAS)

•RAS hardware-based hot add of CPUs and memory is enabled.

•When supported by machine check hardware, the system can recover from some previously fatal hardware errors with minimal disruption.

•Memory pages with errors can be declared as "poisoned", and will be avoided.

Virtualization Limits

Maximum number of virtual CPUs in guest (x86) -- -- 32 32

Maximum memory in paravirtualized guest (x86) -- -- 16GB --

Maximum memory in fully virtualized x86 guest on x86 host -- -- 7.9GB --

Maximum memory in fully virtualized x86 guest on x86_64 host -- -- 16GB 16GB

Maximum number of virtual CPUs in guest (x86_64) on x86_64 host -- -- 32 64

Maximum memory in paravirtualized guest (x86_64) on x86_64 host -- -- 80GB/unlimited --

Maximum memory in fully virtualized guest (x86_64) on x86_64 host -- -- 80GB/unlimited 256GB

Page 20 Future expectations

Red Hat claims RHEL 6 is designed to provide a focus on rock-solid physical computing, along with true virtual and cloud activity support. To that end, RHEL 6 includes kernel improvements for resource management, “RAS” (reliability, availability, serviceability), and more power-saving features. The KVM hypervisor can support guest operating systems with up to 64 virtual CPUs, along with 256GB of virtual RAM and 64-bit guest operating systems.


0 Kudos
14 Replies


While Eric is a VMware employee (and biased), most of the points in his articles regarding RHEV are fairly accurate.

Start at his first article.

I see Citrix being more of a threat in many locations, to be honest, than RHEV or MS.




Thanks for the info.

The RHEV tags are very helpful.

All of the articles are from before August 2010.

Now RHEL 6.0 is out.

I want a true comparison between ESXi 4.1 and RHEV.

The VMware VM uses an Intel 440BX-based motherboard with an NS338 SIO chip.

What does KVM use?

"They decided not to expose the true underlying CPU details and advanced instruction sets. Instead, they take a more conservative approach and masquerade as an old Pentium II Celeron CPU — no pesky SSE4 instructions to deal with here."

The KVM wiki says nothing, or are the i440FX host PCI bridge and the PIIX3 PCI-to-ISA bridge an indication that they are emulating an Intel 440FX motherboard for the VM?

KVM has no EVC. That is absolutely bad. At the moment we are moving VMs from HP DL585 G1/G2/G5 hosts to HP DL585 G6/G7 and Dell 815 hosts.
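For what it's worth, libvirt/KVM can approximate EVC's masking by hand: the domain XML lets you pin a guest to a conservative CPU model and explicitly strip newer feature flags, so the guest only ever sees instructions that every host in the mix supports. A minimal sketch (the model and feature names here are illustrative, not a recommendation):

```xml
<!-- Illustrative libvirt domain XML fragment: force a conservative
     baseline CPU model so the guest never sees host-specific features.
     Model and feature names are examples only. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>qemu64</model>
  <!-- explicitly strip a newer instruction set that older hosts lack -->
  <feature policy='disable' name='sse4.1'/>
</cpu>
```

Unlike EVC, nothing enforces this cluster-wide; the administrator has to keep each guest's XML consistent across all hosts by hand.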

In a homogeneous VMware ESX cluster, all CPU instructions in the underlying host CPU are exposed to guest operating systems. VMware vCenter Server also offers the state-of-the-art Enhanced VMotion Compatibility (EVC), allowing administrators to specify a baseline in a mixed cluster that maximizes use of most modern CPU features during transition to newer generation hardware.
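Conceptually, the baseline EVC computes is just the intersection of the CPU feature flags of every host in the cluster. A toy sketch of that idea (the feature names are illustrative, not VMware's actual EVC modes):

```python
# Toy illustration of an EVC-style baseline: a mixed cluster may only
# expose to guests the features that EVERY host supports, i.e. the
# intersection of all hosts' feature-flag sets. Names are illustrative.
def evc_baseline(host_features):
    """Return the set of features every host in the cluster supports."""
    sets = [set(f) for f in host_features]
    return set.intersection(*sets) if sets else set()

old_host = {"sse2", "sse3"}
new_host = {"sse2", "sse3", "ssse3", "sse4.1", "sse4.2"}

print(sorted(evc_baseline([old_host, new_host])))  # → ['sse2', 'sse3']
```

This is why moving to newer hardware only widens the baseline once the oldest hosts leave the cluster.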


An advantage of KVM: they say they support Intel's MCA.

"Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime"

Integrated Virtualization

Kernel-Based Virtualization

•The KVM hypervisor is fully integrated into the kernel, so all Red Hat Enterprise Linux system improvements benefit the virtualized environment.

•The application environment is consistent for physical and virtual systems.

•Deployment flexibility, provided by the ability to easily move guests between hosts, allows administrators to consolidate resources onto fewer machines during quiet times, or free up hardware for maintenance downtime.

Leverages Kernel Features

•Hardware abstraction enables applications to move from physical to virtualized environments independently of the underlying hardware.

•Increased scalability of CPUs and memory provides more guests per server.

•Block storage benefits from selectable I/O schedulers and support for asynchronous I/O.

•Cgroups and related CPU, memory, and networking resource controls provide the ability to reduce resource contention and improve overall system performance.

•Reliability, Availability, and Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime.

•Multicast bridging includes the first release of IGMP snooping (in IPv4) to build intelligent packet routing and enhance network efficiency.

•CPU affinity assigns guests to specific CPUs.

Guest Acceleration

•CPU masking allows all guests to use the same type of CPU.

•SR-IOV virtualizes physical I/O card resources, primarily networking, allowing multiple guests to share a single physical resource.

•Message signaled interrupts deliver interrupts as specific signals, increasing the number of interrupts.

•Transparent hugepages provides significant performance improvements for guest memory allocation.

•Kernel Same Page (KSM) provides reuse of identical pages across virtual machines (known as deduplication in the storage context).

•The tickless kernel defines a stable time model for guests, avoiding clock drift.

•Advanced paravirtualization interfaces include non-traditional devices such as the clock (enabled by the tickless kernel), interrupt controller, spinlock subsystem, and vmchannel.


•In virtualized environments, sVirt (powered by SELinux) protects guests from one another

Microsoft Windows Support

•Windows WHQL-certified drivers enable virtualized Windows systems, and allow Microsoft customers to receive technical support for virtualized instances of Windows Server


What is meant by 128/4096 cores?


Page 17:

3-year cost: $29,940 versus $102,482.

If this is true, then VMware has to drop their prices.

Page 17 is no longer true. Now it is only 64 cores and 512 GB RAM.

With ESX(i) 4.1 it is 128 cores and 1 TB RAM.

Page 17's max guest configuration is true. RHEV beats vSphere.

But is it needed?

Most x86 applications do not scale above 4-8 CPUs.

What is the better design for a big application: 16 small 1-vCPU VMs or one big 16-vCPU VM?

Page 20: Today Max Configuration

RHEV 2.2: 512 cores per cluster

vSphere ESXi 4.1: 128 cores per cluster

Page 20: Future Max Configuration

RHEV 2.3 (in 18 months): 4096 hosts

vSphere next generation: ???


Page 46

When we talk about RHEV, we often talk about “feature velocity.” Here I want to show you what that means. After 7 years of developing their proprietary virtualization solution, VMware has reached a certain point in features and scalability.

RHEV has been on the market for six months, and has already matched or surpassed VMware in many key areas.


What's exciting is that with the release of RHEL 6, we take a quantum leap ahead in terms of scalability limits. All of this within a year of entering the virtualization market.


Is it really true that RHEV is a quantum leap ahead?

If you look at VMware EVC, VMware is well ahead.

They know that most x86 applications do not scale above 8 cores.

So they look at improvements. EVC is truly a quantum leap ahead.

Will we see a quantum leap ahead from VMware in 9 months, at VMworld 2011 in September, rather than in 18 months?


Page 33: There is no advantage over VMware's transparent page sharing.

KSM, or Kernel Same-Page Merging, is a technology that allows multiple applications in Linux to use more memory than is physically installed by merging or deduplicating redundant memory pages. The RHEV hypervisor with KVM allows us to use the same technology to overcommit memory in virtual machines.
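To illustrate what the quoted text means (a conceptual toy only, not KVM's actual implementation, which scans and merges 4 KiB pages inside the kernel): same-page merging amounts to keeping a single physical copy for every distinct page content found across the guests:

```python
# Toy model of KSM-style same-page merging: pages with identical
# content across VMs end up backed by one shared physical copy.
# Real KSM works on 4 KiB pages in the kernel; this is a sketch.
def merged_page_count(vm_pages):
    """vm_pages: list of per-VM lists of page contents.
    Returns (total_pages, unique_pages) after deduplication."""
    all_pages = [p for pages in vm_pages for p in pages]
    return len(all_pages), len(set(all_pages))

# Two VMs cloned from the same template share most zero/code pages.
vm1 = ["zero"] * 4 + ["kernel_text", "appA"]
vm2 = ["zero"] * 4 + ["kernel_text", "appB"]

total, unique = merged_page_count([vm1, vm2])
print(f"{total} pages backed by {unique} physical pages")
# → 12 pages backed by 4 physical pages
```

The gap between the two numbers is exactly the memory an overcommitted host gets back, which is why the benefit is largest for many near-identical guests.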


The VMware ESXi hypervisor is only 32 MB. How small is the RHEV hypervisor?

The RHEV Hypervisor, which comes with the RHEV products, is a slimmed-down version of Red Hat Enterprise Linux with just enough bits to run and manage virtual machines. It's extremely small, and PXE and SAN bootable. For Windows users, it's set-and-forget, so you don't need any Linux skills. It also has a small security footprint and a persistent image.


Why are Citrix and Hyper-V not competitors for VMware?

It is only Intel and IBM.

Intel bought the Neocleus hypervisor.

So they can build a hypervisor into a chip/firmware.

I know the Neocleus hypervisor is a client hypervisor, but for Intel it is a small step to go from client to server.

RTS Real-Time Hypervisor

Wind River Hypervisor

Processors: Supports single and multicore processors based on Intel and PowerPC architectures (other processors also available)

IBM has three system platforms: mainframe, Power, and x86.

They bought PSI (Platform Solutions) and Transitive (now implemented in PowerVM).

They have crossover virtualization. In my opinion, they are the leader.

So is Red Hat a competing hypervisor? No.

Red Hat only supports x86 virtualization at the moment.

Red Hat supports mainframe and Power as an OS on physical systems, but not virtually?

Are they working on crossover virtualization?


Does VMware's VMotion provide VMotion from AMD to Intel or vice versa?

AMD & Red Hat Demo Live Migration Across Vendor Platforms


What is the point of these links and examples? Do you want to show that everyone should now stop using VMware and go with the Red Hat product (which is not free)? If the Red Hat product were so great, then VMware would not have such a big market in virtualisation (imo).


No, the opposite. The articles and URLs should help others who already have VMware in place and have the yearly talk in their company about a new hypervisor.

This makes no sense.

I will stay with the VMware hypervisor until there is a truly, ultimately better solution.


Hello people,

I'm seeing all the fights out here, and I'm thinking you're confusing a few chaps over here. I'm a new VCP, but I've been working with VMware for a while now. To answer the question asked by meistermn, you have to understand the benefits and drawbacks of both virtualization technologies.

So the fight is not really about Red Hat vs. VMware. It's kernel-based virtualization vs. bare-metal virtualization, so please keep that in mind. I've just installed the Red Hat virtualization and am currently testing it, so I will give better feedback later.

Red Hat has been working on getting virtualization technology firmly gripped into their kernel. You could see this with the move to purchase Qumranet Inc. in 2008, almost the same move Microsoft made with the purchase of Kidaro, only that Red Hat didn't pirate the defenseless firm the way Microsoft did Kidaro. They seem to be headed in the right direction with their virtualization technology. It looks stronger than the previous version.

From the links I'm getting, Red Hat has the advantage of doing some pretty unique stuff:

  • Migrate live machines from AMD hosts to Intel and vice versa. This is pretty neat.
  • RAS support features for hardware
  • Improved security model of running VMs in a ring model (Ring 0 - Ring 3)
  • Multi-virtual-environment capabilities (paravirtualization, hardware-assisted virtualization, and KVM)
  • Maintenance mode is much simpler than VMware's: when you click maintenance mode, the machines move off the server, compared to VMware's case of migrating/powering off the VMs before entering maintenance mode. Makes a lot of sense.
  • Of course, hardware compatibility (e.g. Asterisk Digium cards, video cards, etc.) isn't a headache compared to the ESX stack.
  • VDI with the SPICE protocol
  • Someone mentioned price?

And many other minor features that make KVM look cool, like the search tab.


  • All the features that are in RHEV were in VMware first, and are much more enhanced there (HA, DRS, HOST PROFILING, VMOTION, THIN PROVISIONING, RESOURCE POOLING, EASY CREATION OF VMs, TEMPLATES...)
  • Updates? What is the turnaround time in case Microsoft makes one of those big messes of publishing random updates? They haven't talked about an Update Manager.
  • VDI: VMware View is so much more advanced than RHEV Desktop (ThinApp, for example)
  • Wide support of operating systems
  • Support in general for the infrastructure
  • Wide range of virtualization products to support certain functionalities (SITE RECOVERY MANAGER, HEARTBEAT, vCLOUD DIRECTOR, vFABRIC). This could be a disadvantage for some people, arguing that VMware is going the Microsoft route of developing and breaking product functionality so as to make a profit, but I see it as an advantage, so that organizations get what they paid for.
  • Integration with third-party software and storage vendors through an API, especially for backup, such as Veeam, community scripting with ghettoVCB, Symantec, NetApp, EMC, Tivoli, Synology, etc.
  • FREE ESXi compared to the community versions of Red Hat or CentOS... If you want proof of simplicity, you've got it there.
  • Management console still far superior

When purchasing a virtualization solution, answer the following questions:

  1. What are your organization's DRP, BCP, and BP? What SLAs do you require for your staff?
  2. What and how many applications do you wish to host on the virtualization infrastructure?
  3. What type of support do you require for the virtualization infrastructure?
  4. What added features will you be looking at with virtualization, e.g. VDI?
  5. Integration with purchased backup agents such as Veritas?
  6. What operating system(s) will the applications be sitting on?

Thereafter, think of the budget. RHEV seems nice, and I will look forward to playing around with it over the next month or so. However, RHEV has its own market. Red Hat Enterprise organizations and partners will benefit from this development, just as Hyper-V benefited pro-Microsoft organizations and partners. But note it's still a bit new and will need some time to develop in the virtualization world and to approach the strength of VMware and Hyper-V. The Cisco partnership will add value to their strategic growth as a virtualization technology.

VMware will remain at the top of its game so long as they don't hike their prices. They may have to design a console that operates like Microsoft Operations Manager and links all virtualization technologies, at least to convince organizations to take on both virtualization technologies.

Message was edited by: Anthony Njoroge (VCP, MCSE, OCP, Patton certified voice professional)


The point of these discussions is enlightenment, amigos.

I have implemented both vSphere/ESX and KVM virtualisation, and here are my thoughts.


vSphere/ESX:

- very mature product

- offers free hypervisors: ESXi, VMware Server (EOL soon?)

- bare metal or hosted hypervisors available

- very nice GUI (vCenter)

- vCenter/HA/DRS features NOT free



KVM:

- mature

- not really bare metal; it is a hosted hypervisor

- Linux-Centric Skills Needed

- vCenter-like manager available (aka RHEV-M), which is NOT free either, but less costly

- good basic management GUI (Virtual Machine Manager) available and free

- Command Line tools - available and very rich

- HA Features are FREE

- CPU-agnostic live virtual machine migration (AMD to Intel)

- no VMware-Tools-like agent on the guests

- datastore/storage support is far richer than VMware's

My advice: if you've got deep pockets, go with vSphere. If you've got a well-entrenched VMware fiefdom, there's no point battling it out. If you do not have a current virtualisation solution and you have good Linux knowledge, go with KVM on *any* Linux of your choice (Red Hat, of course, is preferred).

I have built very highly available KVM-based HA clusters at zero cost for virtualisation and HA software. I used CentOS 5.5, the RHCS suite, and iSCSI, FC, and NFS storage. In RHEL environments, I've also built HA KVM clusters WITHOUT RHEV, as the Advanced Platform Red Hat subscription already entitles the end user to the Red Hat Cluster HA bits and the KVM/libvirt bits -- so I am not even talking about RHEV here, which you do not really need if you will just be clustering a few physical servers and dozens of virtual machines. The built-in libvirt and virt-manager GUI is more than sufficient, and coupled with scripting, I have a very capable HA virtualisation cluster at a significantly more earthly cost.

There are of course other KVM "managers" out there that can do the job -- e.g. Proxmox -- but I digress.

0 Kudos