Now that the Hyper-V discussion is over, we have a new one.
I think this time, with Red Hat virtualization, it will not be so easy.
Is anyone else doing a comparison?
http://www.redhat.com/virtualization/rhev/server/
Features
http://www.redhat.com/virtualization/rhev/server/features-benefits/
•Consolidation ratios of more than 400 virtual machines with enterprise workloads running on a single server.
That is equal to the VMmark benchmark result for a 4-socket server with Nehalem-EX.
64 vCPUs
http://www.redhat.com/rhel/server/details/
Reliability, Availability, and Serviceability (RAS)
•RAS hardware-based hot add of CPUs and memory is enabled.
•When supported by machine check hardware, the system can recover from some previously fatal hardware errors with minimal disruption.
•Memory pages with errors can be declared as "poisoned", and will be avoided.
http://www.redhat.com/rhel/compare/
Virtualization Limits                                              | RHEL 3 | RHEL 4 | RHEL 5         | RHEL 6
Maximum number of virtual CPUs in guest (x86)                      | --     | --     | 32             | 32
Maximum memory in paravirtualized guest (x86)                      | --     | --     | 16 GB          | --
Maximum memory in fully virtualized x86 guest on x86 host          | --     | --     | 7.9 GB         | --
Maximum memory in fully virtualized x86 guest on x86_64 host       | --     | --     | 16 GB          | 16 GB
Maximum number of virtual CPUs in guest (x86_64) on x86_64 host    | --     | --     | 32             | 64
Maximum memory in paravirtualized guest (x86_64) on x86_64 host    | --     | --     | 80GB/unlimited | --
Maximum memory in fully virtualized guest (x86_64) on x86_64 host  | --     | --     | 80GB/unlimited | 256 GB
Page 20: Future expectations
http://www.rompalasbarreras.es/pdf-sp/RHEV-event-resentation.pdf
http://www.thevarguy.com/2010/11/11/red-hat-enterprise-linux-6-its-all-about-virtualization/
Red Hat claims RHEL 6 is designed to provide a focus on rock-solid physical computing, along with true virtual and cloud activity support. To that end, RHEL 6 includes kernel improvements for resource management, “RAS” (reliability, availability, serviceability), and more power-saving features. The KVM hypervisor can support guest operating systems with up to 64 virtual CPUs, along with 256GB of virtual RAM and 64-bit guest operating system.
Pricing
http://www.redhat.com/f/pdf/rhev/DOC108R7_Competitive_Pricing_Whitepaper_20101115.pdf
Thanks for the info.
The RHEV tags are very helpful.
All of the articles are from before August 2010.
Now RHEL 6.0 is out.
I want a true comparison between ESXi 4.1 and RHEV.
The VMware VM uses an Intel 440BX-based motherboard with an NS338 SIO chip.
What does KVM use?
http://www.vcritical.com/2010/05/red-hat-enterprise-virtualization-pentium-ii-inside/
"They decided not to expose the true underlying CPU details and advanced instruction sets. Instead, they take a more conservative approach and masquerade as an old Pentium II Celeron CPU — no pesky SSE4 instructions to deal with here."
The KVM Wikipedia article says nothing about this. Or are the i440FX host PCI bridge and the PIIX3 PCI-to-ISA bridge an indication that KVM uses an Intel 440FX motherboard for the VM?
http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
KVM has no EVC. That is absolutely bad. At the moment we are moving VMs from HP DL585 G1/G2/G5 hosts to HP DL585 G6/G7 and Dell 815 hosts.
In a homogeneous VMware ESX cluster, all CPU instructions in the underlying host CPU are exposed to guest operating systems. VMware vCenter Server also offers the state-of-the-art Enhanced VMotion Compatibility (EVC), allowing administrators to specify a baseline in a mixed cluster that maximizes use of most modern CPU features during transition to newer generation hardware.
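To make the EVC idea concrete, here is a minimal sketch (my own illustration, not VMware's or Red Hat's actual code) of what a CPU-compatibility baseline amounts to: the intersection of the feature flags of every host in the cluster, which is all a guest is allowed to see, so it can migrate to any host without losing an instruction set. The feature lists below are hypothetical examples.

```python
def evc_baseline(host_features):
    """Return the set of CPU feature flags that every host in the
    cluster supports -- the only flags exposed to guests."""
    baseline = set(host_features[0])
    for features in host_features[1:]:
        baseline &= set(features)
    return baseline

# Hypothetical mixed cluster during a hardware transition:
old_host = ["sse", "sse2", "sse3"]
new_host = ["sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2"]

# Guests see only the common subset, so SSE4 stays hidden until the
# old hosts leave the cluster and the baseline can be raised.
print(sorted(evc_baseline([old_host, new_host])))
```

This is also essentially what KVM's conservative "Pentium II" masquerade achieves, just with a much lower fixed baseline instead of a per-cluster computed one.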
An advantage of KVM: they say they support Intel's MCA (Machine Check Architecture).
"Serviceability (RAS) features (e.g., hot add of processors and memory, *machine check handling, and recovery from previously fatal errors) minimize downtime"
http://www.redhat.com/rhel/server/details/
Integrated Virtualization
Kernel-Based Virtualization
•The KVM hypervisor is fully integrated into the kernel, so all Red Hat Enterprise Linux system improvements benefit the virtualized environment.
•The application environment is consistent for physical and virtual systems.
•Deployment flexibility, provided by the ability to easily move guests between hosts, allows administrators to consolidate resources onto fewer machines during quiet times, or free up hardware for maintenance downtime.
Leverages Kernel Features
•Hardware abstraction enables applications to move from physical to virtualized environments independently of the underlying hardware.
•Increased scalability of CPUs and memory provides more guests per server.
•Block storage benefits from selectable I/O schedulers and support for asynchronous I/O.
•Cgroups and related CPU, memory, and networking resource controls provide the ability to reduce resource contention and improve overall system performance.
•Reliability, Availability, and Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime.
•Multicast bridging includes the first release of IGMP snooping (in IPv4) to build intelligent packet routing and enhance network efficiency.
•CPU affinity assigns guests to specific CPUs.
Guest Acceleration
•CPU masking allows all guests to use the same type of CPU.
•SR-IOV virtualizes physical I/O card resources, primarily networking, allowing multiple guests to share a single physical resource.
•Message signaled interrupts deliver interrupts as specific signals, increasing the number of interrupts.
•Transparent hugepages provides significant performance improvements for guest memory allocation.
•Kernel Same Page (KSM) provides reuse of identical pages across virtual machines (known as deduplication in the storage context).
•The tickless kernel defines a stable time model for guests, avoiding clock drift.
•Advanced paravirtualization interfaces include non-traditional devices such as the clock (enabled by the tickless kernel), interrupt controller, spinlock subsystem, and vmchannel.
Security
•In virtualized environments, sVirt (powered by SELinux) protects guests from one another.
Microsoft Windows Support
•Windows WHQL-certified drivers enable virtualized Windows systems, and allow Microsoft customers to receive technical support for virtualized instances of Windows Server.
What is meant by 128/4096 cores?
Page 17 of http://www.rompalasbarreras.es/pdf-sp/RHEV-event-resentation.pdf
3-year cost: $29,940 versus $102,482.
If this is true, then VMware has to drop their prices.
Page 17 is no longer true: only 64 cores and 512 GB RAM.
With ESX(i) 4.1 it is 128 cores and 1 TB RAM.
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf
Page 17's maximum guest configuration is true: RHEV beats vSphere.
But is it needed?
Most x86 applications do not scale above 4-8 CPUs.
What is the better design for a big application: sixteen small 1-vCPU VMs, or one big 16-vCPU VM?
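The scaling question can be made concrete with Amdahl's law: if a fraction p of a workload is parallelizable, the best possible speedup on n vCPUs is 1/((1-p) + p/n). A quick sketch (the 90% parallel fraction is an assumed number for illustration, not a measurement of any real application):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: upper bound on speedup with n CPUs when a
    fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Assumed: 90% of the workload parallelizes (illustrative only).
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} vCPUs -> {amdahl_speedup(0.9, n):.2f}x")
```

With these numbers, 16 vCPUs yield only about a 6.4x speedup, and most of the gain is already there at 4-8 vCPUs, which is why many 1-vCPU VMs often beat one big 16-vCPU VM for workloads that shard cleanly.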
Page 20: today's maximum configuration
RHEV 2.2: 512 cores per cluster
vSphere ESXi 4.1: 128 cores per cluster
Page 20: future maximum configuration
RHEV 2.3 (in 18 months): 4096 hosts
vSphere next generation: ???
Page 46:
"When we talk about RHEV, we often talk about 'feature velocity.' Here I want to show you what that means. After 7 years of developing their proprietary virtualization solution, VMware has reached a certain point in features and scalability. RHEV has been on the market for six months, and has already matched or surpassed VMware in many key metrics. What's exciting is that with the release of RHEL 6, we take a quantum leap ahead in terms of scalability limits. All of this within a year of entering the virtualization market."
Is it really true that RHEV is a quantum leap ahead?
If you look at VMware EVC, VMware is well ahead. They know that most x86 applications do not scale above 8 cores, so they look at other improvements. EVC is truly a quantum leap ahead.
Will we see a quantum leap ahead from VMware at VMworld in September 2011, in 9 months rather than 18?
Page 33: There is no advantage over VMware's transparent page sharing.
"KSM, or Kernel Same-page Merging, is a technology that allows multiple applications in Linux to use more memory than is physically installed by merging or deduplicating redundant memory pages. The RHEV hypervisor with KVM allows us to use the same technology to overcommit memory in virtual machines."
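Conceptually, KSM and VMware's transparent page sharing do the same thing: scan memory pages and keep a single physical copy of pages with identical content. A toy sketch of the idea follows (my own illustration; the real implementations work on 4 KiB pages inside the kernel, use copy-on-write, and re-verify page contents rather than trusting a hash alone):

```python
import hashlib

PAGE = 4096  # typical x86 page size in bytes

def merge_pages(pages):
    """Deduplicate page contents: return (store, refs) where `store`
    maps a content hash to the single kept copy, and `refs` gives,
    for each input page, the key of the shared copy it now uses."""
    store = {}
    refs = []
    for page in pages:
        key = hashlib.sha256(page).hexdigest()
        store.setdefault(key, page)   # keep one copy per unique content
        refs.append(key)
    return store, refs

# Three guest pages, two of them identical (e.g. zero-filled pages):
pages = [b"\x00" * PAGE, b"kernel text" + b"\x00" * (PAGE - 11), b"\x00" * PAGE]
store, refs = merge_pages(pages)
print(len(pages), "pages ->", len(store), "physical copies")
```

The memory-overcommit benefit falls out directly: the more identical pages the guests hold (zero pages, shared OS images), the fewer physical copies the host needs.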
The VMware ESXi image is as small as 32 MB. How small is the RHEV hypervisor?
"The RHEV Hypervisor, which comes with the RHEV products, is a slimmed-down version of Red Hat Enterprise Linux with just enough bits to run and manage virtual machines. It's extremely small, and PXE and SAN bootable. For Windows users, it's set-and-forget, so you don't need any Linux skills. It also has a small security footprint and a persistent image."
Why are Citrix and Hyper-V not competitors for VMware? The real competition is only Intel and IBM.
Intel bought the Neocleus hypervisor, so they can build a hypervisor into a chip/firmware. I know the Neocleus hypervisor is a client hypervisor, but for Intel it is a small step to go from client to server.
http://simonbramfitt.com/2010/09/intel-acquires-neocleus-cloak-and-dagger-or-bait-and-switch.html
http://www.virtualizationdir.com/tag/neosphere-platform/
RTS Real-Time Hypervisor
http://www.intelcommsalliance.com/kshowcase/view/view_item/65e7a496750572b1ee339e934892855f5f16f76c
Wind River Hypervisor
Processors: Supports single and multicore processors based on Intel and PowerPC architectures (other processors also available)
IBM has three system platforms: mainframe, Power, and x86. They bought PSI (Platform Solutions) and Transitive (now implemented in PowerVM). They have crossover virtualization; in my opinion they are the leader.
So is Red Hat a competing hypervisor? No. Red Hat only supports x86 virtualization at the moment.
http://www.redhat.com/rhel/virtualization/compare/
Red Hat supports mainframe and Power as an OS on physical systems, but not virtual?
http://www.redhat.com/rhel/compare/
Are they working on crossover virtualization?
Does VMware's VMotion provide VMotion from AMD to Intel, or vice versa?
AMD & Red Hat Demo Live Migration Across Vendor Platforms
What is the point of these links and examples? Do you want to show that everyone should now stop using VMware and go with the Red Hat product (which is not free)? If the Red Hat product were so great, then VMware would not have such a big share of the virtualisation market (imo).
No, the opposite. The articles and URLs should help others who already run VMware and have the yearly discussion in their company about switching to a new hypervisor.
This makes no sense.
I will stay with the VMware hypervisor until there is a truly, decisively better solution.
Hello people,
I'm seeing all the fights out here, and I think you're confusing a few chaps. I'm a new VCP, but I've been working with VMware for a while now. To answer the question asked by meistermn, you have to understand the benefits and drawbacks of both virtualization technologies.
So the fight is not really about Red Hat vs. VMware; it's kernel-based virtualization vs. bare-metal virtualization. Please keep that in mind. I've just installed the Red Hat virtualization and am currently testing it, so I will give better feedback later.
Red Hat has been working on getting virtualization technology firmly gripped into its kernel. You could see this with the move to purchase Qumranet Inc. in 2008, almost the same move Microsoft made with the purchase of Kidaro, only Red Hat didn't pirate the defenseless firm the way Microsoft did Kidaro. They seem to be heading in the right direction with their virtualization technology. It looks stronger than the previous version.
From the links I'm getting, Red Hat has the advantage of doing some pretty unique stuff.
And many other minor features make KVM look cool, like the search tab (see http://www.youtube.com/watch?v=n6DfoOrh-cs).
HERE'S WHAT'S COOL ABOUT VMWARE:
When purchasing a virtualization solution, answer the following questions:
Thereafter, think of the budget. RHEV seems nice, and I will be looking forward to playing around with it over the next month or so. However, RHEV has its own market: Red Hat Enterprise organizations and partners will benefit from this development, just as Hyper-V benefited pro-Microsoft organizations and partners. But note that it is still quite new and will need some time to mature in the virtualization world and to approach the strength of VMware and Hyper-V. The Cisco partnership will be an added value to their strategic growth as a virtualization technology.
VMware will remain at the top of its game as long as they don't hike their prices. They may have to design a console that operates like Microsoft Operations Manager, linking all virtualization technologies, at least to convince organizations to adopt both.
Message was edited by: anthony njoroge (vcp, mcse, ocp, patton certified voice professional)
The point of these discussions, etc., is enlightenment, amigos.
I have implemented both vSphere/ESX and KVM virtualisation, and here are my thoughts.
VMware:
- very mature product
- offers free hypervisors: ESXi, VMware Server (EOL soon?)
- bare-metal or hosted hypervisors available
- very nice GUI (vCenter)
- vCenter/HA/DRS features NOT free
- COSTLY
KVM:
- mature
- not really bare metal; it is a hosted hypervisor
- Linux-centric skills needed
- vCenter-like manager available (aka RHEV), which is NOT free either, but less costly
- good basic management GUI (Virtual Machine Manager) available and free
- command-line tools available and very rich
- HA features are FREE
- CPU-agnostic live virtual machine migration (AMD to Intel)
- no VMware Tools-like agent on the guests
- datastores/storage are far richer than VMware's
My advice: if you have deep pockets, go with vSphere. If you have a well-entrenched VMware fiefdom, there's no point battling it out. If you do not have a current virtualisation solution and you have good Linux knowledge, go with KVM on *any* Linux of your choice (Red Hat, of course, is preferred).
I have built highly available KVM-based HA clusters at zero cost for the virtualisation and HA software, using CentOS 5.5, the RHCS suite, and iSCSI, FC, and NFS storage. On RHEL environments I have also built HA KVM clusters WITHOUT RHEV, as the Advanced Platform Red Hat subscription already entitles the end user to the Red Hat Cluster HA bits and the KVM/libvirt bits -- so I am not even talking about RHEV here, which you do not really need if you will just be clustering a few physical servers and dozens of virtual machines. The built-in libvirt and virt-manager GUI is more than sufficient, and coupled with scripting, I have a very capable HA virtualisation cluster at a significantly earthly cost.
There are, of course, other KVM "managers" out there that can do the job, e.g. Proxmox, but I digress.