VMware Cloud Community
gallagauge
Contributor

ESXi 5 free hypervisor?

I saw that there is a new version of ESXi (version 5).

Will there be a version 5 of the free VMware vSphere Hypervisor or is this product going to stay at version 4.1?

0 Kudos
147 Replies
spiralscratch
Contributor

ITGeezer wrote:

I think they've missed a trick here.

I agree the free hypervisor with the 8GB limit is next to useless ... but I for one would be willing to pay for a 16GB or even 32GB limit ... but I wouldn't pay for Essentials at £430 + Tax in the UK because (a) that's a lot of money and (b) I don't need everything it includes.

I'd be quite happy however to pay something similar to what Workstation costs (£150 + Tax) for single server dual socket 16GB or 32GB with none of the extras - i.e. just the Hypervisor but with a higher memory limit.

I'd rather they ditch this insane new licensing scheme and go back to counting CPUs, but I might be willing to consider something like that. However, I think that those limits are still too low for the prices stated. Some OSes (e.g., Windows Server, FreeNAS) can simply eat RAM, and virtualizing them makes no sense if the ratio of virtual machines (or vCPUs) to cores is less than 1:1. Even for a basically unsupported "testing" product.

I feel that the bare minimum vRAM limit for the free product to be useful would probably be 16GB (though I'd rather see 32GB). This would be much closer to v4.1's limits IIRC. It also lines up fairly well with the capabilities of current low-end server hardware (e.g., a Xeon E3 system). As proposed, upgrades above that could be at some cost. Maybe ~$100 per 32GB? And if I'm paying for it, I'd better be able to get real support, not just these forums.

And if they're sticking to the vRAM model, adjust the limits of the other products upwards as well. The current levels are way too low for the price.

0 Kudos
rickardnobel
Champion

ITGeezer wrote:

I'd be quite happy however to pay something similar to what Workstation costs (£150 + Tax) for single server dual socket 16GB or 32GB with none of the extras - i.e. just the Hypervisor but with a higher memory limit.

I am running 16 GB of RAM on my ordinary PC with Windows 7 and VMware Workstation, which makes it more powerful than the vSphere "Hypervisor" 5.0. If I wanted a dedicated host at home only running VMs, I would look at the free Microsoft Hyper-V, which seems to fit the purpose.

I know very little about marketing and similar, but from my perspective, even for people who will not (for whatever reason) buy a fully licensed product, it would still be valuable to a company (VMware or Microsoft) that they choose its free version instead of the competitor's.

My VMware blog: www.rickardnobel.se
0 Kudos
admin
Immortal

The vSphere Hypervisor is entitled to 8GB of vRAM per processor and has a limit of 32GB of physical RAM per server. The product is meant for small first-time deployments.

0 Kudos
spiralscratch
Contributor

Alberto wrote:

The vSphere Hypervisor is entitled to 8GB of vRAM per processor and has a limit of 32GB of physical RAM per server. The product is meant for small first-time deployments.

And what I and others are stating is that the present limits are wholly inadequate. It is simply not possible to do a proper "small first-time deployment" with only 8GB of vRAM. It can't even be used for a proper proof-of-concept deployment.

Obviously, no one expects to be able to build up a server room using the free product.

0 Kudos
golddiggie
Champion

So if I decide to increase my current test lab host above 16GB of RAM (where it's at now), since it's only a dual-socket system, I won't be able to actually USE that extra memory I've purchased?? Very poor form... Especially since under version 4.x I can increase the host RAM to whatever I want, or whatever the hardware will accept. So VMware has decided to hamstring test labs, including VCPs' labs, where we could test out deploying new servers on the current (v4.x) release, something the future releases take away... NOT good...

Unless VMware intends to offer a less limited version/license to VCPs for use with v5 and beyond, they're not earning any points.

0 Kudos
rickardnobel
Champion

Alberto wrote:

The product is meant for small first-time deployments.

Can you please elaborate on this? What kind of "deployments" does VMware have in mind?

My VMware blog: www.rickardnobel.se
0 Kudos
GVD
Contributor

Alberto wrote:

The vSphere Hypervisor is entitled to 8GB of vRAM per processor and has a limit of 32GB of physical RAM per server. The product is meant for small first-time deployments.

So 4 CPUs + 32 GB vRAM can be done? How about running 1 CPU + 32 GB vRAM, using the same principles that govern Standard/Advanced/Enterprise/Enterprise Plus licensing? Is it just bad wording, and can you simply tack on more Hypervisor licenses without having the CPU sockets populated to match?

Is it more like Essentials/EssentialsPlus, where you have a 144GB vRAM pool to allocate even if you don't have the pCPU to match? (in this case a vRAM pool of 32 GB to allocate on the single Hypervisor host, regardless of pCPU population)

Can we get an official reply on this?

0 Kudos
JurgenD
Contributor

Where can I download it? I am tired of testing 4.1, which lacks support for custom hardware.


I hope this time, without any excuses, there is broad support for all hardware. It is nonsense to say the hypervisor needs specific hardware. Besides, it would be much better to switch the kernel base to one of the BSDs. Not only would you get much better network throughput, you could also take advantage of the ZFS file system, which is currently the best option in town for storing files.

0 Kudos
wdroush1
Hot Shot

JurgenD wrote:

Where can I download it? I am tired of testing 4.1, which lacks support for custom hardware.

I hope this time, without any excuses, there is broad support for all hardware. It is nonsense to say the hypervisor needs specific hardware. Besides, it would be much better to switch the kernel base to one of the BSDs. Not only would you get much better network throughput, you could also take advantage of the ZFS file system, which is currently the best option in town for storing files.

5.x AFAIK isn't going to support more hardware. ESXi is probably the most restrictive hypervisor when it comes to hardware. XenServer is a lot better, and Hyper-V is probably the best out of the box (it's bloody Windows). XenServer may actually be better, but I'm not familiar with the driver sets it ships with; being Linux-based, you should be able to plug almost anything into it.

As for rewriting vSphere's kernel: not going to happen; we have XenServer for that. However, I wish they did allow open development of drivers. Also, I'd never trust local file systems in a production environment, and in a home environment ZFS is overkill (and can be achieved through a VSA anyway).

That said, I've had really good luck with ESXi 4.x running on all kinds of garbage, but your mileage may vary.

0 Kudos
JurgenD
Contributor

ZFS RAID is not overkill, especially when you use it to protect your data. When one virtual disk is corrupted, the ZFS RAID can restore that virtual disk. That's why I use it. I don't trust a single virtual disk; if it gets corrupted I could lose a lot of data, and I don't want to be taking snapshots or backups of the virtual machine all the time. So to me, ZFS RAID is necessary in the guest, and it would be perfect if it were included in ESXi on the host side, because that way I could protect the host as well. ZFS is great in this regard: you can take snapshots of your filesystem, and back it up or clone it in a quick, efficient way. So it would be a great filesystem to run on the host side too. VMware's own snapshots could even be replaced with ZFS snapshots, which would make the footprint of ESXi even smaller. ZFS also provides NFS functionality, which could be used for shared folders without a large footprint. That is another major point: even Microsoft is paying a university to produce an NFS v3 driver for their OS.
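
To make that concrete, here is roughly the kind of host-side workflow I mean, sketched in Python around the standard zfs command-line tools (just an illustration; the 'tank/vms' dataset name is made up):

    import subprocess

    def zfs(*args: str) -> None:
        """Thin wrapper around the standard 'zfs' CLI (run on the ZFS host)."""
        subprocess.run(["zfs", *args], check=True)

    # Snapshot the dataset holding the virtual disks, then share it over NFS.
    # 'tank/vms' is an example dataset name, not anything ESXi-specific.
    zfs("snapshot", "tank/vms@nightly")    # cheap, instant point-in-time copy
    zfs("set", "sharenfs=on", "tank/vms")  # ZFS's built-in NFS export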

I have known the VMware products from the day they were born. Actually, I am amazed they developed from a small company into a nice, big company. Maybe I should talk with them again to push them to a new frontier... I just need to find the person I talked to back then. But those guys are open to criticism, comments and improvements. I believe in VMware's products. If you can hold the market for several years, it means something. Of course some things need to be improved, but I am sure they will manage. And don't forget, with their products they made a MAJOR push towards more efficient computers and servers, and they helped create a better, nicer and greener environment. Well, in this regard, I do support them.

The product was one of the first hypervisors, and they evolved it in major ways in different directions, as I suggested a lot in those days too. One of those suggestions formed the basis of VMware Server, etc. I suggested that back when the site looked pretty Russian, with a lot of servers on the front page. I had some good debates with the developers about transforming VMware Workstation into a bare-metal product. It's a pity they chose a Linux OS; it would have been much better to start from the BSD kernel, which is still much stronger than the Linux kernels in many respects, and whose network stack is very good as well. That way there could have been a native port for the BSD community too. Now we have to stick with a Linux version, which does not run as smoothly as a native version on FreeBSD would. To me, one of the best server OSes is FreeBSD. That is why there are such great products as pfSense, FreeNAS, and maybe soon a native hypervisor named BHyVe.

I tried XenServer, but I was disappointed by its limitations. What's more, I see the VMware product can perfectly manage FreeBSD ZFS with 64-bit LBA. XenServer, like VirtualBox, cannot manage that at all for some reason. VMware can! My 64-bit FreeBSD runs on a GPT-formatted ZFS RAID. Other hypervisors give me "int13_harddisk: function 42. Can't use 64bits lba". And there are too many limitations on the free version of XenServer.

Besides, XenServer is a Citrix product, and they stand too close to Microsoft. That is why Microsoft's remote desktop products and hypervisors were, at the start, actually degraded versions of Citrix products.

No, I use VMware for development. And I want to run ESXi because that way I can very easily swap virtual machines between ESXi and VMware Workstation, so I can run the virtualized development server on my laptop when I am in a hotel and move it quickly back to the server when I return home from projects.

But I have a lot of questions related to hardware compatibility. ESXi should run on every server and every computer, without any excuses. They can provide warnings that some drivers slow down the hypervisor, but a low-cost server or workstation should run ESXi. So I am looking forward to ESXi 5, where there is the possibility to build your own custom hypervisor image, so you can load all the drivers needed to run on a low-end computer.

0 Kudos
DSTAVERT
Immortal

VMware 4.1 should be available for many years. Support should last for 7 years from the initial release of the product, although whether the free version will be available that long is unknown.

-- David -- VMware Communities Moderator
0 Kudos
LucasAlbers
Expert

http://www.vmware.com/support/policies/lifecycle/enterprise-infrastructure/index.html

  • General Support – 5 years from general availability of a Major Release
    Support includes new hardware support, guest OS updates, bug and security fixes, and technical helpdesk services. VMware will update the Hardware Compatibility Guide with new hardware platforms that have been tested and certified. Details are described in the table below.
  • Technical Guidance – 2 years following General Support
    Primary assistance is available through the self-help page. Customers can also open a support request online to receive support and workarounds for low-severity issues on supported configurations only. (Telephone support is not provided.) There will be no new hardware support, guest OS updates, security patches or bug fixes. The phase is intended for usage by customers operating in stable environments with systems that are operating under reasonably stable loads.

* General Support for selected new hardware technology (such as servers, processors, chipsets, and add-in cards) is based on VMware's discretion, OEM partner input, and customer input. An 18-month hardware support window starts when a major or minor vSphere release is generally available. New hardware technology launched within the 18-month window will be supported in a compatible mode by an update to a vSphere major/minor release; hardware technology launched after the 18-month window will normally not be supported by that release.

So:

VMware ESXi 4 lifecycle dates (YYYY/MM/DD):

  • General Availability: 2009/05/21
  • End of Support: 2014/05/21
  • End of Technical Guidance: 2016/05/21

GA + 18 months for guaranteed hardware support.
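
If you want to sanity-check those dates, the arithmetic is simple enough to script (a quick Python sketch using the GA date quoted above; the helper is just for month math):

    from datetime import date

    GA = date(2009, 5, 21)  # ESXi 4 general availability

    def add_months(d: date, months: int) -> date:
        """Shift a date by whole months (day-of-month preserved; fine for the 21st)."""
        y, m = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + y, month=m + 1)

    print("End of General Support:   ", add_months(GA, 5 * 12))  # 2014-05-21
    print("End of Technical Guidance:", add_months(GA, 7 * 12))  # 2016-05-21
    print("Hardware support window:  ", add_months(GA, 18))      # 2010-11-21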

At some point the hardware vendors will not put much effort into supporting it either; see Dell OpenManage, for example.

0 Kudos
wdroush1
Hot Shot

JurgenD wrote:

ZFS RAID is not overkill, especially when you use it to protect your data. When one virtual disk is corrupted, the ZFS RAID can restore that virtual disk. That's why I use it. I don't trust a single virtual disk; if it gets corrupted I could lose a lot of data, and I don't want to be taking snapshots or backups of the virtual machine all the time. So to me, ZFS RAID is necessary in the guest, and it would be perfect if it were included in ESXi on the host side, because that way I could protect the host as well. ZFS is great in this regard: you can take snapshots of your filesystem, and back it up or clone it in a quick, efficient way. So it would be a great filesystem to run on the host side too. VMware's own snapshots could even be replaced with ZFS snapshots, which would make the footprint of ESXi even smaller. ZFS also provides NFS functionality, which could be used for shared folders without a large footprint. That is another major point: even Microsoft is paying a university to produce an NFS v3 driver for their OS.

I have known the VMware products from the day they were born. Actually, I am amazed they developed from a small company into a nice, big company. Maybe I should talk with them again to push them to a new frontier... I just need to find the person I talked to back then. But those guys are open to criticism, comments and improvements. I believe in VMware's products. If you can hold the market for several years, it means something. Of course some things need to be improved, but I am sure they will manage. And don't forget, with their products they made a MAJOR push towards more efficient computers and servers, and they helped create a better, nicer and greener environment. Well, in this regard, I do support them.

The product was one of the first hypervisors, and they evolved it in major ways in different directions, as I suggested a lot in those days too. One of those suggestions formed the basis of VMware Server, etc. I suggested that back when the site looked pretty Russian, with a lot of servers on the front page. I had some good debates with the developers about transforming VMware Workstation into a bare-metal product. It's a pity they chose a Linux OS; it would have been much better to start from the BSD kernel, which is still much stronger than the Linux kernels in many respects, and whose network stack is very good as well. That way there could have been a native port for the BSD community too. Now we have to stick with a Linux version, which does not run as smoothly as a native version on FreeBSD would. To me, one of the best server OSes is FreeBSD. That is why there are such great products as pfSense, FreeNAS, and maybe soon a native hypervisor named BHyVe.

I tried XenServer, but I was disappointed by its limitations. What's more, I see the VMware product can perfectly manage FreeBSD ZFS with 64-bit LBA. XenServer, like VirtualBox, cannot manage that at all for some reason. VMware can! My 64-bit FreeBSD runs on a GPT-formatted ZFS RAID. Other hypervisors give me "int13_harddisk: function 42. Can't use 64bits lba". And there are too many limitations on the free version of XenServer.

Besides, XenServer is a Citrix product, and they stand too close to Microsoft. That is why Microsoft's remote desktop products and hypervisors were, at the start, actually degraded versions of Citrix products.

No, I use VMware for development. And I want to run ESXi because that way I can very easily swap virtual machines between ESXi and VMware Workstation, so I can run the virtualized development server on my laptop when I am in a hotel and move it quickly back to the server when I return home from projects.

But I have a lot of questions related to hardware compatibility. ESXi should run on every server and every computer, without any excuses. They can provide warnings that some drivers slow down the hypervisor, but a low-cost server or workstation should run ESXi. So I am looking forward to ESXi 5, where there is the possibility to build your own custom hypervisor image, so you can load all the drivers needed to run on a low-end computer.

ZFS is pretty much enterprise-level data protection; the failure modes it protects against aren't really going to show up in a home lab environment, and anything a home environment needs can be protected with simple hardware RAID or any cheap VSA setup. I mean, I agree... I love ZFS, but to act like I'm being kicked in the balls because my free ESXi doesn't support it is kind of silly.

Have you tried KVM in that case? If limitations are your problem, KVM will blow them away. I'd complain that EMC buying VMware is terrible too, not to mention... Paul Maritz? Practically Microsoft here. I wouldn't be surprised to hear that the latest licensing blunder is meant to push more market share TO Microsoft.

Why not run a VSA if you really want ZFS support (and cheap software RAID at that)? I've heard the performance hit is negligible, especially in home labs.

How do you propose ESXi run on hardware for which there are no drivers? I mean, you can put unsupported hardware in ESXi; it just may not see it. It can't pull magical drivers out of the aether to communicate with third-party hardware devices... And I get where you're coming from; I wish there were better hardware support. There are brand-new quad-port gigabit NICs that ESXi won't detect without a lot of configuration hacking, and at the end of the day the stability is questionable.


"I am looking forward to the ESxi 5, where there is the possibility to make your custom hypervisor, so You can load all drivers, which should be needed to run on a lowend computer."

Where did you read this?

0 Kudos
GVD
Contributor

I've just had feedback from Dell (acting as our VMware representative):

The Dell VMware licensing "experts" said that the 8 GB of vRAM is "per socket", up to a maximum of 4 populated sockets. Which means:

  • 1 pCPU & up to 8 GB vRAM = OK
  • 2 pCPU & up to 16 GB vRAM = OK
  • 3 pCPU & up to 24 GB vRAM = OK
  • 4 pCPU & up to 32 GB vRAM = OK
  • 1 pCPU & 9 GB vRAM = NOT OK!

Which quite frankly is pretty retarded, since it encourages you to buy more cheap CPUs rather than one expensive one, instead of saving money on power consumption, which is part of the benefit of virtualization.

I'm not one for conspiracy theories, but Intel must be smiling at this particular change.

Personally, our standalone DEV & QA servers are 2 pCPU with 24-36 GB pRAM, meaning we can now provision less vRAM than we have physical memory. Needless to say, we won't be upgrading to Hypervisor 5.0...
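
In code terms, the rule as the Dell reps described it boils down to something like this (a rough Python sketch of my understanding, not anything official from VMware):

    def free_license_vram_ok(populated_sockets: int, vram_gb: float) -> bool:
        """8 GB of vRAM per populated socket, capped at 4 sockets.
        Note that cores are irrelevant; only populated sockets count."""
        return vram_gb <= 8 * min(populated_sockets, 4)

    print(free_license_vram_ok(2, 16))  # True:  2 pCPU, 16 GB vRAM
    print(free_license_vram_ok(1, 9))   # False: 1 pCPU, 9 GB vRAM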

0 Kudos
mauirixxx
Contributor

GVD wrote:

I've just had feedback from Dell (acting as our VMware representative):

The Dell VMware licensing "experts" said that the 8 GB of vRAM is "per socket", up to a maximum of 4 populated sockets. Which means:

  • 1 pCPU & up to 8 GB vRAM = OK
  • 2 pCPU & up to 16 GB vRAM = OK
  • 3 pCPU & up to 24 GB vRAM = OK
  • 4 pCPU & up to 32 GB vRAM = OK
  • 1 pCPU & 9 GB vRAM = NOT OK!

Which quite frankly is pretty retarded, since it encourages you to buy more cheap CPUs rather than one expensive one, instead of saving money on power consumption, which is part of the benefit of virtualization.

I'm not one for conspiracy theories, but Intel must be smiling at this particular change.

Personally, our standalone DEV & QA servers are 2 pCPU with 24-36 GB pRAM, meaning we can now provision less vRAM than we have physical memory. Needless to say, we won't be upgrading to Hypervisor 5.0...

So basically 1 pCPU with 4 cores is still only good for 8GB vRAM? Or does it take the number of cores into consideration as well?

0 Kudos
rickardnobel
Champion

mauirixxx wrote:

So basically 1 pCPU with 4 cores is still only good for 8GB vRAM? Or does it take the number of cores into consideration as well?

From the information available, it seems you could have a host with 1 CPU with 16 cores, but still only a total of 8 GB of vRAM.

My VMware blog: www.rickardnobel.se
0 Kudos
mauirixxx
Contributor

ah... all I can say now is... bummer.

0 Kudos
timarbour
Contributor

I'm going to stay on 4.1 as long as I can and then move to XenServer. I have 3 servers, each with dual 6-core processors and 128GB RAM, running Essentials Plus. I originally chose VMware for economic reasons, as it WAS priced reasonably. The pricing is just ridiculous now: I paid $1400 for Essentials Plus 2 years ago, and now it's $5700+. In these economic times, you can't jack pricing up that much. I'm going from a paying customer (revenue for VMware) to somewhere else. After all that, VMware has just priced itself out of the market, and with the version 5 licensing changes and my hardware, it's become almost useless for me.

0 Kudos
rcstevensonaz
Contributor

It would be nice if the limits at least took tri-channel motherboards into consideration, and not just dual-channel ones.

I'm just playing with this for home use and have a single-CPU motherboard (i7 930) with all 6 memory banks populated with 2GB modules. This puts my very humble machine over the 8GB limit. So on a tri-channel board you can either populate only half of the DIMM slots, or you need to find 1GB sticks instead of 2GB sticks.

Seems like they should go with at least 12GB, to allow 2GB sticks on dual-channel (i.e., 4x2GB) and tri-channel (i.e., 6x2GB) motherboards.
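
Just to spell out the math (a trivial Python sketch):

    # Fully populated memory per board type with 2GB sticks (2 DIMMs per channel),
    # compared against the free license's 8GB limit.
    LIMIT_GB = 8
    for channels in (2, 3):  # dual- and tri-channel boards
        total_gb = channels * 2 * 2
        verdict = "over" if total_gb > LIMIT_GB else "within"
        print(f"{channels}-channel, 2GB sticks: {total_gb}GB ({verdict} the {LIMIT_GB}GB limit)")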

0 Kudos
mrudloff
Enthusiast

Seems all clear to me (not that I like it).

Now, from the bottom of the FAQ:

http://www.vmware.com/products/vsphere-hypervisor/faq.html

"How much vRAM does a VMware vSphere Hypervisor license provide?

A vSphere Hypervisor license includes a vRAM entitlement of 8GB."

Now... the vRAM entitlements:

http://www.vmware.com/files/pdf/vsphere_pricing.pdf

" Depending on the edition, each

vSphere 5.0-CPU license provides a certain vRAM capacity

entitlement. When the virtual machine is powered on, the vRAM

configured for that virtual machine counts against the total vRAM

entitled to the user."
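
So as I read it, the accounting works roughly like this (an unofficial Python sketch; the VM names and sizes are made up for illustration):

    # Only powered-on VMs count, and it is their *configured* vRAM that is tallied.
    vms = [
        {"name": "dc01",  "vram_gb": 4, "powered_on": True},
        {"name": "db01",  "vram_gb": 6, "powered_on": True},
        {"name": "test1", "vram_gb": 8, "powered_on": False},  # powered off: doesn't count
    ]
    ENTITLEMENT_GB = 8  # one vSphere Hypervisor license
    used = sum(vm["vram_gb"] for vm in vms if vm["powered_on"])
    status = "OK" if used <= ENTITLEMENT_GB else "over entitlement"
    print(f"{used} GB of {ENTITLEMENT_GB} GB entitled -> {status}")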

0 Kudos