VMware Cloud Community
SuperSpike
Contributor

vSphere 5 Licensing

I took a minute to read the licensing guide for vSphere 5 and I'm still trying to pull my jaw off the floor. VMware has completely screwed their customers this time. Why?

What I used to be able to do with 2 CPU licenses now takes 4. Incredible.

Today

BL460c G7 with 2 sockets and 192G of memory = 2 vSphere Enterprise Plus licenses
DL585 G7 with 4 sockets and 256G of memory = 4 vSphere Enterprise Plus licenses

Tomorrow

BL460c G7 with 2 sockets and 192G of memory = 4 vSphere Enterprise Plus licenses
DL585 G7 with 4 sockets and 256G of memory = 6 vSphere Enterprise Plus licenses


So it's almost as if VMware is putting a penalty on density, encouraging users to buy hardware with more sockets rather than more memory per socket.

I get that the vRAM entitlements are for what you use, not necessarily what you have, but who buys memory and doesn't use it?

Forget the hoopla about a VM with 1 TB of memory. Who in their right mind would deploy that using the new license model? It would take 22 licenses to accommodate! You could go out and buy the physical box for way less than that today, from any hardware vendor.
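
For anyone who wants to check my math, here's a quick sketch of the new arithmetic as I read the licensing guide (assuming 48 GB of vRAM per Enterprise Plus license, licenses still covering every physical socket, and configured vRAM roughly equal to physical RAM, as in my examples above):

```python
import math

VRAM_PER_LICENSE_GB = 48  # Enterprise Plus entitlement per the vSphere 5 guide

def licenses_needed(sockets, vram_gb):
    # one license per physical CPU, AND enough licenses to cover the vRAM
    return max(sockets, math.ceil(vram_gb / VRAM_PER_LICENSE_GB))

print(licenses_needed(2, 192))                # BL460c G7: 4 (was 2)
print(licenses_needed(4, 256))                # DL585 G7:  6 (was 4)
print(math.ceil(1024 / VRAM_PER_LICENSE_GB))  # a 1 TB VM alone: 22 licenses
```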

Anyone else completely shocked by this move?

@Virtual_EZ
kmcferrin
Enthusiast

wdroush1 wrote:

The APIs that Citrix and Microsoft use are open?

Tsk tsk, VMware. :(

As far as I know, yes. SCVMM 2012 can manage XenServer, and Citrix can manage Hyper-V. I don't know the specifics of how Xen's management works, but on the Hyper-V side everything is pretty much exposed via common Windows APIs and manipulated with PowerShell. If VMware wanted to, they could add support for managing multiple hypervisors to their management platform relatively easily, but I get the impression that their stance is "our solution is the best technically, so why would you use anyone else's?" Which is a fair question from their perspective, and a convenient one, because they don't want to make it easy for people to test drive less expensive competitors. Coexistence is something they accept only grudgingly.
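
For example, here's a rough sketch of poking at Hyper-V from outside any Microsoft tooling - just a sketch, assuming the third-party Python "wmi" package and the 2008 R2-era Hyper-V WMI provider in the root\virtualization namespace:

```python
# Sketch: enumerating Hyper-V VMs through the public WMI provider.
# Assumes the third-party "wmi" package (pip install wmi), run on the
# Hyper-V host; namespace/class names are the 2008 R2-era provider.
import wmi

conn = wmi.WMI(namespace=r"root\virtualization")
for vm in conn.Msvm_ComputerSystem(Caption="Virtual Machine"):
    # ElementName is the friendly VM name; EnabledState 2 = running, 3 = off
    print(vm.ElementName, vm.EnabledState)
```

The same provider exposes methods to create, start, stop, and reconfigure VMs, which is what SCVMM and the PowerShell scripts drive under the hood.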

rbtwindude
Contributor

@kmcferrin

Hotlink.com has done this.

Sent from my Verizon Wireless BlackBerry

wdroush1
Hot Shot

rbtwindude wrote:

@kmcferrin

Hotlink.com has done this.

Sent from my Verizon Wireless BlackBerry

Heh, I kept assuming the URL in your posts was just your signature and ignored it, thinking that cross-hypervisor compatibility was a feature of vCenter 5 by itself.

An interesting idea. Considering how nice vCenter is, I can definitely see it being used to back cross-hypervisor environments... will it do XenMotion to vMotion? Now that would be interesting!

I'm also interested in how you deal with missing features; you show off KVM, but I know a KVM machine can't be configured in nearly as fine-grained a way as a vSphere VM.

aroudnev
Contributor

Cross-vendor VC management will NEVER work well. It is no more than a sales trick; it is just impossible to replicate it well enough. All it can do is simplify simple tasks in very big multi-vendor data centers; any complicated management will require the native VC.

So I never trust any claim that their center can manage VMware, or vice versa - this is 100% impossible. Yes, MS's center can do SOME VC tasks, maybe, but it can never replace a full-scale vCenter or the VC console.

wdroush1
Hot Shot

aroudnev wrote:

Cross-vendor VC management will NEVER work well. It is no more than a sales trick; it is just impossible to replicate it well enough. All it can do is simplify simple tasks in very big multi-vendor data centers; any complicated management will require the native VC.

So I never trust any claim that their center can manage VMware, or vice versa - this is 100% impossible. Yes, MS's center can do SOME VC tasks, maybe, but it can never replace a full-scale vCenter or the VC console.

It's the other way around: this is basically vCenter running a plugin that can manage other hypervisors. I'm assuming that's fairly doable because the XenServer/Hyper-V APIs are pretty openly available?

But yeah, you're still going to have your hands tied by the limitations of the hypervisor. I was just curious how you'd even begin to handle that at the software level (I've never played around with VMware's SDKs much, so... just curious mostly).

But honestly, I could almost see a XenMotion -> vMotion plugin too, similar to how Veeam can stream data that isn't natively understandable by vSphere (transposing backups in real time to be able to run them off the backup format).
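
Purely hypothetical, but the plugin shape I'm imagining is just an adapter per hypervisor behind one common interface - none of these names come from VMware's actual SDK:

```python
# Hypothetical sketch of a cross-hypervisor plugin: one adapter per
# platform behind a common interface. All of these names are invented.
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    @abstractmethod
    def list_vms(self):
        ...

    @abstractmethod
    def migrate(self, vm, dest_host):
        ...

class XenServerAdapter(HypervisorAdapter):
    def list_vms(self):
        # would call XenServer's XML-RPC ("xapi") API here
        return []

    def migrate(self, vm, dest_host):
        # would drive XenMotion; raise when a feature doesn't map cleanly
        raise NotImplementedError("no clean vMotion equivalent yet")
```

Missing features then become adapters that raise instead of silently pretending, which is about the best you can do when the underlying hypervisor just doesn't support something.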

kmcferrin
Enthusiast

wdroush1 wrote:

But honestly, I could almost see a XenMotion -> vMotion plugin too, similar to how Veeam can stream data that isn't natively understandable by vSphere (transposing backups in real time to be able to run them off the backup format).

I can't see that happening, at least not in any sort of timely fashion. It would be more like a live V2V conversion. With vMotion you're (at a very simplistic level) making a copy of the VM's memory, switching ownership of the VMDK and config files, then doing an ARP to pass the network connections over. To move between platforms like that you would need to move the disk files between storage (or have both platforms able to see the same LUNs/NFS shares), AND do the memory copy. Then you have to rip out the integration components/VM Tools and replace them with the correct version, which could necessitate driver updates/replacements depending on how the virtualized devices present themselves, etc. It really would be a P2V-style conversion at best.
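
To make the ordering concrete, here's a toy simulation of the pre-copy flow I'm describing - purely illustrative, nothing like VMware's actual implementation:

```python
# Toy simulation of the live-migration steps above. Memory is a set of
# page IDs; the "VM" re-dirties a fraction of whatever we just copied.
import random

DIRTY_THRESHOLD = 8  # stop pre-copying once this few pages remain

def live_migrate(pages):
    dirty = set(pages)
    rounds = 0
    # 1. Pre-copy memory while the VM keeps running (and dirtying pages)
    while len(dirty) > DIRTY_THRESHOLD:
        shipped = sorted(dirty)
        dirty.clear()
        dirty.update(random.sample(shipped, len(shipped) // 4))
        rounds += 1
    # 2. Stun briefly: copy the last dirty pages plus device state
    # 3. Flip ownership of the VMDK/config files on the shared datastore
    # 4. Resume on the destination and ARP so switches relearn the MAC
    return rounds, len(dirty)

rounds, final = live_migrate(range(1024))
print(f"pre-copy rounds: {rounds}, pages copied during the stun: {final}")
```

Cross-platform, you'd have to bolt a disk copy onto the front of that and a tools/driver swap onto the back, which is why it ends up looking like a V2V conversion rather than a vMotion.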

wdroush1
Hot Shot

kmcferrin wrote:

wdroush1 wrote:

But honestly, I could almost see a XenMotion -> vMotion plugin too, similar to how Veeam can stream data that isn't natively understandable by vSphere (transposing backups in real time to be able to run them off the backup format).

I can't see that happening, at least not in any sort of timely fashion. It would be more like a live V2V conversion. With vMotion you're (at a very simplistic level) making a copy of the VM's memory, switching ownership of the VMDK and config files, then doing an ARP to pass the network connections over. To move between platforms like that you would need to move the disk files between storage (or have both platforms able to see the same LUNs/NFS shares), AND do the memory copy. Then you have to rip out the integration components/VM Tools and replace them with the correct version, which could necessitate driver updates/replacements depending on how the virtualized devices present themselves, etc. It really would be a P2V-style conversion at best.

Yeah, you're right; the tools and drivers are really the killer part (the whole storage migration/stream wouldn't be a huge deal - Veeam does it).

rbtwindude
Contributor

PlateSpin has a pretty simple process that could give hope.

P2x and V2x. Now. :)

Sent from my Verizon Wireless BlackBerry

AureusStone
Expert

VMware is working on this and has a solution in VMware Labs.

http://labs.vmware.com/flings/xvp

They are a bit behind the competition in this one small area, but they are miles ahead in a lot of other areas.

Dracolith
Enthusiast

kmcferrin wrote:


If VMware wanted to, they could add support for managing multiple hypervisors to their management platform relatively easily, ....

Maybe they DO want to?

http://labs.vmware.com/flings/xvp

"VMware vCenter XVP Manager....   basic virtualization management  capabilities for non-vSphere hypervisor  platforms towards enabling  centralized ..

*Familiar vCenter Server graphical user interface for navigating through and managing non-vSphere inventory

"

However, something tells me that if VMware ever makes a product out of it, it will still be a separate product with additional cost above and beyond vCenter Standard, and will eventually get vRAM license restrictions for non-VMware hypervisors too, or you'll have to buy "vCenter Agent for third-party hypervisor" per-CPU licenses at some large percentage of the cost of a vSphere CPU license.

It's the management capabilities they provide around vSphere that make VMware so attractive, anyway....

tomaddox
Enthusiast

http://www.theregister.co.uk/2011/08/17/redhat_rhev_3_beta/

"Companies decide to standardize their Linuxes on RHEL, then they  virtualize their workloads using either the integrated KVM or RHEV.  Then, they look at the cost of vSphere from VMware and decide to try a  few Windows workloads on RHEV. Thadani says that prior to VMware's  vSphere 5.0 launch and its memory tax, RHEV cost about one-seventh as  much per host to virtualize x64 machines with the same number of VMs.  But in the wake of the virtual memory tax, even after VMware's  rejiggering, RHEV now costs one-fifteenth to one-twentieth of vSphere  5.0 to virtualize a big, fat server."

One-fifteenth to one-twentieth. That's certainly enough to get one's attention. VMware, you out there?

wdroush1
Hot Shot

tomaddox wrote:

http://www.theregister.co.uk/2011/08/17/redhat_rhev_3_beta/

"Companies decide to standardize their Linuxes on RHEL, then they  virtualize their workloads using either the integrated KVM or RHEV.  Then, they look at the cost of vSphere from VMware and decide to try a  few Windows workloads on RHEV. Thadani says that prior to VMware's  vSphere 5.0 launch and its memory tax, RHEV cost about one-seventh as  much per host to virtualize x64 machines with the same number of VMs.  But in the wake of the virtual memory tax, even after VMware's  rejiggering, RHEV now costs one-fifteenth to one-twentieth of vSphere  5.0 to virtualize a big, fat server."

One-fifteenth to one-twentieth. That's certainly enough to get one's attention. VMware, you out there?

These guys were throwing tons of eval licenses my way; it is extremely cheap to go with them, and they've caught up a lot in the little time that RHEV has existed as a package backed by RHEL. I'm very interested in how far RHEV will go in a few years.

Starter:

6 sockets, no RAM limitation, $3,000 - $4,500 (standard/premium support).

Standard purchasing:

4 sockets, no RAM limitation, $4,000 - $6,000 (standard/premium support).

Their management system (web-based) comes with either package. On top of that, SPICE looks amazing (virtual desktops). Also, they're migrating their entire management system away from Windows in a few months, so it's like vCenter 5, except it actually works when it's not on a Windows box.

Edit: reading the article, they've already done the above... yay!

2 TB of memory (host or VM), 128-core hosts, 64-core VMs: some pretty good stuff for large datacenters.
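
Back-of-the-envelope, with some assumed numbers (vSphere 5 Enterprise Plus at roughly $3,495 list per CPU with the revised 96 GB vRAM entitlement - both assumptions on my part - against the RHEV figures above):

```python
import math

# Assumed list prices - illustration only, not quotes.
VSPHERE_ENT_PLUS_PER_CPU = 3495  # rough per-CPU list price (assumption)
VRAM_PER_LICENSE_GB = 96         # Enterprise Plus entitlement after the revision

def vsphere_license_cost(sockets, vram_gb):
    licenses = max(sockets, math.ceil(vram_gb / VRAM_PER_LICENSE_GB))
    return licenses * VSPHERE_ENT_PLUS_PER_CPU

# A "big, fat server": 4 sockets, 512 GB RAM, all of it used as vRAM
vs = vsphere_license_cost(4, 512)  # 6 licenses -> $20,970
rhev = 6000                        # 4-socket RHEV, premium, per the post above
print(f"vSphere: ${vs:,}  RHEV: ${rhev:,}  ratio: {vs / rhev:.1f}x")
```

License list price alone only gets you to a few times cheaper, so Red Hat's fifteen-to-twenty-times figure presumably folds in support subscriptions, vCenter, and multi-year terms.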

aroudnev
Contributor

The only big problem with RHEV etc. is that the file systems are not clustered, so nothing can replace the always-clustered VMFS (where we allocate 2 TB disks on the SAN/iSCSI and then use those disks from any of our many VMware servers). I don't think GFS can replace it, but who knows...

I am intrigued and will look into what RHEV really is.

wdroush1
Hot Shot

aroudnev wrote:

The only big problem with RHEV etc. is that the file systems are not clustered, so nothing can replace the always-clustered VMFS (where we allocate 2 TB disks on the SAN/iSCSI and then use those disks from any of our many VMware servers). I don't think GFS can replace it, but who knows...

I am intrigued and will look into what RHEV really is.

RHEV doesn't use a file system for VM disks; it uses raw LVM logical volumes. That helps scalability and removes some overhead (unless they've changed this recently).

http://billbauman.com/blog/category/virtualization-2/rhev/

It does, however, add a level of complexity if you try to manage the storage with a file browser or something (you can't), and it probably makes ad-hoc recovery of individual disks much harder too.
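
For example, on a host that can see the storage domain's volume group you'd inspect the disks with the standard LVM tools instead of a file browser - a sketch, with the volume group name made up:

```python
# Sketch: with raw-LVM storage each VM disk is a logical volume, so you
# inspect them with standard LVM tools. The volume group name is made up.
import subprocess

out = subprocess.check_output(
    ["lvs", "--noheadings", "-o", "lv_name,lv_size", "rhev_storage_domain_vg"],
    text=True,
)
for line in out.splitlines():
    name, size = line.split()
    print(f"VM disk LV: {name}  size: {size}")
```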

aroudnev
Contributor

Hmm, LVM is cluster-aware. But what do they do for the small files (such as configuration files)?

(I need to look at the docs for info.)

Dracolith
Enthusiast

aroudnev wrote:

The only big problem with RHEV etc. is that the file systems are not clustered, so nothing can replace the always-clustered VMFS (where we allocate 2 TB disks on the SAN/iSCSI and then use those disks from any of our many VMware servers). I don't think GFS can replace it, but who knows...

So use an NFS-based SAN.

Or, if shared SCSI from the SAN is the only option, clustered LVM or VxFS?

Don't get me wrong... I love VMFS.

But it's possible to do without VMFS on alternate platforms and not lose any shared-access functionality that is actually a requirement.

AureusStone
Expert

That Bill Bauman article is factually wrong. It is obvious he is a big RHEV advocate, but he does not understand VMware's architecture well enough to be educating people about it.

I read the entire article and then looked at the date. I was surprised to see it was written this year. A lot of his criticism of VMFS performance is based on ESX 3.x. With all of the improvements in locking, and technology such as VAAI, the performance difference between RDMs and VMFS is basically nothing. VMFS is a really good technology and a great asset to VMware; to suggest otherwise without proper evidence seems silly.

The limit on vSphere cluster sizes has nothing to do with VMFS; it is more about HA limitations. VMFS3 supports being shared by up to 64 hosts. There are plenty of reasons for not using 32-node clusters; it really depends on the environment. 8-node clusters used to be popular for virtual desktop environments, due to the limit on how many guests you could put on a host in a cluster of more than 8 nodes.

Basically, his article is the equivalent of saying MacOS is better than Windows because my Tiger desktop is faster and more reliable than my Windows 98 desktop.

wdroush1
Hot Shot

AureusStone wrote:

That Bill Bauman article is factually wrong. It is obvious he is a big RHEV advocate, but he does not understand VMware's architecture well enough to be educating people about it.

I read the entire article and then looked at the date. I was surprised to see it was written this year. A lot of his criticism of VMFS performance is based on ESX 3.x. With all of the improvements in locking, and technology such as VAAI, the performance difference between RDMs and VMFS is basically nothing. VMFS is a really good technology and a great asset to VMware; to suggest otherwise without proper evidence seems silly.

The limit on vSphere cluster sizes has nothing to do with VMFS; it is more about HA limitations. VMFS3 supports being shared by up to 64 hosts. There are plenty of reasons for not using 32-node clusters; it really depends on the environment. 8-node clusters used to be popular for virtual desktop environments, due to the limit on how many guests you could put on a host in a cluster of more than 8 nodes.

Basically, his article is the equivalent of saying MacOS is better than Windows because my Tiger desktop is faster and more reliable than my Windows 98 desktop.

Didn't even read it; I just wanted the LVM images and a source for the spec, and I don't really care to hear anyone talk about VMFS much. :( Yeah, most people are extremely biased; I'd say some good points were raised, though. It's kind of normal for the Linux guys to go into really technical details and leave out that, yeah, that's still a decent-sized cluster and is fine, and a lot of RHEV's scale is cool but untouchable by most.

Also, it's more like comparing Tiger to XP. Oh wait, they did. ;) ESX 3 isn't that old; 4 was out in 2009.

LucasAlbers
Expert

Pointing out some not-so-obvious caveats with RHEV:

http://www.vcritical.com/2010/06/these-are-not-the-files-you-are-looking-for/#comment-11027

If you are considering RHEV, consider some of its limitations.

An InformationWeek survey about vSphere 5 licensing:

http://www.informationweek.com/news/hardware/virtual/231500090

93% use the VMware hypervisor; 89% have it as their primary hypervisor.

Survey from users (each graph has comments at the bottom; don't forget to read those):

http://www.informationweek.com/news/galleries/hardware/virtual/231500085

"Server memory configurations will shoot up over the next two years. The  fraction of servers with 128 to 256 GB of memory will double and those  with more than 256 GB will triple. The effect: Many of those who aren't  bumping up against VMware's vRAM licensing limits yet will do so in two  years."

"Only 7% of those respondents who know about the new licensing model say they like it, while six in ten say it's a deterrent. This data was collected before VMware announced changes to the original vRAM licensing policy."
