Just ran into this article
http://searchservervirtualization.techtarget.com/originalContent/0,289142,sid94_gci1260992,00.html
This sounds pretty smart: making the virtualization layer even thinner and closer to the hardware......
Can anyone confirm this rumor?
Hmmm, not so sure, Massimo. I've seen it come from a Dell rep's mouth in a presentation in front of 40+ CIOs; imagine the multiples of people below them that will have this information passed on to them.
I'm going to wait until VMworld and see whether they have a demo on show. Seeing is believing, as a lot of companies seem to talk the talk but never walk it!
Dan
PS: So glad I've had my 50KB of fame on a news site
Well Dan, this assumes that the Dell sales rep knows what a BIOS is and how it works. Perhaps you are giving too much credit to the category...
I have also heard someone mention having "ESX on the motherboard" (as opposed to in the BIOS). This is a very loose statement since, technically, you can even have a standard SAS hard drive "on the motherboard". This very same article, for example, speculates that you could see "an embedded version residing in memory" ......... Memory? Which memory? A disk could be "memory" ...... etc etc etc
In my opinion people are leveraging the situation to find the most effective/shocking "statements" ......... After all, stating that a server has "the hypervisor in the firmware" is very cool ..... isn't it? It makes it look like those very high-end enterprise proprietary boxes .......
Reality is that we are not quite there yet. Quite frankly, I don't see how one would be able to put a hypervisor (specifically one of the nature of ESX, which is a relatively big beast even if you strip it down by removing the Service Console etc etc) into a BIOS or firmware. It has to be more of an on-board "disk" (whether it is standard SAS/SCSI, solid state, etc ... that's a detail).
I guess we will know more down the road.
Massimo.
Yep, I don't disagree with that. I'm neutral (don't work for a vendor) and open to what will emerge from this.
Technically, how can you even future-proof a solution like this and make it cost-effective, with things like new releases and patching adding to the size of the complete ESX footprint?
Trial until proven guilty for me
Physical upgrade, a nice little earner
Dan,
Just so we are on the same page .... while I do work for a vendor, I am trying to keep my discussions neutral. The fact that this is a Dell-oriented discussion is a detail to me ..... I am more focused on the technology discussion / feasibility of the matter ..... that is what I am interested in ....
Massimo.
I agree with Massimo. My guess (and it's just a guess!) is that you'll see something like open-e's "disk on module" that allows you to have an "ESX appliance". This "puts ESX on the motherboard" and, if you want to stretch a little, you could consider this to be "ESX in non-volatile storage".
To be honest, there's nothing preventing someone from doing this today with the current version of VI-3, other than the fact that there's still too much state information stored locally on an ESX box (but there are ways to work around that).
Exactly Ken.
My only point was that, from a marketing perspective, this would literally be "ESX on the BIOS (or motherboard)". However, we geeks know that this is not architecturally different from having the state-of-the-art ESX installed on a couple of traditional 2 x RAID1 drives ......
Well, I am downplaying it a lot here (which is not my real intent) .... there are things going on that could make this deployment method more attractive/secure/easy etc etc etc ...... compared to a standard ESX install ....... but at the same time one can't claim that this is like turning the ESX we know today into a "firmware" type of thing (like on UNIX/mainframe boxes) ...... because it's not.
Massimo.
Conceptually, this is similar to the way that Virtual Iron handles their host machines: PXE boot to a central image. They're just chucking the boot image into a chip on the board.
Definitely an attractive concept for IT shops wanting to virtualize, but not wanting to worry about training people on Linux to manage the SC. Like it or not, VMware's largest penetration seems to be with Windows shops. My concern would be around security -- if it is an appliance, are there default passwords and accounts? What does that look like from a security and compliance perspective?
Having stateless hosts is pretty attractive, and VMware seems to have been going towards more of an appliance model with ESX.
NB: I also work for a vendor and would be interested in seeing a VM blade-appliance from HP. Mmmm... DR...
>if it is an appliance, are there default passwords and accounts? What
>does that look like from a security and compliance perspective?
Even if there is a default id/pwd account, you will just be able to change it right after deployment. I don't see this as being a big issue.
Actually, cutting the CoS from the picture can only improve security, as you are getting rid of the most insecure portion of the whole stack.
Massimo.
Agreed. Reduction of the attack surface is a good thing.
However, it seems that the VMkernel would need to be directly accessible rather than having the CoS effectively proxy the connection.
I don't know how many people maintain user accounts within the CoS. I would imagine that the appliance would simply rely on VC for authentication beyond the root-equivalent. Integration of THAT into the VMkernel would seem like a giant step in the wrong direction.
>However, it seems that the VMkernel would need to be directly accessible
>rather than having the CoS effectively proxy the connection.
This is an implementation detail, and clearly getting rid of the COS is not just an "rm" of some files ....
As for maintaining a user database ..... clearly, as you have mentioned, that would be a step in the wrong direction ..... so I doubt they will do it ....
This is already a bad practice today .... and in the future it will be even more so if we move towards stateless building blocks.
Massimo.
If we are talking ESX Lite, or ESX in a BIOS, I would like to think of it in an appliance type of way.
The largest appliance makers (e.g. NetApp) use CF cards for their OSes, or put the OS on the HDDs in the system.
The nicest idea would be to have the ESX hypervisor on a CF card and the rest on a LUN on a SAN, which most of us use anyhow because of VMotion.
It could be a black box with a lot of memory, FC or iSCSI for SAN connectivity, and enough CPUs.
The box could be small but easy to maintain.
After another meeting with Dell it looks like this now:
Alternative 1:
- 2U size
- 2 sockets for processors
- 12 slots for RAM, max 128 Mb RAM (a bit vague from the Dell rep)
- 4 integrated Gb NICs
- 4 slots for expansion cards
- 2 slots for hard drives
- optional embedded ESX hypervisor (on a solid-state disk); no need for local hard drives if the solid-state disk is used and the server is connected to some kind of shared storage for storing virtual machines
Alternative 2:
- 3U or 4U size, 4U more likely (this was a bit vague from the Dell rep)
- 4 sockets for processors
- 32 slots for RAM, max 256 Mb RAM
- 4 integrated Gb NICs
- 4 slots for expansion cards (should be more if 4U size? again a bit vague from the Dell rep)
- 2 slots for hard drives
- optional embedded ESX hypervisor (on a solid-state disk); no need for local hard drives if the solid-state disk is used and the server is connected to some kind of shared storage for storing virtual machines
The "embedded ESX hypervisor" is nothing special: it's ESX Server 3.x. There's no "ESX Lite", according to this Dell rep and also according to a VMware rep we met at the same time.
/Fedde
>Alternative 1:
>- 12 slots for RAM, max 128 Mb RAM (a bit vague from the Dell rep)
Sorry for my typo, it was of course 16 slots for RAM, max 128 Mb RAM.
/Fedde
This is interesting (i.e. funny).
Originally this was pitched as "ESX in the BIOS" ... now it has become a standard ESX 3.x pre-install ....
My suggestion is .... don't count on these rumors .... these come from people that have a limited understanding of the technical details. Those that really know keep their mouths shut.
My 0.0000002 cents.
Massimo.
>Sorry for my typo, it was of course 16 slots for RAM, max 128 Mb RAM.
128 GB, surely?
>>Sorry for my typo, it was of course 16 slots for RAM, max 128 Mb RAM.
>128 GB, surely?
Yes, of course 128 GB RAM.
Sorry for my typo.
/Fedde
If it is 128 GB then it must be 16 DIMM slots.
The largest modules will be 8 GB.
So 128 GB / 8 GB = 16 DIMM slots.
And 16 DIMM slots / 2 CPUs = 8 DIMM slots per CPU socket.
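The slot arithmetic above can be sketched as a quick sanity check. The figures are just the ones quoted in this thread (128 GB max, 8 GB DIMMs, 2 sockets), not confirmed vendor specs:

```python
# Back-of-the-envelope DIMM math using the figures quoted in this thread
# (assumed values, not confirmed Dell specifications).
total_ram_gb = 128   # max RAM quoted for the 2-socket box
dimm_size_gb = 8     # largest DIMM assumed available
cpu_sockets = 2

dimm_slots = total_ram_gb // dimm_size_gb        # 128 / 8 = 16 slots
slots_per_socket = dimm_slots // cpu_sockets     # 16 / 2 = 8 slots per socket

print(f"{dimm_slots} DIMM slots, {slots_per_socket} per socket")
# 16 DIMM slots, 8 per socket
```

Which matches the corrected 16-slot figure, so the 128 GB number at least hangs together.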
Out of curiosity .... how much would these 8 GB DIMM modules cost in real life?
The risk here is that of buying a low-cost apartment (yet with a big mortgage) and deciding to furnish it with golden teapots ..........
It's easy to say that with 256 GB DIMMs you could get to 256*16 = 4 TB of memory ...... but you have to pay for that ....
Massimo.
Haha Mass, I am of the same opinion! Do they even manufacture 8 GB DIMMs yet!?