Expert

Why shouldn't I architect new ESXi installs to leverage internal SD/USB cards?

With ESX 4.1 being the last release of classic ESX, I am looking at coming up with a standard best practice for ESXi hardware. I am intrigued by the idea of installing ESXi on supported SD/USB media inside HP blades. This would obviate the need for Smart Array cards and, more importantly, local disks.

Can anyone help me shoot holes in this concept?

Thanks,

-MattG

If you find this information useful, please award points for "correct" or "helpful".


Some might object to the SD/USB card being a single point of failure, but I'm sure mirrored SD will become a pretty standard thing. Dell already offers this - http://www.dell.com/downloads/global/products/pedge/en/poweredge-idsdm-whitepaper-en.pdf.

With ESXi 4.1 you also have the option to boot from SAN, so that might be another option for you. If you use Embedded, the supported recovery method is to use a recovery CD and then do your post-install configuration (probably with a script). With Installable you can PXE boot the installer and script the entire process for faster recovery.
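To illustrate the scripted-install idea, a minimal ESXi kickstart file could look something like the sketch below. The directives shown are the commonly used ones for ESXi Installable scripted installs, but the exact syntax varies by version, and the URL and password are placeholders, so treat this as an outline rather than a working file:

```
# ks.cfg -- minimal ESXi scripted-install sketch (hypothetical values)
accepteula
rootpw MySecretPassword
# install to the first detected disk, overwriting any existing VMFS
autopart --firstdisk --overwritevmfs
install url http://deploy.example.com/esxi/
network --bootproto=dhcp --device=vmnic0
```

Served over HTTP alongside a PXE-booted installer, a file like this lets a failed host be rebuilt hands-off, which is what makes the Installable recovery path fast.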

As an aside, if I recall correctly you had mentioned in the past that the lack of vCLI parity with the service console was a significant factor preventing you from migrating to ESXi. With the 4.1 release, is that still an issue?




Dave

VMware Communities User Moderator

Now available - vSphere Quick Start Guide

Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.

Expert

Is the SD/USB card a legitimate single-point-of-failure concern that I should architect around?

Since SD/USB has no moving parts, shouldn't it be considered more reliable than an HD?

I am not interested in booting from SAN, as it has its own limitations and requires a dedicated LUN per host.

As for the vCLI parity comment, with ESX 4.1 being the last release of classic ESX, and with the addition of the VMware APIs (vStorage), there are few third-party reasons not to make the leap.

-MattG

Expert

I have already been doing this since August 2008 in HP servers with USB sticks. The first USB sticks (the green ones) were not certified.

So since October 2008 (we are approaching the two-year mark) the second-generation (1 GB) USB sticks have been running.

It is an HP DL585 G2 with 50-80 VMs.

By now we are on the fifth generation of HP USB sticks, which is 2 GB.

Now I am switching to Dell R815 and R910 because they have dual redundant SD cards. In a test, SD card 1 and SD card slot 1 failed, and ESXi survived. Cool.

I found only one negative side to the dual card: for recovery, you have to take the broken SD card 1 out of slot 1, put SD card 2 from slot 2 into slot 1, and put the new SD card in slot 2. Otherwise you end up with a blank card. Maybe I did something wrong here, but I don't think so.

Another good thing about the SD card is that admins can no longer easily use local storage, because it is not shown in the vCenter/ESXi datastore view!

Third, you don't need a DHCP/PXE infrastructure for deployment. Buy the server, give it an IP address and a host profile or script, and you are finished.

Expert

The only other vendor I found is Cisco, which also has a dual SD card for its big rack server.

I would go for blades, as I saw the new IBM X5 series, which has an extra 32-DIMM box.

In my experience, the hosts are always running out of memory.

How can blades today or in the future provide so much memory? Bigger motherboards or very small DIMMs?

Expert

Would you consider this something you would do as a best practice, or more something based on the customer's requirements?

I know HP had some issues with USB and ESXi in the past, but would you consider a single SD card a reliable alternative to 2 x HDs in RAID 1?

-Matt

Expert

How much memory do you really need in a 2-socket server?

If I am running 96 GB (8 GB x 12 slots = faster memory speed), how many VMs do you plan on running per server? Even if I were to have 96 VMs on this host, that would be roughly 1 GB guaranteed per VM, and 96 VMs is a lot of VMs per host. The nature of VM usage patterns is that most VMs don't use all of their granted memory, so you can overcommit without even worrying about VMware having to balloon, compress, or swap.
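The overcommit math above can be sketched as a quick back-of-envelope calculation. The per-VM figures here (2 GB granted, roughly half actively used) are illustrative assumptions, not numbers from this thread:

```python
# Rough memory-overcommit estimate for a 96 GB host.
# Assumed workload: each VM is granted 2 GB but actively touches ~50% of it.
host_ram_gb = 96
granted_per_vm_gb = 2
active_fraction = 0.5  # most VMs don't use all of their granted memory

# Without overcommit: sized by granted memory.
max_vms_no_overcommit = host_ram_gb // granted_per_vm_gb

# With overcommit: sized by the memory VMs actually use.
active_per_vm_gb = granted_per_vm_gb * active_fraction
max_vms_overcommit = int(host_ram_gb // active_per_vm_gb)

print(max_vms_no_overcommit, max_vms_overcommit)  # 48 without, 96 with
```

In other words, if active memory really is about half of granted memory, the host fits roughly twice as many VMs before the hypervisor ever has to balloon, compress, or swap.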

-MattG

Expert

Yes, I would. Why? Because ESXi boots from the SD card/USB stick, then runs in RAM and only writes back to the stick about once an hour. So RAM is much more critical than a hard disk or an SD card.

Maybe in the future ESXi will run completely in RAM.

Also, a good article:

http://www.techhead.co.uk/why-run-vmware-esxi-from-a-memory-stick-or-sd-card

And really: HP and IBM didn't hear the customer. I got the idea for the dual redundant SD card from a camera, so I sent the dual SD card camera implementation to every server OEM in early 2009. Only Dell got it. Bad for the others.

At the moment I will only buy Dell servers. Maybe HP and IBM will have this in 2011?

Expert

Maybe the following example: today you have Windows 2003 in your shop; now you switch to Windows 2008. This means every VM needs more RAM.

Or in my case, a full datacenter: it is cheaper to replace an existing one-year-old server (even with 96 GB) with a 4-socket server.

The rack cost, power, cooling, rack operation cost, cabling and Cisco switches alone are so expensive over 3 years that a new server with much more RAM fits best for us. The best case is a Dell R910 with 512 GB. And once VMware supports the MCA reliability features of Nehalem-EX (next year, Westmere-EX), we will put more than 300 VMs on one box. The 2-socket CPUs will not have these features in 2010 and 2011, Intel told us.

Expert

This is a bit old - http://communities.vmware.com/docs/DOC-7512 - but it provides an example in which the hosts would boot from a common image and then be automatically configured to join vCenter.

Expert

At the moment it is not supported. So we have to wait until ESXi can be loaded over the network and run directly in RAM.

So for me, at the moment, the easiest method is a Dell R910, R815 or R810 server with a dual SD card. Nothing to deploy.

If network boot to RAM becomes available in the new version 5.x, then that could be the next choice for headquarters datacenters.
