We are moving to a new DC soon and wondered if this could be an opportunity to SAN boot our ESX farms. We are an HP blade, FC-AL-based HP SAN environment. I'm up to speed with the technical configuration of getting a host to SAN boot, but I really wonder if it is a "best practice" and whether it is worth the additional effort that may be required of our Operational team - keeping HBA and SAN switch firmware updated, etc.
Appreciate your feedback and any comments/experiences you may have had...
To me, booting ESX from the SAN seems to be more trouble than it is worth. I can see that if you had only one ESX server, there would be a concern that local storage could fail, and even though the VMs were on the SAN you would lose your virtual environment. In most cases, however, you have at least two ESX hosts in a cluster, and the failure of one host's local storage would not bring the VMs down, assuming HA and DRS were in place. As you said in your post, managing HBA firmware and the difficulties of configuring boot from SAN can be a lot of work, and personally I'm not sure the benefits are worth it.
As an alternative to boot from SAN for your ESX hosts, I would recommend using two or more ESXi Embedded hosts. With ESXi Embedded on the local system there is no need for local storage, and in the case of a host failure you would either reinstall or replace the host entirely.
That's my $0.02.
I absolutely boot all my ESX hosts from SAN...in 2009, this is certainly best practice. Years ago, maybe not, but today, absolutely.
To be fair, some SANs are (far) easier than others when it comes to managing boot-from-SAN, but several make it very simple. In addition, boot-from-SAN enables vMotion to be much more effective. If you don't use vMotion, I can see where boot-from-local is OK, since the VMs will never move.
I am the opposite: I do not use SAN boot. Actually, I use local storage as part of my backup, to keep a copy of my most important VMs just in case my SAN crashes. If that happens, I can recover quickly - unless, that is, there is not enough local storage. Always consider the DR case; SANs do not always stay running, etc.
Edward L. Haletky
VMware Communities User Moderator
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs -- Top Virtualization Security Links -- Virtualization Security Round Table Podcast
Just to add another 2 cents to the pot....
The most common place I've seen boot from SAN implemented is a scenario where DR is planned and built for a smoking-hole type scenario. All hosts boot from SAN, all data resides on the SAN, and the datacenter storage is replicated between sites at the storage array level. Cold, identical standby hosts are placed in the remote site. If a failure occurs in the primary site, a precise replica is a power-on away at the remote site, as the boot LUNs are replicated as well as the data LUNs (this also means that networks and IP spaces need to be identical, and plenty of network-level failover and routing needs to be set up as well). Depending on distance, the remote site is typically not more than a minute behind the primary site (it's possible to sync most systems within an I/O or three with the right setup if they're close enough).
Blades are also more common boot from SAN targets, especially if they are diskless. But they add additional complexity in booting from SAN due to the typical FC switch module in the blade chassis - that's just one more piece of the puzzle on which to maintain a configuration, especially if you have a FC fabric external to the blades.
My personal opinion: if you have disks in your blades, boot locally, but make sure you store all your VMs on the SAN.
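If you do go the boot-locally route, it's worth periodically checking that no VM has quietly ended up on a local datastore instead of the SAN. A minimal sketch, assuming a classic ESX service console where `vmware-cmd -l` lists the registered .vmx paths, and a hypothetical local datastore label of `local-storage` (adjust both to your environment):

```shell
#!/bin/sh
# Hypothetical local datastore volume label -- change to match your host.
LOCAL_DS="local-storage"

# classify_vmx: reads .vmx paths (one per line) on stdin and labels each
# as LOCAL (sitting on the local datastore) or SAN (anywhere else).
classify_vmx() {
  while read vmx; do
    case "$vmx" in
      *"/$LOCAL_DS/"*) echo "LOCAL: $vmx" ;;
      *)               echo "SAN: $vmx" ;;
    esac
  done
}

# On a classic ESX host, vmware-cmd -l prints the registered .vmx paths:
#   vmware-cmd -l | classify_vmx
```

Anything flagged LOCAL is a VM that would be lost with the host in a smoking-hole scenario, HA or not.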
As to whether it is worth the additional effort, you need to weigh out your requirements and the pros and cons of the solution.
Does boot from SAN introduce any single points of failure in your configuration?
What is the operation level of effort in maintaining the configuration?
What is the additional redundancy offered by the storage array worth to you and your environment?
What do you see boot from SAN offering to you that booting from local disk does not?
Will you be repurposing the local storage for anything, or will that be an idle storage asset?
Have you identified if there is a cost with idle storage assets such as unused local disk?
What cost (level of effort and monetary) does the added complexity of boot from SAN incur?
Does any additional cost provide value to your IT organization or the business as a whole?
and so on....
Hope that helps some,
We've booted our ESX hosts from SAN since 2004 and we've never had an issue with it. (We boot every non-virtual server from SAN)
We use 6 big servers (not blades), and I believe there's not much additional effort required to boot from SAN.
I personally trust the SAN way more than local storage.
Hi, thanks to all for your feedback and sharing your experiences.
I will be making a pros/cons list and letting our Operational teams make the call; from there I can look at the effort involved in changing our hardware menu card, automated host build, etc.
bobross wrote: "In addition, boot-from-SAN enables vMotion to be much more effective."
Bob, can you explain how BFS enables vMotion to be more effective?
Also, if anyone has a good white paper or tech doc on BFS configuration I'd be interested in reading.