Using EFI/UEFI firmware in a VMware Virtual Machine

Version 2

    This document provides introductory information on the VMware virtual platform's support for EFI (or UEFI) firmware inside a virtual machine.  It may help you to decide whether or not to use EFI firmware when preparing to create a new virtual machine.

     

    What is EFI (or UEFI) firmware?

    EFI (Extensible Firmware Interface) is a specification for a new generation of system firmware. An implementation of EFI, stored in ROM or Flash RAM, provides the first instructions used by the CPU to initialize hardware and pass control to an operating system or bootloader. It is intended as an extensible successor to the PC BIOS, which has been extended and enhanced in a relatively unstructured way since its introduction. The EFI specification is portable, and implementations may be capable of running on platforms other than PCs.

     

    Originally called Extensible Firmware Interface (EFI), the more recent specification is known as Unified Extensible Firmware Interface (UEFI), and the two names are used interchangeably.  (I tend to use the name "EFI" for anything that was produced or defined before the name change, and "UEFI" for anything since the name change.)

     

    For more information, see the Wikipedia page for Unified Extensible Firmware Interface and the website for the UEFI Forum.

     

    Which VMware products support virtual EFI firmware?

    The following VMware products officially support running virtual machines with virtual EFI firmware:

    • Fusion 3.0 and newer, only when running OS X (Mac OS X) guests or Windows EFI Boot Camp installations.
    • ESXi 5.0 and newer.
    • Workstation 11.0 and newer.
    • Player 7.0 and newer.

    Other products or versions may be able to run EFI firmware, but this is not a tested or officially supported configuration.

     

    Do I need a host with EFI firmware?

    No.  The host's firmware is totally independent of the virtual machine's firmware, so BIOS hosts can run EFI virtual machines, and EFI hosts can run BIOS virtual machines.  (Note that virtual machines using physical disks, including Fusion's Boot Camp virtual machines, must use the firmware type that corresponds to the installed OS and the disk's partition scheme, which typically means that it must match the host's firmware.)
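To make the "partition scheme" part of that note concrete, here is a minimal sketch (the function name and the fake disks are my own, and it assumes 512-byte sectors) of how a tool might tell a GPT disk from an MBR disk by inspecting the disk's first two sectors:

```python
def detect_partition_scheme(first_sectors: bytes) -> str:
    """Classify a disk's partition scheme from its first two 512-byte sectors.

    GPT disks carry the ASCII signature "EFI PART" at the start of LBA 1;
    MBR disks end LBA 0 with the 0x55AA boot signature.  GPT disks also
    carry a protective MBR, so check for the GPT signature first.
    """
    if first_sectors[512:520] == b"EFI PART":
        return "GPT"
    if first_sectors[510:512] == b"\x55\xaa":
        return "MBR"
    return "unknown"

# Fake disks for illustration: a protective MBR followed by a GPT header,
# versus a plain MBR with no GPT header.
gpt_disk = bytearray(1024)
gpt_disk[510:512] = b"\x55\xaa"          # protective MBR boot signature
gpt_disk[512:520] = b"EFI PART"          # GPT header signature in LBA 1
mbr_disk = bytearray(1024)
mbr_disk[510:512] = b"\x55\xaa"

print(detect_partition_scheme(bytes(gpt_disk)))  # GPT
print(detect_partition_scheme(bytes(mbr_disk)))  # MBR
```

Running this against the first kilobyte of a real raw disk (or .vmdk flat extent) would tell you which firmware type the installed OS expects.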

     

    Why would I consider using virtual EFI firmware?

    • It's the way of the future.  Practically all physical system boards shipped since 2010 are UEFI, and most major OS releases now support [U]EFI to some degree.
    • Boot from >2TB volumes. EFI supports GPT (GUID Partition Table), enabling boot from disks (and partitions) greater than 2 TBytes.
    • UEFI supports a much more versatile pre-boot environment. For example, look at Full Disk Encryption (FileVault) in a recent Mac OS (10.7+) virtual machine: a graphical pre-OS authentication screen is presented by the OS bootloader, with animations and mouse support.
    • UEFI knows about filesystems. Read-write support for the FAT/FAT32 filesystems means that a bootloader can be reconfigured from within the UEFI environment – you can edit grub.conf using a built-in text editor even if the OS is unbootable!
    • Boot from USB.  Our legacy BIOS implementation has never supported USB boot (for long and complicated reasons). Our virtual EFI firmware does.
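As an illustration of that pre-boot filesystem access, a session in the UEFI Shell (where available) might look something like this; the fs0: mapping and the \EFI\redhat path are assumptions that depend entirely on your disk layout and distribution:

```
Shell> fs0:                        # switch to the mapped EFI System Partition
FS0:\> cd \EFI\redhat
FS0:\EFI\redhat\> edit grub.conf   # opens the Shell's built-in text editor
```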

     

    Why would I consider avoiding virtual EFI firmware and sticking with BIOS?

    • Higher memory consumption. When booting an EFI-aware operating system through EFI instead of BIOS, it's not unusual for a few extra megabytes of RAM to be reserved for firmware use. Virtual machines with very low memory sizes might not meet the memory needs of the EFI firmware itself!  We chose 96 MBytes as a reasonable lower limit which our EFI firmware will require.  BIOS, of course, would boot very comfortably in a 4 MByte virtual machine, which is the smallest allowed configuration for our virtual platform!
    • Less mature firmware implementation.  Let's be honest: The VMware EFI implementation still has some rough edges, particularly regarding the user interface for the firmware itself (i.e. the BIOS setup). We're working on it, but we're not there yet. We've focused heavily on making things as robust as we can, not pretty.
    • Less mature guest OS implementation.  A virtual machine is susceptible to defects in the guest OS that might render the guest unbootable or difficult to configure on EFI. OS vendors are not testing EFI boot as extensively as they are testing legacy boot, and OS implementations tend to initially be compatible with the systems owned by the authors of the OS's EFI support.  There are plenty of guest OSes which claim EFI support but have catastrophic defects in their EFI support which might cause a failure to boot the installer, a failure to complete the OS installation, a failure to boot into the installed OS, or failures at OS runtime.
    • New complicating factor: EFI supports two architectures, 32-bit IA32 and 64-bit X64. Our virtual EFI can run in either architecture, as controlled by the choice of guest OS. X64 is much more commonly supported by OSes, and is what most physical systems have today. The EFI architecture must match the architecture of the OS bootloader, which generally matches the architecture of the OS itself.
    • Less mature industry support as a whole.  Does your PXE server know how to handle EFI clients? Does your disk imaging tool know how to handle the GUID Partition Table format that EFI uses on-disk? Does your 3rd-party whole-disk encryption tool support EFI, or is it BIOS-only?
    • New way of handling boot ordering.  EFI uses a list of bootable OSes, stored in NVRAM (nonvolatile memory), to control its boot process. The capabilities and management techniques differ from BIOS virtual machines. Most importantly, your virtual machine's NVRAM becomes a critical part of your virtual machine, particularly for Linux virtual machines.  If your workflow isn't ready for that, it's a risk.
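To make that NVRAM boot list concrete, here is a sketch that parses the kind of output Linux's efibootmgr tool prints into the boot order and the entry labels. The sample text is illustrative, not captured from a real system, and the function name is my own:

```python
import re

def parse_boot_entries(text: str):
    """Parse efibootmgr-style output into (boot_order, {number: label}).

    Lines look like "BootOrder: 0001,0000" and "Boot0001* ubuntu";
    a trailing '*' marks an active entry.
    """
    order = []
    entries = {}
    for line in text.splitlines():
        m = re.match(r"BootOrder:\s*(.+)", line)
        if m:
            order = m.group(1).split(",")
            continue
        m = re.match(r"Boot([0-9A-Fa-f]{4})\*?\s+(.*)", line)
        if m:
            entries[m.group(1)] = m.group(2).strip()
    return order, entries

sample = """\
BootCurrent: 0001
BootOrder: 0001,0000
Boot0000* EFI Virtual disk (0.0)
Boot0001* ubuntu
"""
order, entries = parse_boot_entries(sample)
print(order)             # ['0001', '0000']
print(entries["0001"])   # ubuntu
```

The point to take away: entries like "ubuntu" above live in the virtual machine's NVRAM file, not on the virtual disk, which is why the NVRAM becomes a critical part of the virtual machine.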

     

    Which guest OSes will install on EFI firmware?

    • Windows: 64-bit EFI firmware supported with Vista (SP1 and newer), Windows Server 2008 and newer. 32-bit EFI support added to Windows 8 and newer.
    • Linux: Depending on the distribution, architecture, version and source medium. See below.
    • OS X: Since Mac OS X Server 10.5, subject to the usual constraints on host hardware and virtualization product: Fusion or ESXi on Apple hardware.  OS X requires EFI firmware.
    • Solaris: Since Oracle Solaris 11.1.
    • ESXi: Since vSphere 5.0.
    • FreeBSD: Not yet, although they're working on it.  FreeBSD 10 includes an EFI bootloader, but it contains a compatibility defect which causes it to fail on our platform.

     

    What about Linux?

    It's difficult to tell.  The Linux kernel has supported EFI for a long time, but it's up to each distribution to enable it and provide all the rest of the needed support.
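If you are unsure whether an already-running Linux guest actually booted through EFI, the kernel tells you: it exposes /sys/firmware/efi only when it was started via EFI. A trivial sketch (the function name is my own):

```python
import os

def linux_boot_mode() -> str:
    """Report whether the running Linux kernel was booted via EFI or BIOS.

    The kernel creates /sys/firmware/efi only when it booted through EFI
    runtime services; its absence means a legacy BIOS (or CSM) boot.
    """
    return "EFI" if os.path.isdir("/sys/firmware/efi") else "BIOS"

print(linux_boot_mode())
```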

     

    Let's start by going through some of the more common distributions:

    • Red Hat Enterprise Linux 6.0 and newer include EFI support in their 64-bit (x86_64) builds only.
    • Ubuntu 10.10 and newer include EFI support in their 64-bit (amd64) builds only.
    • Oracle Linux 6.0 and newer include EFI support in their 64-bit (x86_64) builds only.
    • Fedora 11 and newer have included EFI support in both the 32-bit (i386) and 64-bit (x86_64) builds, but:
      • the Fedora 12 x86_64 bootloader has a catastrophic defect and does not work on our platform;
      • prior to Fedora 15, the EFI bootloader was only on the netinst media; and
      • Fedora 15 and newer only support 64-bit EFI – support for EFI was dropped from the 32-bit (i386) builds.
    • CentOS 6.3 and newer include EFI support.  Earlier releases in the 6.x series include defective EFI support.
    • SuSE Linux Enterprise Server 11 SP1 was the first SuSE release to include functional EFI support, only in its 64-bit (x86_64) media.  The Desktop version followed in SLED 11 SP2, again only in the 64-bit (x86_64) media (SLED 11 SP1 tried, but its support was defective).
    • OpenSuSE 12.1 and newer include EFI support in their 64-bit (x86_64) builds only.  OpenSuSE 11.3 through 12.0 included defective EFI support.

     

    As you can see, it's all over the place.  This list is definitely not exhaustive, so distributions not on this list might still support EFI... The best way to find out is to try it and see.  (If you want an exhaustive list, feel free to volunteer to try them all.)

     

    Some of the older implementations in the above list are a bit rough around the edges and might behave in unexpected ways.

     

    What does a Linux distribution need in order to work with EFI?

    Successful installation of Linux guests under EFI generally depends on the distributor providing certain minimum requirements for an EFI-aware OS:

    • Media which contains an El Torito bootable image;
    • A well-formed EFI bootloader of the appropriate architecture (IA32 or X64) at the correct location inside that El Torito image, along with any support files it needs;
    • A kernel which is built with EFI support (CONFIG_EFI=y);
    • An installer with an awareness of EFI: it partitions the disk with GPT and creates an EFI System Partition, installs an EFI bootloader, and configures the boot order in EFI NVRAM (nonvolatile memory) to contain the OS bootloader.
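One of the "correct locations" mentioned above is the removable-media fallback path \EFI\BOOT\BOOTX64.EFI (or BOOTIA32.EFI), which firmware consults when NVRAM holds no boot entry for the medium. A minimal sketch (names are my own) that checks a staged or mounted EFI System Partition tree for it:

```python
import pathlib
import tempfile

def has_fallback_bootloader(esp_root: str, arch: str = "X64") -> bool:
    """Check an EFI System Partition tree (here just a directory, e.g. a
    mounted ESP) for a bootloader at the removable-media fallback path
    EFI/BOOT/BOOT<arch>.EFI."""
    name = {"X64": "BOOTX64.EFI", "IA32": "BOOTIA32.EFI"}[arch]
    return (pathlib.Path(esp_root) / "EFI" / "BOOT" / name).is_file()

# Demonstrate against a throwaway directory tree.
with tempfile.TemporaryDirectory() as demo:
    boot_dir = pathlib.Path(demo) / "EFI" / "BOOT"
    boot_dir.mkdir(parents=True)
    (boot_dir / "BOOTX64.EFI").touch()
    print(has_fallback_bootloader(demo))          # True
    print(has_fallback_bootloader(demo, "IA32"))  # False
```

(FAT is case-insensitive, so a real check against a mounted ESP would want to ignore case; this sketch keeps things simple.)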

     

    Availability of these components is somewhat unpredictable, and they are sometimes delivered with defects which affect their ability to operate correctly.

     

    Failure modes might include guest hangs or crashes before or during installation, failures to find or launch the installer, failures of the installer itself, failure to find the installed OS at the reboot at the end of installation, and – rarely – runtime issues or crashes in the installed OS.

     

    What about legacy OSes which are not EFI aware?

    You will need to configure the virtual machine to use BIOS.

     

    Many physical EFI systems include a Compatibility Support Module (CSM) to enable them to load operating systems which are not EFI-aware.  The CSM provides a BIOS-compatible interface that runs in conjunction with EFI, making the system behave just as it would with a regular BIOS and allowing older OSes to run normally.  A platform with a CSM is often referred to as a UEFI Class 2 platform.

     

    The use of a CSM is generally discouraged, and CSMs are expected to be phased out of physical EFI systems in the near future.

     

    The VMware virtual platform does not support the use of a Compatibility Support Module, which means that OSes that are not EFI-aware may only be booted by configuring the virtual machine to use BIOS.  When a virtual machine is configured to use virtual EFI firmware, it is a "pure UEFI" platform lacking a CSM, often referred to as a UEFI Class 3 platform.  (A small amount of compatibility code is needed to boot earlier versions of Windows atop our virtual EFI firmware; however, it is not a full Compatibility Support Module.)

     

    A consequence of this is that you cannot use virtual EFI firmware to perform USB boot of legacy guests.

     

    What other requirements and considerations are there for virtual machines with EFI firmware?

    Required virtual hardware:

    • At least 96 MBytes of RAM.
    • At least hardware version 7.  Hardware version 8 or newer is preferred due to an enhanced virtual nonvolatile RAM (NVRAM) device which is more robust against guest failures.

    Some virtual hardware is unsupported by EFI itself:

    • vmxnet2 virtual NIC: Will be ignored by EFI firmware.  EFI PXE boot through vmxnet2 is not possible; however, an EFI-booted guest may still use a vmxnet2 controller at runtime.
    • Parallel port: Will be initialized and then ignored by EFI firmware; however, an EFI-booted guest OS may still use a parallel port at runtime.
    • Sound devices: Our EFI implementation does not have the capability to use any sound device; however, an EFI-booted guest OS may still use a supported sound device at runtime.

    All other virtual hardware (e.g. storage controllers, network controllers, USB1/2/3 controllers, display adapters, processors & memory) may be used within EFI.

     

    Verify that interoperating products/solutions are compatible with EFI.  Of note for vSphere users: VMware Fault Tolerance (FT) is not presently compatible with EFI firmware.

     

    How do I start using EFI firmware?

    You must make this choice before installing the OS.

     

    On VMware Workstation, go into VM > Settings > Options > Advanced, and check Boot with EFI instead of BIOS.

    [Screenshot: WS enabling EFI.png]

     

    On VMware Fusion, EFI firmware is automatically selected for Mac OS guests.  You do not need to do anything.

     

    On ESXi using the vSphere Web Client, go into Edit Settings > VM Options > Boot Options, and choose the desired firmware under the Firmware section.

    [Screenshot: vSphere Web Client enabling EFI.png]

     

    On ESXi using the vSphere Client, go into Edit Settings > Options > Boot Options, and choose the desired firmware under the Firmware section.

    [Screenshot: vSphere enabling EFI.png]

     

    On any of our products with EFI support, you can also manually edit the virtual machine's configuration file to add the line

     

       firmware = "efi"

     

    to configure a virtual machine for EFI.  The user interfaces above do exactly that for you.  You can use this method if you want to play around with EFI in configurations that we don't officially support (such as Linux on EFI in Fusion 7), but of course things might very well break.
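If you script this, take care not to end up with two firmware lines in the file. A minimal sketch of the idea (the function name is my own, this is not a supported management API, and you should only edit the .vmx while the virtual machine is powered off):

```python
def set_firmware_efi(vmx_text: str) -> str:
    """Return .vmx file contents with firmware = "efi" set, replacing any
    existing firmware line rather than appending a duplicate."""
    lines = [line for line in vmx_text.splitlines()
             if not line.strip().lower().startswith("firmware")]
    lines.append('firmware = "efi"')
    return "\n".join(lines) + "\n"

original = 'memsize = "1024"\nfirmware = "bios"\n'
print(set_firmware_efi(original), end="")
# memsize = "1024"
# firmware = "efi"
```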

     

    What happens if I switch between virtual BIOS and EFI firmware after the OS is installed?

    Don't do that.

     

    During installation, the guest OS must decide whether to prepare the virtual disk with a partition scheme and bootloader suitable for BIOS or for EFI.  Changing the firmware between BIOS and EFI after installation will generally render the virtual machine unbootable.  Change the firmware back to its earlier configuration in order to restore the ability to boot the OS.

     

    You may need to temporarily change between BIOS and EFI if you want to use a partitioning tool, rescue disk, or similar tool which requires firmware different from the installed OS.  If you do this, again, change the firmware back to its earlier configuration when you're done, so as to restore the ability to boot the installed OS.

     

    What happens if I change the guest OS type after the OS is installed?

    Be careful if you do that.  The VMware virtual platform chooses the EFI architecture (IA32 or X64) according to the guest OS type, and the OS will generally work with and install only one bootloader architecture – either IA32 or X64 – matching the firmware architecture in use during installation.  If you attempt to boot an OS using the wrong firmware architecture, it will almost certainly fail to boot.  For example, if you installed Ubuntu 64-bit with the guest OS type accidentally set to "Other Linux 3.x kernel 64-bit", changing it to "Ubuntu 64-bit" is not a problem, because both types select X64 firmware.  If you instead change it to "Ubuntu" (32-bit), the OS will be unbootable until you switch it back to a 64-bit guest OS type.  And if you tried to install Ubuntu 64-bit with the guest OS type set to "Ubuntu" or "Other Linux 3.x kernel" (both 32-bit), the OS installer would have failed to boot in the first place... switch the guest OS type to "Ubuntu 64-bit" and then boot the virtual machine into the installer.
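A quick way to see which firmware architecture a given bootloader targets is to read its PE/COFF header, since EFI applications are PE/COFF binaries. A sketch (the names and the fake header bytes are my own) that you could point at, say, a grubx64.efi file:

```python
import struct

# IMAGE_FILE_MACHINE values for the two EFI architectures discussed above.
MACHINE_NAMES = {0x014C: "IA32", 0x8664: "X64"}

def efi_binary_arch(data: bytes) -> str:
    """Report the architecture of an EFI application (a PE/COFF binary).

    The DOS header's e_lfanew field (offset 0x3C) points at the "PE\\0\\0"
    signature; the 2-byte machine field follows immediately after it.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE/COFF binary")
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    machine = struct.unpack_from("<H", data, pe_off + 4)[0]
    return MACHINE_NAMES.get(machine, hex(machine))

# Fake minimal header for illustration: MZ stub, e_lfanew = 0x40,
# "PE\0\0" signature, machine = 0x8664 (X64).
hdr = bytearray(0x48)
hdr[:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)
hdr[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", hdr, 0x44, 0x8664)
print(efi_binary_arch(bytes(hdr)))  # X64
```

If the reported architecture doesn't match the firmware architecture implied by your guest OS type, that mismatch is the likely reason the guest won't boot.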

     

    Does the VMware virtual platform support UEFI Secure Boot?

    UEFI Secure Boot has been supported since vSphere 6.5 (for both ESXi physical hosts and virtual machines).

     

    <name of favorite OS> works with <some other EFI implementation>, why not in a VMware Virtual Machine?

    The UEFI specification allows quite a bit of latitude for vendors to extend the firmware, as well as to implement (or omit) certain optional parts of the UEFI specification.  Sometimes, OS vendors end up unintentionally depending on characteristics of their development and test systems that are not part of the UEFI specification, thus limiting the compatibility of their OS.

     

    The most common case of this we've seen has been OS vendors placing the EFI bootloader for the installation medium onto an ISO9660 or UDF filesystem instead of into an El Torito image.  The UEFI specification does not require the ability to read an ISO9660 or UDF filesystem, although some hardware and virtual platform vendors include drivers for those filesystems anyway.  Any OS which depends on the presence of an ISO9660 or UDF driver severely restricts the platforms on which it can run, and such OSes will not run in a VMware virtual machine.

     

    The next most common case we've seen has been OS vendors accidentally depending on legacy BIOS (or CSM) interfaces when booted on EFI firmware.  Such OSes will generally install successfully on some systems with a CSM or even on systems with an EFI emulation layer atop BIOS, but will fail on UEFI Class 3 platforms with no compatibility layer such as the VMware virtual EFI firmware.

     

    It could also simply be a defect in either the VMware virtual EFI firmware or a platform-specific defect in the OS itself... there's no shortage of ways in which to fail.  If in doubt, start a new discussion thread in the appropriate forum for your product or file a support request, as appropriate.

     

    <name of favorite OS> works with BIOS firmware, but not with EFI.  Why?

    It might be that the OS does not support EFI, that its EFI support is defective, or it could be that there is a defect in the VMware virtual EFI firmware.  Some of the discussion above might help you figure out which is the case.

     

    If in doubt, start a new discussion thread in the appropriate forum for your product or file a support request, as appropriate.