VMware Cloud Community
BlueBolt
Contributor

ESXi5 and HP RAID controllers

Hi

We've just set up our first ESXi instance on an HP DL360 G7 with 4x HDDs configured as a single LD in RAID 10.  Installed ESXi and built 2 datastores (1, 2) in an approx 35/65 ratio.  Not being familiar with the ways of ESXi, I was unaware that ESXi was actually partitioning the array at controller level and had created 2 logical drives, which explains (in retrospect) the presence of 2 HP SAS controllers in the Devices section.

Yesterday, I became convinced that the datastore1/datastore2 proportions were imbalanced, so I deleted the (empty) datastore2 and increased the size of datastore1 by 200GB, believing I could use the freed space to create a new datastore2.  All of this was done in vSphere Client, btw. However, I was unable to create a new datastore using the free space, and it appears that ESXi extended datastore1 onto only a portion of the second controller's LD:

HP Serial Attached SCSI Disk (naa...) 315.12GB

HP Serial Attached SCSI Disk (naa...) 200.00GB

hpacucli confirms I have 2x logical drives.  This is not what I expected would happen and it's bugging me no end.  Is this the intended behaviour when ESXi creates datastores on hardware RAID?  I expected it would have created 1 LV per datastore on a single LD.  Oops.
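For anyone following along, this is roughly how the logical-drive layout can be confirmed with hpacucli from the ESXi shell (a sketch only; the controller slot and LD numbers below are assumptions and will differ per system):

```shell
# Show all controllers with their arrays and logical drives
hpacucli ctrl all show config

# List the logical drives on the Smart Array in slot 0 (slot is an assumption)
hpacucli ctrl slot=0 ld all show

# Details for a specific logical drive, e.g. the second LD
hpacucli ctrl slot=0 ld 2 show detail
```

The `show config` output makes it easy to see whether one or two LDs exist on the array and how much raw space each occupies.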

I'll shut down the server and have a look at the array via HP's Offline Array Configuration Utility and perhaps claw back the lost/free space as a 3rd LD, but I'd rather not put this server into production with a compromised configuration.

So, can I get back to my original 315GB datastore1 without losing the VMs on it?  If so, can I expand datastore1 (basically grow LD1) while avoiding what happened in the last "experiment"?

much thanks

- c sawyer

3 Replies
BlueBolt
Contributor

OK, after checking in the HP Offline Array utility, the original LDs created by ESXi for the 2 original datastores are still there with no free space.  However, ESXi is occupying only 200GB of a 520GB LD to extend datastore1 (originally 320GB).  Can a datastore that crosses extents (is that the right term?) be reduced/rescinded?
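A hedged way to see exactly which device extents back the datastore is vmkfstools from the ESXi shell (the datastore name below is this thread's example; substitute your own):

```shell
# Print filesystem attributes for the VMFS volume, including its extent list
vmkfstools -Ph /vmfs/volumes/datastore1
# The "Partitions spanned (on 'lvm'):" section lists each backing device
# partition, which shows whether datastore1 now spans both logical drives.
```

This only reports the layout; it does not modify the datastore.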

ngarjuna
Enthusiast

Make sure the host meets the minimum hardware configurations supported by ESXi 5.0.

To install and use ESXi 5.0, your hardware and system resources must meet the following requirements:

Supported server platform. For a list of supported platforms, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.

ESXi 5.0 will install and run only on servers with 64-bit x86 CPUs.

ESXi 5.0 requires a host machine with at least two cores.

ESXi 5.0 supports only LAHF and SAHF CPU instructions.

ESXi supports a broad range of x64 multicore processors. For a complete list of supported processors, see the VMware compatibility guide at http://www.vmware.com/resources/compatibility.

ESXi requires a minimum of 2GB of physical RAM. VMware recommends 8GB of RAM to take full advantage of ESXi features and run virtual machines in typical production environments.

To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.

One or more Gigabit or 10Gb Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.

Any combination of one or more of the following controllers:

Basic SCSI controllers. Adaptec Ultra-160 or Ultra-320, LSI Logic Fusion-MPT, or most NCR/Symbios SCSI.

RAID controllers. Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array RAID, or IBM (Adaptec) ServeRAID controllers.

SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the virtual machines.

For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks will be considered remote, not local. These disks will not be used as a scratch partition by default because they are seen as remote.

Note

You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 5.0 host. To use the SATA CD-ROM device, you must use IDE emulation mode.

ESXi 5.0 supports installing on and booting from the following storage systems:

SATA disk drives. SATA disk drives connected behind supported SAS controllers or supported on-board SATA controllers.

Supported SAS controllers include:

LSI1068E (LSISAS3442E)

LSI1068 (SAS 5)

IBM ServeRAID 8K SAS controller

Smart Array P400/256 controller

Dell PERC 5.0.1 controller

Supported on-board SATA include:

Intel ICH9

NVIDIA MCP55

ServerWorks HT1000

Note

ESXi does not support using local, internal SATA drives on the host server to create VMFS datastores that are shared across multiple ESXi hosts.

Serial Attached SCSI (SAS) disk drives. Supported for installing ESXi 5.0 and for storing virtual machines on VMFS partitions.

Dedicated SAN disk on Fibre Channel or iSCSI

USB devices. Supported for installing ESXi 5.0. For a list of supported USB devices, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.

vSphere 5.0 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media. Network booting or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.

ESXi can boot from a disk larger than 2TB provided that the system firmware and the firmware on any add-in card that you are using support it. See the vendor documentation.

Note

Changing the boot type from legacy BIOS to UEFI after you install ESXi 5.0 might cause the host to fail to boot. In this case, the host displays an error message similar to: Not a VMware boot bank. Changing the host boot type between legacy BIOS and UEFI is not supported after you install ESXi 5.0.

Installing ESXi 5.0 requires a boot device that is a minimum of 1GB in size. When booting from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer will attempt to allocate a scratch region on a separate local disk. If a local disk cannot be found, the scratch partition, /scratch, will be located on the ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.

To reconfigure /scratch, see Set the Scratch Partition from the vSphere Client.
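From the ESXi shell, the same advanced option can also be set directly (a sketch; the datastore path and .locker directory name below are assumptions, not defaults):

```shell
# Create a persistent scratch directory on a local VMFS datastore
mkdir -p /vmfs/volumes/datastore1/.locker

# Point the ScratchConfig.ConfiguredScratchLocation advanced option at it
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
    string /vmfs/volumes/datastore1/.locker

# A host reboot is required before the new scratch location takes effect
```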

Due to the I/O sensitivity of USB and SD devices the installer does not create a scratch partition on these devices. As such, there is no tangible benefit to using large USB/SD devices as ESXi uses only the first 1GB. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.

In Auto Deploy installations, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.

For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines.

BlueBolt
Contributor

Yes, but the HP DL360 G7 and its RAID hardware and disks satisfy all of those requirements.  The issue is not the spec of the system - it's how ESXi manages datastores.
