How to increase a datastore after expanding a RAID array (ESXi 4)

I was advised to create a new post and fully explain my problem, including hardware and software versions.

An HP DL380 G5 was set up with RAID 5 (3 disks of 146 GB), with a total capacity of 410 GBytes.

ESXi 4.0 was installed on the blank array, using the total capacity for the datastore. After installing 4 virtual machines, it was clear that the space was insufficient, so 3 more disks were added and the RAID 5 was extended to a total capacity of 820 GBytes.

When I tried to increase the datastore, I followed the instructions in the "ESXi Configuration Guide - ESXi 4.0", namely the section "Increase VMFS Datastore", which says the following:

"When you need to create new virtual machines on a datastore, or when the virtual machines running on this datastore require more space, you can dynamically increase the capacity of a VMFS datastore.

Use one of the following methods:

- Add a new extent.

- Grow an extent in an existing VMFS datastore."

As it turns out, I was unable to do either, even though ESXi seems to recognise the new capacity of the array (see attached images).


My understanding is that a new LUN should have been created and presented, and then added as an extent. This could have been done either by creating a new array on the new disks, or by expanding the existing RAID 5 volume to include the new disks, but without performing online capacity expansion on the existing LUN.

I think, unfortunately, you are now a bit stuck, unless someone knows otherwise.

Please award points to any useful answer.


Yes, now that I have added disks to my array I won't be able to take them away to create a new array. Also, I don't want to format the disk and start again.

I did find a solution for the ESX version, but I have ESXi 4.0 Installable, and in this case I don't know what to do.

I have access to the ESXi host through ssh, but I don't know whether the instructions in that article are applicable. For instance, even the output of fdisk -l is different:

    fdisk -l

    Disk /dev/disks/mpx.vmhba1:C0:T0:L0: 880.6 GB, 880691363840 bytes
    64 heads, 32 sectors/track, 839892 cylinders
    Units = cylinders of 2048 * 512 = 1048576 bytes

                               Device Boot  Start     End     Blocks  Id  System
    /dev/disks/mpx.vmhba1:C0:T0:L0p1           5     900     917504   5  Extended
    /dev/disks/mpx.vmhba1:C0:T0:L0p2         901    4995    4193280   6  FAT16
    /dev/disks/mpx.vmhba1:C0:T0:L0p3        4996  419943  424906752  fb  VMFS
    /dev/disks/mpx.vmhba1:C0:T0:L0p4   *       1       4       4080   4  FAT16 <32M
    /dev/disks/mpx.vmhba1:C0:T0:L0p5           5     254     255984   6  FAT16
    /dev/disks/mpx.vmhba1:C0:T0:L0p6         255     504     255984   6  FAT16
    /dev/disks/mpx.vmhba1:C0:T0:L0p7         505     614     112624  fc  VMKcore
    /dev/disks/mpx.vmhba1:C0:T0:L0p8         615     900     292848   6  FAT16

    Partition table entries are not in disk order
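A quick sanity check on those numbers (my own arithmetic, using the figures from the fdisk output above; fdisk's "Blocks" column is in 1 KiB units) suggests why nothing can be grown: the VMFS partition (p3) still only covers the original capacity, while the device itself now reports the expanded size.

```shell
# Figures copied from the fdisk output above.
disk_bytes=880691363840     # "880691363840 bytes" from the Disk line
p3_blocks=424906752         # Blocks column for ...:L0p3 (the VMFS partition)

echo "device size:     $((disk_bytes / 1024 / 1024 / 1024)) GiB"          # 820 GiB
echo "VMFS partition:  $((p3_blocks * 1024 / 1024 / 1024 / 1024)) GiB"    # 405 GiB
```

So partition 3 ends at cylinder 419943 of 839892: the extra space exists on the device but is outside the VMFS partition, which would explain why the grow operation finds nothing to use.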


Delete the FAT16 partition, then set up a new partition and give it the fb (VMFS) partition type. You cannot see the new datastore in your VI client if the partitions are not of the VMFS type.
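For reference, the procedure described in the ESX-classic write-ups amounts to rewriting the partition table so that the VMFS partition spans the enlarged disk, then growing VMFS into it. The sketch below is untested on ESXi 4.0 Installable: the device name comes from the fdisk output above, but the keystroke sequence and the vmkfstools grow syntax are assumptions, so check vmkfstools' built-in help on your build and have backups before touching the partition table.

```shell
# DANGER: illustration only -- do not run this blindly. Recreating partition 3
# with a different start cylinder would destroy the datastore.
DISK=/dev/disks/mpx.vmhba1:C0:T0:L0

# Assumed fdisk keystrokes: delete partition 3 (table entry only), recreate it
# starting at the SAME cylinder (4996 above) and accept the default last
# cylinder (end of the grown disk), restore type fb (VMFS), then write:
FDISK_KEYS="d 3  n p 3 4996 <Enter>  t 3 fb  w"
echo "$FDISK_KEYS"

# fdisk "$DISK"                               # enter the keystrokes above
# vmkfstools --growfs "$DISK:3" "$DISK:3"     # assumed grow syntax; verify first
```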
