VMware Cloud Community
ElGarufo
Enthusiast

Can't expand VMFS datastore (after extending the LUN)

Hello people,

I have a big problem. I extended a LUN on my array from 300 to 500 GB. My ESX host then saw it perfectly, as 500 GB (vmhba1:2:8).

However, when I try to expand the VMFS datastore, it displays the following error:

Error during the configuration of the host: Failed to update disk partition information

Any suggestions?

1 Solution

Accepted Solutions
a_p_
Leadership

From reading the KB article Billiam posted, it could be a locking issue, which could probably be worked around by shutting down all VMs located on this datastore. However - as I mentioned before - this is not a supported configuration, and I wouldn't want to use it in a production environment. I know it's a lot of work and downtime in ESX 3.5, but my recommendation is to create a new LUN and migrate the VMs.

André
7 Replies
a_p_
Leadership

Even though you can add extents to an existing datastore, it is not supported to grow a LUN and then add another partition on that same LUN as an extent.

If you need to add extents in ESX (in ESX 4 you will be able to grow the datastore directly), the supported way is to add another LUN as an extent.

I think what happened in your case is that either the datastore has already been extended this way before (3 times, to be precise), or you are on local disks where other partitions already exist. Each new extent on this LUN is created as a primary partition, and you can only have 4 primary partitions on one disk/LUN (this is not ESX specific; it's the classic limit of MBR disk partitioning).

If this is the case, the 200 GB are space that you cannot use anymore. I suggest - assuming you have enough free disk space - that you create a new 500 GB LUN, add it as a new datastore, and migrate the VMs.
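The 4-primary-partition scenario above can be sketched with some plain Python arithmetic (the partition sizes below are made up for illustration, and `stranded_space_gb` is a hypothetical helper, not an ESX or fdisk tool):

```python
# Classic MBR partition tables hold at most 4 primary partitions;
# each extent carved out of the same LUN uses up one slot.
MAX_PRIMARY = 4  # MBR limit, not ESX-specific

def stranded_space_gb(lun_size_gb, primary_partition_sizes_gb):
    """Space that can no longer be partitioned once all 4 slots are used."""
    if len(primary_partition_sizes_gb) < MAX_PRIMARY:
        return 0  # a slot is still free; the tail could become another extent
    return lun_size_gb - sum(primary_partition_sizes_gb)

# A LUN grown from 300 to 500 GB whose table already holds 4 primary entries:
print(stranded_space_gb(500, [100, 100, 50, 50]))  # -> 200 GB unusable
```

In that situation the grown 200 GB would be stranded exactly like this: the space exists on the LUN, but there is no partition-table slot left to put it in.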

André

ElGarufo
Enthusiast

Ok,

I didn't try the article you mentioned, because I understood it is only for creating new partitions, not for extending one that already exists.

Yes, the VMFS datastore was extended before, but only one more time (it was created, and then extended once).

Any ideas?

Thanks in advance, guys

a_p_
Leadership

Can you please run "fdisk -l" (as the root user) to list the current partition layout, and post the result?

André

ElGarufo
Enthusiast

Ok, here is the real data (the LUN was initially 300 GB, then grown by 50 GB to a total of 350 GB, and finally extended to 550 GB - this last extension is the one that won't extend the datastore):

# fdisk -l

Disk /dev/sda: 107.3 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 1 13054 104856191 fb Unknown

Disk /dev/sdb: 64.4 GB, 64424509440 bytes

255 heads, 63 sectors/track, 7832 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 1 7832 62910476 fb Unknown

Disk /dev/sdc: 161.0 GB, 161061273600 bytes

255 heads, 63 sectors/track, 19581 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdc1 1 19581 157284318+ 7 HPFS/NTFS

Disk /dev/sdd: 590.5 GB, 590558003200 bytes

255 heads, 63 sectors/track, 71797 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdd1 1 45689 366996828+ fb Unknown

/dev/sdd2 45690 52216 52428092 fb Unknown

Disk /dev/cciss/c0d0: 73.3 GB, 73372631040 bytes

255 heads, 63 sectors/track, 8920 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/cciss/c0d0p1 * 1 13 104391 83 Linux

/dev/cciss/c0d0p2 14 1925 15358140 83 Linux

/dev/cciss/c0d0p3 1926 1990 522112+ 82 Linux swap

/dev/cciss/c0d0p4 1991 8920 55665225 f Win95 Ext'd (LBA)

/dev/cciss/c0d0p5 1991 2627 5116671 83 Linux

/dev/cciss/c0d0p6 2628 2640 104391 fc Unknown

/dev/cciss/c0d0p7 2641 8920 50444012 fb Unknown
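As a cross-check, the /dev/sdd figures in the listing above can be verified with fdisk's own cylinder geometry (plain Python; the constants are copied from the output, nothing ESX-specific):

```python
# Geometry from the fdisk output: 255 heads * 63 sectors * 512 bytes/sector
BYTES_PER_CYL = 255 * 63 * 512        # = 8225280, matching the "Units =" line
TOTAL_CYL = 71797                     # whole /dev/sdd LUN
SDD1_END, SDD2_END = 45689, 52216     # last cylinder of sdd1 and sdd2

def cyl_to_gib(cylinders):
    """Convert a cylinder count to GiB."""
    return cylinders * BYTES_PER_CYL / 1024**3

print(round(cyl_to_gib(TOTAL_CYL)))             # 550 -> whole LUN
print(round(cyl_to_gib(SDD1_END)))              # 350 -> sdd1 (300 GB + 50 GB grow)
print(round(cyl_to_gib(SDD2_END - SDD1_END)))   # 50  -> sdd2 extent
print(round(cyl_to_gib(TOTAL_CYL - SDD2_END)))  # 150 -> unpartitioned tail
```

Assuming the listing is accurate, the roughly 150 GB tail is the newly grown space that the datastore could not absorb - it sits after sdd2 with no partition on it.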

Any suggestions?

Billiam
Contributor

I found this topic, which seems to be the same thing you are trying to do:

http://communities.vmware.com/thread/153480

If possible, you could just create a new 500 GB LUN, storage-migrate the VM(s) to it, and then destroy the other LUN with its extents.