VMware Cloud Community
jacotec
Enthusiast

Issue after resizing datastore from GUI

Hi,

I'm trying to solve the same problem and running into the same issue. Unfortunately, I haven't found a solution here, so I'd like to ask whether there has been any news about this particular problem.

One of the servers in my homelab was slowly running out of physical space, so I extended its virtual disk by 4 TB, which went fine after my H700 controller had been busy for the last four days. As the next step I extended the datastore via the GUI, but that only did half the job.

I ended up with the partition fully resized from 5.6 TB to 9.09 TB.

[Screenshot: esxi1.jpg]

But - same issue as the original poster (OP) - the capacity shown is still the old value of 5.45 TB:

[Screenshot: esxi2.jpg]

The "vmkfstools --growfs" command simply fails; I'm getting the same error as the OP. Partition 3 of the naa.xxxx device shows up at the right size:

[root@VMServer2:/dev/disks] ls -l

total 19540937520

-rw-------    1 root     root     7850688512 Dec 21 17:00 mpx.vmhba32:C0:T0:L0

-rw-------    1 root     root       4161536 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:1

-rw-------    1 root     root     262127616 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:5

-rw-------    1 root     root     262127616 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:6

-rw-------    1 root     root     115326976 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:7

-rw-------    1 root     root     299876352 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:8

-rw-------    1 root     root     2684354560 Dec 21 17:00 mpx.vmhba32:C0:T0:L0:9

-rw-------    1 root     root     9999220736000 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e

-rw-------    1 root     root       4161536 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:1

-rw-------    1 root     root     4293918720 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:2

-rw-------    1 root     root     9991298727424 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:3

-rw-------    1 root     root     262127616 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:5

-rw-------    1 root     root     262127616 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:6

-rw-------    1 root     root     115326976 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:7

-rw-------    1 root     root     299876352 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:8

-rw-------    1 root     root     2684354560 Dec 21 17:00 naa.6782bcb0349bdb00ff00005f05e6a18e:9

But the "growfs" command fails:

[root@VMServer2:/dev/disks] vmkfstools --growfs "naa.6782bcb0349bdb00ff00005f05e6a18e:3" "naa.6782bcb0349bdb00ff00005f05e6a18e:3"

Not found

Error: No such file or directory

I'm pretty sure I'm using the right syntax for the "vmkfstools" command - what am I doing wrong? And how can I fix the capacity of my datastore so that it uses the additional 3.6 TB that is already in the partition?

1 Solution

Accepted Solutions
jacotec
Enthusiast

Finally I made it! Unfortunately, it's quite a lot of work.

I read somewhere on the internet that someone had solved the same issue by resignaturing the datastore. As I had absolutely no chance to unmount the datastore in ESXi (even after unregistering all VMs and moving the scratch and log directories away from it, it always said "file system busy"), I took the hard path.

I inserted my backup SD card into the host and did a completely fresh ESXi install. Then I mounted the datastore with resignaturing. After that, vmkfstools --growfs worked fine on the partition and showed the new capacity of 9.09 TB.

Finally, I reconfigured the complete host (vSwitches, network setup, and so on; I did not dare to re-import my old config backup, as I feared the datastore connection would get messed up), re-registered my 21 VMs, recreated the autostart configuration, and refreshed all the VMs in Veeam.

All that took me three hours, but now it's running with the full datastore size.

So everyone with the same issue should take this path and resignature the datastore, with all the pain that comes with it.

Marco

BTW: Having the ESXi installation on its own drive definitely helped here. I can recommend to everyone to boot ESXi from an internal SD card or USB stick.
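For reference, the resignature route can also be driven from the ESXi shell via esxcli; a minimal sketch (the datastore label below is a placeholder, not a name from this thread):

```shell
# List VMFS volumes that ESXi considers snapshots/copies; the
# datastore must appear here before it can be resignatured.
esxcli storage vmfs snapshot list

# Mount the copy with a new signature. "datastore1" is a placeholder
# for your datastore's label. Note: this changes the datastore UUID,
# so all VMs on it must be re-registered afterwards.
esxcli storage vmfs snapshot resignature -l "datastore1"
```

These commands only work on an ESXi host with an unresolved (snapshot/copy) VMFS volume present, which is why unmounting or reinstalling first was necessary here.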


11 Replies
a_p_
Leadership

Did you try to run the command with the absolute path to the partitions?

vmkfstools --growfs "/vmfs/devices/disks/Device:partition" "/vmfs/devices/disks/Device:partition"

André

jacotec
Enthusiast

Hi André,

Yes, I did ... sorry, that quote was my last desperate attempt, running it from within the /vmfs/devices/disks directory in case the parameter strings were too long.

With the full paths it has the same error message.

[root@VMServer2:~] vmkfstools --growfs "/vmfs/devices/disks/naa.6782bcb0349bdb00ff00005f05e6a18e:3" "/vmfs/devices/disks/naa.6782bcb0349bdb00ff00005f05e6a18e:3"

Not found

Error: No such file or directory

a_p_
Leadership

Please post the output of

partedUtil getptbl "/vmfs/devices/disks/naa.6782bcb0349bdb00ff00005f05e6a18e"

André

jacotec
Enthusiast

[root@VMServer2:~] partedUtil getptbl "/vmfs/devices/disks/naa.6782bcb0349bdb00ff00005f05e6a18e"

gpt

1215669 255 63 19529728000

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

3 15472640 19529727966 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

[root@VMServer2:~]

a_p_
Leadership

The partitions look ok.

However, after another look at your initial post I see that the VMFS partition

mpx.vmhba32:C0:T0:L0:3

is missing in /dev/disks.

Did you already try to rescan the storage adapter/device, and/or reboot the host to see whether this helps?
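In case it helps, the rescan can also be triggered from an SSH session; a short sketch using standard ESXi commands:

```shell
# Rescan all storage adapters for new or resized devices
esxcli storage core adapter rescan --all

# Refresh/rescan VMFS volumes (older but still available command)
vmkfstools -V
```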

Just one other question: which version/build of ESXi and of the Embedded Host Client do you use (see "Help" -> "About")?

André

a_p_
Leadership

Branched from Can't grow VMFS Datastore to a new discussion.

jacotec
Enthusiast

Hi André,

The mpx.vmhba32:C0:T0:L0:3 is on the SD card I boot ESXi from - a completely different drive in this case.

As I still had the old ESXi 6.0.0 on the boot partition of the RAID disk (I switched to an independent boot medium (SD) this summer and also upgraded to ESXi 6.5), I booted into the old 6.0 this morning. In that case only the "naa" drive (RAID) is present, but vmkfstools --growfs fails with the same error message.

Current version is ESXi 6.5.0 U2 Build 10719125.

Marco

a_p_
Leadership

I'm not sure whether the issue could be related to the "dual-boot" configuration. "Not found" is not a common error message for the vmkfstools command, so if it's ok with you, I'd like you to try updating the host's HDD installation - without the SD card plugged into the host - to the same version/build that's installed on the SD card.

The steps below assume that you are using a Dell customized ESXi image (you mentioned the H700 controller) without any additional required drivers. Is that correct?

I also assume that you know that the H700 controller is not supported for ESXi 6.5, but since you've been running the host with that version, it seems to work.

If the assumptions are correct, you may follow these steps:

  • shut down the host, remove the SD card, and boot the host into the local HDD's version (v6.0)
  • upload the Dell customized v6.5U2 offline bundle (.zip file) to a folder on a datastore (likely VMware-VMvisor-Installer-6.5.0.update02-10719125.x86_64-DellEMC_Customized-A07.zip)
  • place the host into maintenance mode, enable SSH, and open a PuTTY session
  • determine the offline bundle's profile name: esxcli software sources profile list -d /vmfs/volumes/<shared-datastore>/<folder>/<zip-file>
  • do a dry-run upgrade: esxcli software profile install -d /vmfs/volumes/<shared-datastore>/<folder>/<zip-file> -p <profile name> --ok-to-remove --dry-run
  • if the result looks as expected, run the above command again without the --dry-run option
  • once the above command completes successfully, enter reboot to reboot the host
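Put together with hypothetical placeholder values (the datastore name, folder, and profile name below are illustrative, not confirmed by this thread), such a session might look like:

```shell
# 1) List the profiles contained in the offline bundle
esxcli software sources profile list \
  -d /vmfs/volumes/datastore1/upgrade/VMware-VMvisor-Installer-6.5.0.update02-10719125.x86_64-DellEMC_Customized-A07.zip

# 2) Dry-run the upgrade with the profile name reported by step 1
#    (the profile name shown here is a hypothetical example)
esxcli software profile install \
  -d /vmfs/volumes/datastore1/upgrade/VMware-VMvisor-Installer-6.5.0.update02-10719125.x86_64-DellEMC_Customized-A07.zip \
  -p DellEMC-ESXi-6.5.0-update02-10719125-A07 --ok-to-remove --dry-run

# 3) If the dry-run result looks good, repeat step 2 without --dry-run,
#    then reboot the host
```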

André


jaatith
Contributor

Out of all the countless threads and blog posts, this was what finally solved it for me. Our scenario: a server with local storage that had been expanded (a RAID 5 set widened by a few disks). After a few reboots, ESXi (6.0 U3) recognized that the underlying storage had grown, but I was unable to grow the VMFS; vmkfstools --growfs failed with either a "Read-only file system" or a "File Not Found" error. Based on your post, I shut down all VMs, unmounted the lone datastore (retried the growfs command at this point, to no avail), rebooted the host, remounted it with the option to generate a new signature, and voilà: the growfs command succeeded and the new size was finally there.

Post-work included re-registering all VMs and hand-editing one VM's .vmx file, which still had disks pointing to the old datastore UUID.
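Re-registering VMs after such a resignature can also be done from the ESXi shell; a one-liner sketch per VM (the datastore and VM names are placeholders):

```shell
# Register a VM from its .vmx file on the resignatured datastore;
# "datastore1" and "MyVM" are placeholder names.
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx
```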

maraouf
Contributor

Hello,

I think I'm running into the same situation. Can you please detail the steps and commands you used to solve this issue?
