VMware Cloud Community
Sparticus13
Contributor

vmkfstools --growfs returning Error: No such file or directory

Hi all,

I am trying to grow a datastore. The reason is that I have swapped drives for some larger ones. The drive in question is the one where ESXi was installed, as well as the local datastore. I started by shutting everything down, then placed the ESXi drive in my laptop along with my new larger-capacity replacement drive. I booted Parted Magic from a USB stick and cloned the disk over to the new one. After the clone I opened the partition manager in Parted Magic and had it auto-correct the GPT to use the new drive's capacity. Everything looked fine. After this I put the new drive in and booted the server back up. Everything started just fine. I then had to re-add the datastore, and after that I tested my VMs and everything was fine. ESXi was reporting the correct size for the new drive. I just needed to resize the datastore partition to use the unallocated space.

I followed this guide:

http://kb.vmware.com/selfservice/search.do?cmd=displayKC&externalId=2002461

Everything went fine up to the last step. I was able to use the partedUtil resize command with no problem. I checked afterwards and the end sector had been correctly changed for the datastore partition. I also verified the partition was at the end of the table with the free space after it.
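In case the exact syntax helps anyone, the resize step from the KB article (with the device path shortened to a placeholder here) looks roughly like this:

partedUtil resize "/vmfs/devices/disks/<device>" <partition number> <new start sector> <new end sector>

The start sector stays the same; only the end sector changes to cover the new space.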

When I try the last command, vmkfstools --growfs, it always comes back with "Error: No such file or directory". I have verified it is the correct name and that it is listed in /vmfs/devices/disks/.

Below is my exact command with the device name and partition number.

vmkfstools --growfs "/vmfs/devices/disks/t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:3" "/vmfs/devices/disks/t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:3"

I know the device name looks weird and really long, but that's really what it is. It worked just fine in all the prior commands.

Any ideas?

Thanks,

Chris

18 Replies
Sreejesh_D
Virtuoso

In case you haven't tried increasing it through the vSphere Client, give that a try. Since the underlying partition is already expanded, you should be able to grow the VMFS volume from the vSphere Client.

http://kb.vmware.com/kb/1017662

Sparticus13
Contributor

I tried that as well, both through vCenter Server and connected directly to the ESXi host with the vSphere Client. If I go to the properties of the datastore, the Increase button is greyed out. Next to it, the total capacity shows 106.75 GB, which is the current VMFS size but not the new drive or partition size. Below it there is the Extent box with one drive listed, the same drive, with a capacity of 218.70 GB. However, I can't click Increase.

Thanks,

Chris

a_p_
Leadership

Please provide the output of

ls /vmfs/devices/disks
as well as
partedUtil getptbl ...

to see how the partitions currently look.

For how to use partedUtil see http://kb.vmware.com/kb/1036609
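For reference, the full getptbl syntax looks like this (the device name below is just a placeholder):

partedUtil getptbl /vmfs/devices/disks/<device>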

André

Sparticus13
Contributor

Here are the outputs.


t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051819__
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051819__:1
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:1
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:2
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:3
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:5
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:6
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:7
t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__:8
vml.01000000004d4b4e31323130413030303030353138313920204d4b4e535344
vml.01000000004d4b4e31323130413030303030353138313920204d4b4e535344:1
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:1
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:2
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:3
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:5
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:6
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:7
vml.01000000004d4b4e31323130413030303030353138343420204d4b4e535344:8

The getptbl output below is for the t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__ disk.

gpt
29185 255 63 468862128
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 468862094 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

So to me the disk is showing up fine, and partition 3 is at the end and has been resized to use the last available sector. I used the getUsableSectors command to find the last usable sector when changing it.
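In case it helps, the command I used to find that last usable sector (device path shortened to a placeholder) was roughly:

partedUtil getUsableSectors /vmfs/devices/disks/<device>

It prints the first and last usable sectors, and I took the last one as the new end sector for the resize.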

Thanks,

Chris

a_p_
Leadership

I think the end sector is not correct. AFAIK it should be 468857024 (cylinders * heads * sectors - 1).
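(For the calculation, using the getptbl geometry line above: 29185 * 255 * 63 = 468857025 sectors, so the last sector by that math would be 468857024.)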

André

Sparticus13
Contributor

I will give it a try. According to the guide, the -1 calculation is correct, but when I ran the getUsableSectors command to be sure, it gave me that other value instead. I will try it and see what happens.

Chris

Sparticus13
Contributor

I tried that, but I get the following:

Error: Read-only file system during write on /dev/disks/t10.ATA_____MKNSSDCR240GB_________________________00_MKN1210A0000051844__

Not sure if this means I need to shut down the VMs on it or put the host into maintenance mode. If so, I will need to wait until later tonight to do that.

Sparticus13
Contributor

I ended up putting my original disks back in and trying again last night. I started the same way, by cloning the old disk to the new one and correcting the partition table for the correct size. This time I chose to resignature the datastores when I re-added them, as opposed to keeping the existing signature. I was able to follow the guide and resize and grow the datastore just fine. I then had to edit the .vmx files to correct the disk path UUIDs and was able to re-add the VMs.

So in the end I just needed to choose to resignature when adding the datastores.
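If anyone needs to do the resignature from the command line instead of the client, something along these lines should work (the volume label is just a placeholder):

esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot resignature -l <volume label>

The first command lists volumes detected as snapshots/replicas; the second resignatures the one you pick, and it typically shows up mounted again under a new snap-xxxxxxxx-<label> name.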

Thanks

Chris

cassioac
Contributor

Hi, I have the same problem.

What do you mean by "when you add your datastore"?

After I cloned the disk, the datastore was already there; I didn't need to add it back to the ESXi host.

What am I missing?

Thanks

cjr222
Contributor

I have the exact same issue. Does anyone have an update on this? If it is because the -1 for the end sector is wrong, then what is the right calculation, and if I already resized it, how do I fix it? When I tried the same resize command, I got the same read-only error.

Thanks in advance

cassioac
Contributor

You must resignature the datastore: unmount it and add the storage back with a new signature. You will also need to re-add the virtual machines.

VadimYakovlev
Contributor

I know this is a very old thread, but I hit the same problem in 2019 with ESXi 6.5. I increased a RAID array holding a datastore, then increased the partition with partedUtil, and got stuck with "vmkfstools --growfs" reporting "No such file or directory". I tried passing the device name (disk:partition) with and without quotes, tried different paths (/vmfs/devices/disks/... and /dev/disks/...), rebooted the host, unmounted and resignatured the datastore, all with no success. The GUI wasn't helpful either; it showed an empty list of devices when I tried to increase my single existing datastore extent. Then, finally, what worked was using a relative partition name, without the path!

cd /dev/disks

vmkfstools --growfs naa....:1 naa...:1

(My disk name starts with "naa...".) And it magically succeeded. No idea why; maybe vmkfstools has a bug in device path buffer allocation or parsing, or something else.

bizurkhate
Contributor

My local ATA disk is named "t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________" and I'm also getting the "Not found" error. VadimYakovlev, what should I change the name to, like you did with "naa....:1 naa...:1"?

VadimYakovlev
Contributor

You need to specify the partition name for the "vmkfstools --growfs" command, which has the form <disk name>:<partition number>. To view your disks and partitions, run "ls -l /dev/disks" and locate the correct partition. In my case the partition number was 1, so I had to append ":1" to the disk name; your value may differ.
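For example, with a hypothetical disk named naa.1234 and partition 1, it would look like this:

cd /dev/disks
vmkfstools --growfs naa.1234:1 naa.1234:1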

bizurkhate
Contributor

VadimYakovlev Thank you so much for responding.

Here's what I'm trying now.

[root@server:~] ls -l /dev/disks
total 1953525053
-rw-------    1 root     root     1000204886016 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________
-rw-------    1 root     root       4161536 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:1
-rw-------    1 root     root     4293918720 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:2
-rw-------    1 root     root     992282874368 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3
-rw-------    1 root     root     262127616 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:5
-rw-------    1 root     root     262127616 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:6
-rw-------    1 root     root     115326976 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:7
-rw-------    1 root     root     299876352 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:8
-rw-------    1 root     root     2684354560 Feb 11 01:21 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:9
lrwxrwxrwx    1 root     root            72 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:1 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:1
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:2 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:2
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:3 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:5 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:5
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:6 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:6
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:7 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:7
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:8 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:8
lrwxrwxrwx    1 root     root            74 Feb 11 01:21 vml.01000000003139313445314637423044322020202020202020435431303030:9 -> t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:9

Tried this:
[root@server:/dev/disks] vmkfstools --growfs vml.01000000003139313445314637423044322020202020202020435431303030:3 vml.01000000003139313445314637423044322020202020202020435431303030:3
Not found
Error: No such file or directory

Tried this:
[root@server:/dev/disks] vmkfstools --growfs "vml.01000000003139313445314637423044322020202020202020435431303030:3" "vml.01000000003139313445314637423044322020202020202020435431303030:3"
Not found
Error: No such file or directory

Tried this:
[root@server:/dev/disks] vmkfstools --growfs t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3
Not found
Error: No such file or directory

Then I also tried what I thought you were suggesting:
[root@server:/dev/disks] vmkfstools --growfs vml..:3 vml..:3
Device path name "vml..:3" is not a valid absolute or relative path
Failed to resolve volume device path vml..:3.
Error: No such file or directory

[root@server:/dev/disks] vmkfstools --growfs vml...:3 vml...:3
Device path name "vml...:3" is not a valid absolute or relative path
Failed to resolve volume device path vml...:3.
Error: No such file or directory

Any idea what I can try next? Thanks so much for your help; my server has been down for a week. :(

VadimYakovlev
Contributor

Using names starting with "vml" is not likely to succeed; those are just symlinks to the real device names, which vmkfstools probably needs.

Also, literally typing "vmkfstools --growfs vml..:3 vml..:3" will of course not work; in my first comment I was using an ellipsis just to abbreviate the lengthy device name.

In your case the correct command would be this one:

vmkfstools --growfs t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3 t10.ATA_____CT1000MX500SSD1_________________________1914E1F7B0D2________:3

But if you say that also doesn't work, then I'm afraid I don't know why and don't have any other ideas. :(

bizurkhate
Contributor

Now I'm getting this error (https://prnt.sc/s2wg40) when trying to unmount the volume to attempt to re-register it.

geneguan
Contributor

It's an old thread, but VadimYakovlev, you saved the world! Thank you!

vmkfstools --growfs naa.600508b1001c32fe9c31e57d64591887:3 naa.600508b1001c32fe9c31e57d64591887:3

This works for me after first changing into the folder: cd /dev/disks
