Hi,
I have a visor (ESXi) host which currently has 290 MB, with only 5 MB left. I would like to increase the size of this partition.
Do we have a mechanism to increase it without bringing down the OS? I'm looking for something along the lines of gpart.
Can anyone help me with this?
Thanks in advance,
Sudks
OS X?
Depending on the version of ESXi you purchased, you may have the option to hot-add and simply mount a drive. Otherwise you don't have many options other than shutting down and expanding the disk.
Thanks for the reply.
I already have free space on the storage; I just want to expand the visorfs partition size.
I wanted to know if this can be done.
/var/log # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 286.9M 281.0M 5.9M 98% /
vfat 4.0G 2.8M 4.0G 0% /vmfs/volumes/4a6a0658-26f213e5-05c7-00237de98c42
vmfs3 199.8G 11.0G 188.8G 5% /vmfs/volumes/4a619095-a856c8fb-101d-0024817eae77
vfat 249.7M 79.8M 169.9M 32% /vmfs/volumes/0706e67b-d88f9e8c-ec8f-66d1221b87b9
vfat 249.7M 78.6M 171.1M 31% /vmfs/volumes/5ab78476-4a4264c7-61ef-59d4efcf3894
vfat 285.9M 242.4M 43.5M 85% /vmfs/volumes/57f270e1-bfede6de-b762-677b93f10117
vmfs3 227.8G 562.0M 227.2G 0% /vmfs/volumes/4a6a0673-999334ea-fde4-00237de98c42
/var/log #
I don't believe it is possible to increase ramdisk size. Are you experiencing problems?
Yup. When I try to modify the interface configuration (adding a new vmknic, etc.), there isn't enough space to write even in /etc, so I want a mechanism to increase this. Even though I have 250 GB of storage, my PXE boot configured only 250 MB for visorfs. Do you know how I can modify the visorfs size in the PXE configuration to make it bigger? Is this part of a configuration file? I haven't tried this before.
You might free up some space by cleaning up /tmp or /var/log.
Dave
VMware Communities User Moderator
New book in town - vSphere Quick Start Guide -http://www.yellow-bricks.com/2009/08/12/new-book-in-town-vsphere-quick-start-guide/.
Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL - http://www.vm-help.com/forum/viewforum.php?f=21.
As Dave said, you can clean up some log files. You can change the location of the log files from within the vSphere 4 client: Configuration / Software / Advanced / Syslog / Local.
visorfs is a ramdisk, not hard disk space. Ramdisk space is usually set up early in the boot process.
I should add that since it is / (root), you don't want to mess with it, since all other partitions are mounted under it.
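To act on the cleanup suggestions above, here is a minimal busybox-compatible sketch that lists the largest files under /tmp and /var/log so you can decide what is safe to delete (the directory list and the 10-file limit are arbitrary choices; review each file before removing anything on a live host):

```shell
#!/bin/sh
# Sketch: print the ten largest files under the given directories,
# biggest first, as "<bytes> <path>". Defaults are assumptions.
DIRS="/tmp /var/log"
find $DIRS -type f -exec ls -l {} \; 2>/dev/null \
  | sort -k5 -rn \
  | head -10 \
  | awk '{ print $5 " bytes  " $NF }'
```

Anything this surfaces (old logs, core dumps, leftover staging files) is a candidate, but nothing here deletes automatically by design.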
I'm facing a similar situation. After applying a patch using vSphere Host Update Utility and rebooting the server, esxupdate stopped working with the following error:
~ # esxupdate check
Encountered error: VisorSetupError
Error message: There was an error setting up ESXi installation destination
Unable to mount visorfs. Mount returned error (255). Please see esxupdate.log for more details.
~ #
The log file (esxupdate.log) contains this:
DEBUG: lock: Lock file /var/run/esxupdate.pid created with PID 11317
INFO: _visor: Mounting visorfs 1/750 MB on /tmp/stage
ERROR: _visor: mount returned nonzero status 255
Command:
mount -t visorfs -o 1,750,01777,updatestg updatestg /tmp/stage
Output:
mount: mounting updatestg on /tmp/stage failed: No space left on device
df shows:
~ # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 180.7M 37.6M 83% /
vmfs3 460.8G 442.8G 17.9G 96% /vmfs/volumes/4ac7a9f8-7a60d538-50eb-001cc0b96dd2
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/0ae02d7f-7ff674e8-ca48-3ad4cd18b555
vfat 4.0G 108.4M 3.9G 3% /vmfs/volumes/4ac7a789-f4515280-7f27-001cc0b96dd2
vfat 249.7M 59.3M 190.4M 24% /vmfs/volumes/c5e95871-832a9a6c-690e-1985e51b5add
vfat 285.9M 242.8M 43.1M 85% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
vmfs3 232.8G 232.5G 206.0M 100% /vmfs/volumes/4b5e2115-04d71ef8-9b67-001cc0b96dd2
~ #
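Note that the second vmfs3 volume above is at 100%. As a quick triage aid, here is a generic sketch that reads df-style output on stdin and flags any filesystem at or above a usage threshold (the 90% default is an arbitrary choice):

```shell
#!/bin/sh
# Sketch: read `df -h` style output on stdin and print any mount
# whose Use% (field 5) meets or exceeds the threshold (default 90).
THRESH="${1:-90}"
awk -v t="$THRESH" 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 >= t) print $6 " is " $5 "% full" }'
```

Usage would be something like `df -h | sh flag_full.sh 95`.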
I already tried deleting logs (/var/log) and tmp files, but had no luck.
Does anyone have a clue?
Cheers
Which patch did you apply that filled the partition?
I got the same issue with the build 219382 patch.
I couldn't even scan the host for updates until I did some cleanup in the filesystem.
Hey RS, thanks for answering.
Actually, I don't know which patch was installed. I just executed Host Update Utility and tried applying all patches at once. But according to esxupdate.log, the last successfully installed patch was "deb_vmware-esx-firmware_4.0.0-1.10.219382".
Regarding space, that's strange: why would I need to allocate space on a partition when I'm only trying to mount another filesystem on one of its directories? You shouldn't need available space for that. That is exactly the purpose of mount, right?
Anyway, I really don't know what else could be cleaned up on "/". Do you have any suggestions?
This is the command that is failing before the updates/scan:
~ # mount -t visorfs -o 1,750,01777,updatestg updatestg /tmp/stage
mount: mounting updatestg on /tmp/stage failed: No space left on device
~ #
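As a side note on that failing command: the comma-separated visorfs mount options appear to be minimum size (MB), maximum size (MB), permissions, and ramdisk name. This reading is inferred from the "Mounting visorfs 1/750 MB" line in esxupdate.log, not from documentation. Annotated:

```shell
# ESXi-only; field meanings below are an assumption, not documented behavior:
#   1         - minimum ramdisk size, MB
#   750       - maximum ramdisk size, MB
#   01777     - permissions (octal; world-writable with sticky bit)
#   updatestg - ramdisk name
mount -t visorfs -o 1,750,01777,updatestg updatestg /tmp/stage
```

Since visorfs is RAM-backed, "No space left on device" here would point at free host memory rather than datastore space, which matches Christoph's note further down about needing 750 MB of memory available.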
What does fdisk -ul return?
Returns this:
~ # fdisk -ul
Disk /dev/disks/t10.ATA_____ST3250318AS_________________________________________9VM11TGA: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/disks/t10.ATA_____ST3250318AS_________________________________________9VM11TGAp1 128 488392064 244195968+ fb VMFS
Disk /dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______: 500.1 GB, 500107862016 bytes
64 heads, 32 sectors/track, 476940 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p1 8192 1843199 917504 5 Extended
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p2 1843200 10229759 4193280 6 FAT16
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p3 10229760 976773119 483271680 fb VMFS
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p4 * 32 8191 4080 4 FAT16 <32M
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p5 8224 520191 255984 6 FAT16
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p6 520224 1032191 255984 6 FAT16
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p7 1032224 1257471 112624 fc VMKcore
/dev/disks/t10.ATA_____SAMSUNG_HD502HI_________________________S1ZVJ50S900128______p8 1257504 1843199 292848 6 FAT16
Partition table entries are not in disk order
~ #
try to free up /vmfs/volumes/57f270e1-bfede6de-b762-677b93f10117
RS, thanks, but that volume does not seem to exist in my system.
/vmfs/volumes # ls -l
drwxr-xr-x 1 root root 8 Jan 1 1970 0ae02d7f-7ff674e8-ca48-3ad4cd18b555
drwxr-xr-x 1 root root 8 Jan 1 1970 4ac7a789-f4515280-7f27-001cc0b96dd2
drwxr-xr-t 1 root root 1820 Feb 1 18:26 4ac7a9f8-7a60d538-50eb-001cc0b96dd2
drwxr-xr-t 1 root root 1120 Jan 25 23:24 4b5e2115-04d71ef8-9b67-001cc0b96dd2
l--------- 0 root root 1984 Jan 1 1970 Hypervisor1 -> c5e95871-832a9a6c-690e-1985e51b5add
l--------- 0 root root 1984 Jan 1 1970 Hypervisor2 -> 0ae02d7f-7ff674e8-ca48-3ad4cd18b555
l--------- 0 root root 1984 Jan 1 1970 Hypervisor3 -> c2a427e4-2d317086-fef9-b5750d88536c
drwxr-xr-x 1 root root 8 Jan 1 1970 c2a427e4-2d317086-fef9-b5750d88536c
drwxr-xr-x 1 root root 8 Jan 1 1970 c5e95871-832a9a6c-690e-1985e51b5add
l--------- 0 root root 1984 Jan 1 1970 datastore1 -> 4ac7a9f8-7a60d538-50eb-001cc0b96dd2
l--------- 0 root root 1984 Jan 1 1970 datastore2 -> 4b5e2115-04d71ef8-9b67-001cc0b96dd2
/vmfs/volumes # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.8M 35.4M 84% /
vmfs3 232.8G 232.5G 206.0M 100% /vmfs/volumes/4b5e2115-04d71ef8-9b67-001cc0b96dd2
vfat 285.9M 242.8M 43.1M 85% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/0ae02d7f-7ff674e8-ca48-3ad4cd18b555
vfat 4.0G 108.5M 3.9G 3% /vmfs/volumes/4ac7a789-f4515280-7f27-001cc0b96dd2
vfat 249.7M 59.3M 190.4M 24% /vmfs/volumes/c5e95871-832a9a6c-690e-1985e51b5add
vmfs3 460.8G 386.0G 74.7G 84% /vmfs/volumes/4ac7a9f8-7a60d538-50eb-001cc0b96dd2
/vmfs/volumes #
Sorry, I meant /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
RS, thanks. This is the whole volume content. Is it ok to delete a few of these files? It doesn't look right...
/vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c # ls -lR
.:
drwxr-xr-x 1 root root 8 Jan 1 1970 conf
drwxr-xr-x 1 root root 8 Jan 1 1970 db
drwxr-xr-x 1 root root 8 Jan 1 1970 etc
drwxr-xr-x 1 root root 8 Jan 1 1970 opt
drwxr-xr-x 1 root root 8 Jan 1 1970 packages
drwxr-xr-x 1 root root 8 Jan 1 1970 var
drwxr-xr-x 1 root root 8 Jan 1 1970 vmupgrade
./conf:
./db:
-rwx------ 1 root root 138602 Oct 3 19:35 esxupdate.log
./etc:
drwxr-xr-x 1 root root 8 Jan 1 1970 opt
./etc/opt:
./opt:
./packages:
drwxr-xr-x 1 root root 8 Jan 1 1970 4.0.0
drwxr-xr-x 1 root root 8 Jan 1 1970 usr
./packages/4.0.0:
drwxr-xr-x 1 root root 8 Jan 1 1970 client
drwxr-xr-x 1 root root 8 Jan 1 1970 floppies
drwxr-xr-x 1 root root 8 Jan 1 1970 tools-upgraders
drwxr-xr-x 1 root root 8 Jan 1 1970 vmtools
./packages/4.0.0/client:
-rwx------ 1 root root 116606862 Feb 1 16:32 VMware-viclient.exe
./packages/4.0.0/floppies:
-rwx------ 1 root root 1474560 Feb 1 16:32 pvscsi-1.0.0.5-signed-Windows2003.flp
-rwx------ 1 root root 1474560 Feb 1 16:32 pvscsi-1.0.0.5-signed-Windows2008.flp
-rwx------ 1 root root 1474560 Feb 1 16:32 vmscsi-1.2.1.0-signed.flp
./packages/4.0.0/tools-upgraders:
-rwx------ 1 root root 458752 Feb 1 16:32 VMwareToolsUpgrader.exe
-rwx------ 1 root root 189440 Feb 1 16:32 VMwareToolsUpgrader9x.exe
-rwx------ 1 root root 190976 Feb 1 16:32 VMwareToolsUpgraderNT.exe
-rwx------ 1 root root 1596 Feb 1 16:32 run_upgrader.sh
-rwx------ 1 root root 543360 Feb 1 16:32 vmware-tools-upgrader-32
-rwx------ 1 root root 624474 Feb 1 16:32 vmware-tools-upgrader-64
./packages/4.0.0/vmtools:
-rwx------ 1 root root 11259904 Feb 1 16:32 freebsd.iso
-rwx------ 1 root root 256 Feb 1 16:32 freebsd.iso.sig
-rwx------ 1 root root 51427328 Feb 1 16:32 linux.iso
-rwx------ 1 root root 256 Feb 1 16:32 linux.iso.sig
-rwx------ 1 root root 620544 Feb 1 16:32 netware.iso
-rwx------ 1 root root 256 Feb 1 16:32 netware.iso.sig
-rwx------ 1 root root 8151040 Feb 1 16:32 solaris.iso
-rwx------ 1 root root 256 Feb 1 16:32 solaris.iso.sig
-rwx------ 1 root root 451 Feb 1 16:32 tools-key.pub
-rwx------ 1 root root 13467648 Feb 1 16:32 winPre2k.iso
-rwx------ 1 root root 256 Feb 1 16:32 winPre2k.iso.sig
-rwx------ 1 root root 45744128 Feb 1 16:32 windows.iso
-rwx------ 1 root root 256 Feb 1 16:32 windows.iso.sig
./packages/usr:
drwxr-xr-x 1 root root 8 Jan 1 1970 lib
./packages/usr/lib:
drwxr-xr-x 1 root root 8 Jan 1 1970 ipkg
./packages/usr/lib/ipkg:
drwxr-xr-x 1 root root 8 Jan 1 1970 info
-rwx------ 1 root root 355 Feb 1 16:32 status
./packages/usr/lib/ipkg/info:
-rwx------ 1 root root 345 Feb 1 16:32 vmware-esx-tools-light.control
-rwx------ 1 root root 1157 Feb 1 16:32 vmware-esx-tools-light.list
-rwx------ 1 root root 194 Feb 1 16:32 vmware-esx-viclient.control
-rwx------ 1 root root 50 Feb 1 16:32 vmware-esx-viclient.list
./var:
drwxr-xr-x 1 root root 8 Jan 1 1970 core
drwxr-xr-x 1 root root 8 Jan 1 1970 opt
./var/core:
drwxr-xr-x 1 root root 8 Jan 1 1970 old_cores
./var/core/old_cores:
./var/opt:
./vmupgrade:
/vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c #
Which files would you suggest to be deleted?
I tried deleting VMware-viclient.exe (it worked for me).
~ # esxupdate check
Encountered error: VisorSetupError
Error message: There was an error setting up ESXi installation destination
Unable to mount visorfs. Mount returned error (255). Please see esxupdate.log for more details.
~ #
Same error, even after doing some serious cleaning on that filesystem.
~ # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.4M 35.9M 84% /
vmfs3 232.8G 232.5G 206.0M 100% /vmfs/volumes/4b5e2115-04d71ef8-9b67-001cc0b96dd2
vfat 285.9M 123.2M 162.7M 43% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/0ae02d7f-7ff674e8-ca48-3ad4cd18b555
vfat 4.0G 108.5M 3.9G 3% /vmfs/volumes/4ac7a789-f4515280-7f27-001cc0b96dd2
vfat 249.7M 59.3M 190.4M 24% /vmfs/volumes/c5e95871-832a9a6c-690e-1985e51b5add
vmfs3 460.8G 386.2G 74.6G 84% /vmfs/volumes/4ac7a9f8-7a60d538-50eb-001cc0b96dd2
~ #
Please check how much memory capacity is left on your host. You can check this by connecting the VI Client to your host and opening the Resource Allocation tab. You should have 750 MB available so that the upgrade can succeed.
To increase the memory capacity on your host, you can migrate/power off VMs, or try to decrease their memory reservation.
If this doesn't work, please attach your vmkernel.log file and I'll take a look.
Thanks,
Christoph
I don't think it could be a memory problem, since you can't update without being in maintenance mode, right?