Hi,
I have a visor host whose ramdisk currently has 290 MB, with only 5 MB left. I would like to increase its partition space.
Is there a mechanism to grow this without bringing down the OS? I'm looking for something along the lines of gpart.
Can anyone help me with this?
Thanks in advance,
Sudks
Please check how much memory capacity is left on your host. You can do this by connecting the VI Client to your host and opening the Resource Allocation tab. You need 750 MB available for the upgrade to succeed.
To free up memory capacity on your host, you can migrate or power off VMs, or try decreasing their memory reservations.
If this doesn't work, please attach your vmkernel.log file and I'll take a look.
Thanks,
Christoph
I'll try that and post the results in a few minutes. Thanks.
I suppose you have to be in maintenance mode. Could you try the operation again and attach the vmkernel log from /var/log/messages?
Thanks,
Christoph
I don't think it can be a memory problem, since you can't update without being in maintenance mode, right?
But that would make sense, since I'm trying to allocate a 750 MB ramdisk (visorfs).
Before the update I was at least able to SCAN the server for possible updates without being in maintenance mode. That is no longer possible after that damned update.
Same problem.
mount -t visorfs -o 1,750,01777,updatestg updatestg /tmp/stage
mount: mounting updatestg on /tmp/stage failed: No space left on device
File "/lib/python25-visor.zip/pythonroot/vmware/esx4update/platform/_visor.py", line 1029, in _mountVisorFS
    _mountVisorFS(STAGENAME, mountpath, minMB, maxMB)
VisorSetupError: Unable to mount visorfs. Mount returned error (255). Please see esxupdate.log for more details.
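For what it's worth, the option string in that mount line seems to map onto the _mountVisorFS(STAGENAME, mountpath, minMB, maxMB) signature in the traceback, i.e. "<minMB>,<maxMB>,<permissions>,<name>" — so "1,750" would mean a 1 MB minimum and 750 MB maximum for the ramdisk. A small sketch (build_visorfs_mount is a hypothetical helper for illustration, not a VMware tool) that reconstructs the exact command esxupdate runs:

```shell
# Hedged reading of the visorfs mount options: "<minMB>,<maxMB>,<permissions>,<name>",
# matching the _mountVisorFS(STAGENAME, mountpath, minMB, maxMB) call in the traceback.
# build_visorfs_mount is a hypothetical helper, not part of ESXi.
build_visorfs_mount() {
  name=$1; mountpath=$2; min_mb=$3; max_mb=$4
  echo "mount -t visorfs -o ${min_mb},${max_mb},01777,${name} ${name} ${mountpath}"
}
# Rebuild the command esxupdate executes:
build_visorfs_mount updatestg /tmp/stage 1 750
```

Under that reading, even the 10 MB attempt below failing suggests the ramdisk pool itself is exhausted, not that 750 MB is too large a request.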
I can confirm that this problem happens even in maintenance mode with all VMs stopped. I also tried the failing mount command (the one esxupdate executes) with a smaller RAM request, but got the same result:
The regular command executed by esxupdate (750 MB):
~ # mount -t visorfs -o 1,750,01777,updatestg updatestg /tmp/stage
mount: mounting updatestg on /tmp/stage failed: No space left on device
And a blind shot, using only 10 MB:
~ # mount -t visorfs -o 1,10,01777,updatestg updatestg /tmp/stage
mount: mounting updatestg on /tmp/stage failed: No space left on device
~ #
I'm baffled. And the most irritating thing about it is that everything worked perfectly before I applied an update with the Host Update Utility.
Can you upload the vmkernel log from /var/log/messages?
Thanks,
Christoph
Here it is:
~ # grep vmkernel /var/log/messages
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:8582)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu1:4123)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:4330)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu0:4125)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:4330)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu0:4122)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:4330)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu0:4124)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:4330)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 19:06:11 vmkernel: 0:08:00:04.042 cpu3:8097)DevFS: 2370: Unable to find device: 5d3b23f0-mx-000001-delta.vmdk
Feb 3 19:16:34 vmkernel: 0:08:10:27.145 cpu2:106066)WARNING: UserObj: 565: Failed to crossdup fd 6, fs: def5 oid: 1500000003000000e type CHAR: Busy
Feb 3 19:21:54 vmkernel: 0:08:15:46.450 cpu0:8330)DevFS: 2370: Unable to find device: fe39205-sbs-000001-delta.vmdk
~ #
It was recently reset while I was trying to free some disk space on "/".
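The repeated "p2m update: cannot reserve ... avail 1408" lines look like memory-reservation pressure. A quick way to gauge how often that failure recurs (two sample lines from the log above are inlined here for illustration; on the host you would grep /var/log/messages directly):

```shell
# Count p2m "cannot reserve" failures in vmkernel messages.
# Sample lines are inlined; on the host, feed in /var/log/messages instead.
count_reserve_failures() {
  grep -c 'cannot reserve'
}
count_reserve_failures <<'EOF'
Feb 3 18:51:47 vmkernel: 0:07:45:39.215 cpu2:8582)Alloc: 8595: p2m update: cannot reserve - cur 1408 1408 rsvd 0 req 1 avail 1408
Feb 3 19:06:11 vmkernel: 0:08:00:04.042 cpu3:8097)DevFS: 2370: Unable to find device: 5d3b23f0-mx-000001-delta.vmdk
EOF
```

If that count keeps climbing between update attempts, it would support the theory that the host simply has no memory left to reserve for the staging ramdisk.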
visorfs is a RAM disk. Check where you have a scratch location defined. If you don't have a scratch location set, all your temp files and host swap will be written to the RAM disk. The scratch location is set under vSphere Client / Configuration / Software / Advanced Settings / Scratch Location.
RoFz
You should always start your own post for problems.
Moving scratch was one of my first steps. It's in /vmfs/volumes/4ac7a9f8-7a60d538-50eb-001cc0b96dd2/scratch, which has plenty of space:
~ # df -h
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.6M 35.7M 84% /
vmfs3 232.8G 232.5G 206.0M 100% /vmfs/volumes/4b5e2115-04d71ef8-9b67-001cc0b96dd2
vfat 285.9M 234.4M 51.5M 82% /vmfs/volumes/c2a427e4-2d317086-fef9-b5750d88536c
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/0ae02d7f-7ff674e8-ca48-3ad4cd18b555
vfat 4.0G 108.5M 3.9G 3% /vmfs/volumes/4ac7a789-f4515280-7f27-001cc0b96dd2
vfat 249.7M 59.3M 190.4M 24% /vmfs/volumes/c5e95871-832a9a6c-690e-1985e51b5add
vmfs3 460.8G 386.0G 74.7G 84% /vmfs/volumes/4ac7a9f8-7a60d538-50eb-001cc0b96dd2
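Worth noting from that output: it isn't only the visorfs root at 84% — one of the VMFS datastores is at 100% with only 206 MB free. A quick filter to pull out the nearly-full filesystems (the rows are inlined here for illustration; on the host you would pipe `df -h` straight into the same awk):

```shell
# Print mount point and usage for filesystems at or above 80% use,
# from `df -h`-style output (header row skipped).
df_filter() {
  awk 'NR > 1 && $5 + 0 >= 80 { print $6, $5 }'
}
df_filter <<'EOF'
Filesystem Size Used Available Use% Mounted on
visorfs 218.3M 182.6M 35.7M 84% /
vmfs3 232.8G 232.5G 206.0M 100% /vmfs/volumes/4b5e2115-04d71ef8-9b67-001cc0b96dd2
vfat 249.7M 60.3M 189.4M 24% /vmfs/volumes/0ae02d7f-7ff674e8-ca48-3ad4cd18b555
EOF
```

The `$5 + 0` trick makes awk read "84%" as the number 84, so no separate stripping of the percent sign is needed.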
Hello,
I'm having the same problem, had a post for it too: link
Plenty of mem, scratch is on a default partition:
vfat 4095 346 3749 8% /vmfs/volumes/4ad18088-b5863071-9e7b-002618133353
Etc.
Has anyone found a solution yet to the problem of not being able to update?
I was having the exact same issue. I was also unable to add the host back to vCenter (after having removed it), and I got an unknown error related to the ramdisk when trying to enable HA for the host. This particular host had 2 GB of RAM (it's a lab box) and showed visorfs using 290 MB of the 230 MB provisioned, which was obviously a problem. The only solution I found was to add more physical RAM to the server; I added another 1 GB module, and that fixed everything with no further intervention required.
I have the same problem at the moment. Any updates on this issue?
I had this exact same problem. The fix was simply to free up some memory from a resource pool and then reserve it for the system. I gave it 800 MB, rebooted, and then all was fine.