Noticed that my ESXi host (5.1.0) wasn't saving any of my configuration changes after a reboot. I dug around a little and tried to manually back up the config, but I got the errors below. Any idea how to fix this? Thanks,
/vmfs/volumes/6786c87b-5a5f675f-d205-7e31438761eb # /sbin/auto-backup.sh
--- /etc/vmware/hostd/pools.xml
+++ /tmp/auto-backup.49397//etc/vmware/hostd/pools.xml
@@ -5,7 +5,7 @@
<path>host/user</path>
</resourcePool>
<vm id="0000">
- <lastModified>2018-01-21T19:07:02.872503Z</lastModified>
+ <lastModified>2014-06-14T16:59:30.684041Z</lastModified>
<objID>7</objID>
<resourcePool>ha-root-pool</resourcePool>
</vm>
Saving current state in /bootbank
mkdir: can't create directory '/bootbank/state.49409': No space left on device
mv: can't rename '/bootbank/local.tgz.49409': No such file or directory
tar: can't open '/bootbank/state.tgz.49409': No space left on device
failed to create state.tgz
Clock updated.
Time: 19:36:36 Date: 01/21/2018 UTC
/vmfs/volumes/6786c87b-5a5f675f-d205-7e31438761eb # df -h
Filesystem Size Used Available Use% Mounted on
NFS 3.5T 2.0T 1.6T 56% /vmfs/volumes/FreeNAS
VMFS-5 41.5G 20.3G 21.2G 49% /vmfs/volumes/datastore1
vfat 4.0G 19.4M 4.0G 0% /vmfs/volumes/52c7bd74-6d620528-66a5-bc5ff4d5cb13
vfat 249.7M 134.2M 115.5M 54% /vmfs/volumes/6786c87b-5a5f675f-d205-7e31438761eb
vfat 249.7M 8.0K 249.7M 0% /vmfs/volumes/5dc2e371-3fe15827-fe1e-07562aadaf93
vfat 285.8M 193.1M 92.7M 68% /vmfs/volumes/52c7bd4d-350ee0c4-869f-bc5ff4d5cb13
/vmfs/volumes/6786c87b-5a5f675f-d205-7e31438761eb # stat -f /
File: "/"
ID: 100000000 Namelen: 127 Type: visorfs
Block size: 4096
Blocks: Total: 220707 Free: 121057 Available: 121057
Inodes: Total: 524288 Free: 521452
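(Side note for anyone hitting the same thing: df above shows the 249.7M vfat volume, which appears to be what /bootbank points to, only 54% used, yet mkdir still reports no space, so it's worth listing what is actually sitting in /bootbank. A minimal, non-destructive sketch using stock busybox commands on ESXi:)
cd /bootbank
# list everything, including any leftover state.NNNN staging files from failed backup runs
ls -la
# rank entries by size (KB) to see what is eating the volume
du -ak . | sort -nr | head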
Try vdf -h and check the /tmp or /root partition space.
You might be hitting a full ramdisk. I may be wrong, but the command above will give you the answer. If /tmp is full, check which files or directories are taking up space there. I have seen a lot of issues where hardware partner drivers keep their logs in that area, which can fill it up.
Thanks,
MS
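A minimal sketch of MS's suggestion, using only stock busybox/ESXi commands (the /tmp path is just the usual suspect named above):
vdf -h
# if a ramdisk such as /tmp shows up nearly full, rank its contents by size (KB):
du -ak /tmp 2>/dev/null | sort -nr | head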
Ramdisk usage looks fine to me; any other suggestions?
Ramdisk Size Used Available Use% Mounted on
root 32M 440K 31M 1% --
etc 28M 164K 27M 0% --
tmp 192M 4K 191M 0% --
hostdstats 223M 1M 221M 0% --
Bump
Any ideas? I'm still experiencing this issue and haven't found anything through Google.
Thanks,
Hi,
cd /bootbank
and run:
df -h
du -sk
du -xk . | sort -nr | head
then post those outputs.
Probably wildly unsafe, but I finally solved the problem with:
cd /bootbank
rm -fr state.*
Output you requested:
df -h
Filesystem Size Used Available Use% Mounted on
NFS 3.5T 2.5T 1.0T 71% /vmfs/volumes/FreeNAS
VMFS-5 442.0G 71.9G 370.1G 16% /vmfs/volumes/datastore1
vfat 4.0G 35.0M 4.0G 1% /vmfs/volumes/52c7bd74-6d620528-66a5-bc5ff4d5cb13
vfat 249.7M 130.4M 119.4M 52% /vmfs/volumes/6786c87b-5a5f675f-d205-7e31438761eb
vfat 249.7M 928.0K 248.8M 0% /vmfs/volumes/5dc2e371-3fe15827-fe1e-07562aadaf93
vfat 285.8M 194.2M 91.6M 68% /vmfs/volumes/52c7bd4d-350ee0c4-869f-bc5ff4d5cb13
du -sk ;
133860 .
du -xk . | sort -nr | head
133860 .
256 ./state.6969
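For anyone landing on this later: a slightly safer variant of the same cleanup, assuming (as in this thread) the space is being eaten by numbered state.NNNN leftovers from failed backup runs. It keeps a copy of state.tgz before deleting anything; the datastore backup path is only an example.
cd /bootbank
# keep a copy of the current config archive first (destination path is an example)
cp state.tgz /vmfs/volumes/datastore1/state.tgz.bak
# remove only the numbered staging leftovers, not state.tgz itself
rm -rf state.[0-9]*
# re-run the backup to confirm the config now saves cleanly
/sbin/auto-backup.sh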