I am trying to determine why we're getting errors stating 'Failed to open "/var/log/vmware/journaly/1432081859.209" for write: there is no space left on device' when trying to start any guest on a specific ESXi 5.1 host. After connecting to the ESXi host via SSH and running 'df', I don't see any information about /, /var, or any other OS-related file systems. I'm fairly certain the error isn't simply due to /var filling up, as I can see processes actively writing to files in /var.
~ # df -h
Filesystem Size Used Available Use% Mounted on
VMFS-5 2.0T 995.9G 1.0T 49% /vmfs/volumes/RHVM2Lun01
VMFS-5 2.0T 697.6G 1.3T 34% /vmfs/volumes/RHVM2Lun02
VMFS-5 2.0T 934.6G 1.1T 46% /vmfs/volumes/RHVM2Lun03
VMFS-5 2.0T 600.1G 1.4T 29% /vmfs/volumes/RHVM2Lun05
VMFS-5 2.0T 1.5T 500.7G 75% /vmfs/volumes/RHVMLun10
VMFS-5 2.0T 1.1T 907.5G 55% /vmfs/volumes/snap-318a75c7-RHVMLun02
VMFS-5 39.8G 30.5G 9.2G 77% /vmfs/volumes/RHVMISOs
VMFS-5 2.0T 1.0T 980.1G 52% /vmfs/volumes/RHVMLun03
VMFS-5 2.0T 1.5T 536.5G 74% /vmfs/volumes/RHVMLun07
VMFS-5 2.0T 1.7T 339.8G 83% /vmfs/volumes/RHVMLun06
VMFS-5 2.0T 1.6T 347.9G 83% /vmfs/volumes/RHVMLun08
VMFS-5 2.0T 622.7G 1.4T 31% /vmfs/volumes/RHVMLun04
VMFS-5 2.0T 770.4G 1.2T 38% /vmfs/volumes/RHVMLun02
VMFS-5 2.0T 1.4T 556.9G 73% /vmfs/volumes/RHVMLun14
VMFS-5 2.0T 1.4T 625.9G 69% /vmfs/volumes/RHVMLun11
VMFS-5 2.0T 1.3T 728.9G 64% /vmfs/volumes/RHVMLun09
VMFS-5 2.0T 1.3T 719.0G 65% /vmfs/volumes/RHVMLun05
VMFS-5 2.0T 1.4T 629.3G 69% /vmfs/volumes/RHVMLun01
VMFS-5 2.0T 1.7T 293.2G 86% /vmfs/volumes/RHVMLun13
VMFS-5 2.0T 1.5T 540.5G 73% /vmfs/volumes/RHVMLun12
VMFS-5 2.0T 1.5T 562.2G 73% /vmfs/volumes/RHVM2Lun04
VMFS-5 2.0T 218.6G 1.8T 11% /vmfs/volumes/RHVM2Lun06
VMFS-5 2.0T 437.0G 1.6T 21% /vmfs/volumes/RHVM2Lun07
VMFS-5 2.0T 473.7G 1.5T 23% /vmfs/volumes/RHVM2Lun08
VMFS-5 2.0T 180.3G 1.8T 9% /vmfs/volumes/RHVM2Lun09
VMFS-5 2.0T 68.3G 1.9T 3% /vmfs/volumes/RHVM2Lun10
VMFS-5 2.0T 180.0G 1.8T 9% /vmfs/volumes/RHVM2Lun11
VMFS-5 2.0T 56.6G 1.9T 3% /vmfs/volumes/RHVM2Lun12
VMFS-5 2.0T 126.1G 1.9T 6% /vmfs/volumes/RHVM2Lun13
VMFS-5 2.0T 136.2G 1.9T 7% /vmfs/volumes/RHVM2Lun14
VMFS-5 2.0T 155.4G 1.8T 8% /vmfs/volumes/RHVM2Lun15
VMFS-5 2.0T 543.1G 1.5T 27% /vmfs/volumes/RHVM2Lun16
VMFS-5 2.0T 35.2G 2.0T 2% /vmfs/volumes/RHVM2Lun17
VMFS-5 2.0T 269.7G 1.7T 13% /vmfs/volumes/RHVM2Lun18
VMFS-5 2.0T 153.4G 1.8T 7% /vmfs/volumes/RHVM2Lun19
vfat 249.7M 148.1M 101.6M 59% /vmfs/volumes/75bfa2e2-15f991dc-7b04-177c817047e4
vfat 249.7M 132.6M 117.1M 53% /vmfs/volumes/2d745cea-3d4f96fa-bf90-0f20ffec7c96
vfat 285.8M 208.4M 77.5M 73% /vmfs/volumes/54de1237-be403e34-3ea7-0025b5ee00bf
~ #
Where is / or /var? Is root's shell chrooted somehow?
Hi,
If you give:
vdf -h
a go, you should get the ramdisk file system information on your host. This includes root, etc, tmp, and so on.
Is that what you need?
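If vdf -h does show one of the ramdisks nearly full, a quick way to track down the culprit is to size up everything under its mount point, biggest first. This is just a sketch using standard busybox-style tools; /var/log below is an assumption, so swap in whichever mount point the full ramdisk actually backs:

```shell
# List the largest entries under the suspect mount point, biggest first.
# (/var/log is a placeholder -- use the mount point vdf flagged as full.)
du -sk /var/log/* 2>/dev/null | sort -rn | head
```

I believe on 5.x you can also get the same ramdisk figures, plus per-ramdisk inode counts, from 'esxcli system visorfs ramdisk list'; a ramdisk that has exhausted its inodes can throw the same 'no space left on device' error even when vdf still shows free space.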
I'm a relative noob when it comes to managing the OS level in ESXi, and in other ESXi environments I've worked in, I'm used to seeing df report statistics for / and /var. However, I'm guessing this is what I'm looking for. I'd assume that in this instance, /var is just using space from root. Is that a correct assumption?
Ramdisk Size Used Available Use% Mounted on
root 32M 660K 31M 2% --
etc 28M 244K 27M 0% --
tmp 192M 6M 185M 3% --
hostdstats 1053M 7M 1045M 0% --
snmptraps 1M 0B 1M 0% --
However, if I run 'du -sxk /', I see / using 1GB...
~ # du -sxk /
1057448 /
That doesn't seem to add up to me...
Thanks
I'd try to explain it, but I'd probably not do a very good job!
I found this article quite useful in understanding the partition layouts. I think it's a little old, but it's for v5! Sorry I can't help much more than that!
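For what it's worth, one likely explanation for the du/vdf mismatch (an educated guess, not verified on this host): hostd keeps its performance-stats file as a sparse file, and the busybox du on ESXi can count such a file at its full apparent size rather than the blocks it actually occupies. Notice the du figure (1057448 KB, roughly 1 GB) is close to the 1053M size of the hostdstats ramdisk. The effect is easy to demonstrate on any Linux box with GNU du (the --apparent-size flag below is GNU-specific and not available in the ESXi busybox du):

```shell
# A sparse file: 1 GB apparent size, almost no blocks allocated.
truncate -s 1G sparse.dat
du -k sparse.dat                  # allocated size: a few KB at most
du -k --apparent-size sparse.dat  # apparent size: 1048576 KB
rm sparse.dat
```

So a tool that walks the tree and sums apparent sizes can report far more "usage" than the ramdisk is really consuming.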
