Contributor

Again: The ramdisk 'root' is full. As a result, the file /var/run/vmware/tickets/vmtck could not be written.

Hi,

I'm running ESXi 5.0, build 1024429.

I already defined a new scratch location on a local disk:

/var/log # cat /etc/vmware/locker.conf

/vmfs/volumes/4f360e13-e321f854-e5d5-5404a6a6930b/.locker-ESX

ScratchConfig.CurrentScratchLocation is set to the directory above.
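For cross-checking, the configured scratch path can be read back from locker.conf. A minimal sketch; `LOCKER` is a parameterization of mine so the check can be exercised off an ESXi host (on the host the default path applies):

```shell
# Sketch: read the configured scratch path back from locker.conf.
# LOCKER is an assumption of this sketch, parameterized for off-host use.
LOCKER=${LOCKER:-/etc/vmware/locker.conf}
if [ -r "$LOCKER" ]; then
    scratch=$(awk '{print $1; exit}' "$LOCKER")
    echo "configured scratch: $scratch"
else
    echo "no locker.conf at $LOCKER"
fi
```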

I have already searched around, but did not find any other helpful tips.

Uptime = 43 days

What could be wrong?

THANKS!

I have enough space on the disks:

/vmfs/volumes # ls -la

drwxr-xr-x    1 root     root                512 Jun 11 21:16 .

drwxr-xr-x    1 root     root                512 Apr 29 07:42 ..

drwxrwxrwx    1 root     root               4096 Jun  7 18:52 3eb808ad-861759cb

drwxr-xr-x    1 root     root                  8 Jan  1  1970 4f360dda-fe173332-dce7-5404a6a6930b

drwxr-xr-t    1 root     root               2520 Apr 27 22:08 4f360e13-e321f854-e5d5-5404a6a6930b

drwxr-xr-x    1 root     root                  8 Jan  1  1970 4f360e15-fab79dbc-67ac-5404a6a6930b

drwxr-xr-t    1 root     root               2100 May  1 10:13 511c920e-9f0060d2-476e-5404a6a6930b

drwxr-xr-x    1 root     root                  8 Jan  1  1970 53d79975-9240cbf4-53d4-8b4ec3ff6ab6

drwxr-xr-x    1 root     root                  8 Jan  1  1970 7d555e71-a16c4a3b-8e37-915dab08b1ee

lrwxr-xr-x    1 root     root                 17 Jun 11 21:16 Backup -> d4dd01b9-5ee8a945

lrwxr-xr-x    1 root     root                 17 Jun 11 21:16 ISO -> 3eb808ad-861759cb

drwxrwxrwx    1 root     root               4096 Jun 11 02:41 d4dd01b9-5ee8a945

lrwxr-xr-x    1 root     root                 35 Jun 11 21:16 datastore1 -> 4f360e13-e321f854-e5d5-5404a6a6930b

lrwxr-xr-x    1 root     root                 35 Jun 11 21:16 md0 -> 511c920e-9f0060d2-476e-5404a6a6930b

/vmfs/volumes # df -h

Filesystem   Size   Used Available Use% Mounted on

NFS          1.8T   1.7T     92.5G  95% /vmfs/volumes/Backup

NFS          1.8T   1.7T     92.5G  95% /vmfs/volumes/ISO

VMFS-5     926.5G 571.2G    355.3G  62% /vmfs/volumes/datastore1

VMFS-5     930.8G 482.2G    448.6G  52% /vmfs/volumes/md0

vfat         4.0G  15.6M      4.0G   0% /vmfs/volumes/4f360e15-fab79dbc-67ac-5404a6a6930b

vfat       249.7M 128.4M    121.3M  51% /vmfs/volumes/7d555e71-a16c4a3b-8e37-915dab08b1ee

vfat       249.7M 128.4M    121.3M  51% /vmfs/volumes/53d79975-9240cbf4-53d4-8b4ec3ff6ab6

vfat       285.8M 142.9M    142.9M  50% /vmfs/volumes/4f360dda-fe173332-dce7-5404a6a6930b

/vmfs/volumes/4f360e13-e321f854-e5d5-5404a6a6930b/.locker-ESX # du -cksh

35.1M   .

35.1M   total

/vmfs/volumes/4f360e13-e321f854-e5d5-5404a6a6930b/.locker-ESX #

6 Replies
Immortal

To work around this issue when you do not want to upgrade:

  1. Connect to the ESXi host using SSH. For more information, see Using ESXi Shell in ESXi 5.x (2004746).
  2. Check if SNMP is creating too many .trp files in the /var/spool/snmp directory on the ESXi host by running the command:

    ls /var/spool/snmp | wc -l

    Note: If the output indicates that the value is 2000 or more, SNMP may be exhausting the inodes in the root ramdisk.

  3. Delete the .trp files in the /var/spool/snmp/ directory by running the commands:

    # cd /var/spool/snmp
    # for i in $(ls | grep trp); do rm -f $i; done


  4. Change directory to /etc/vmware/ and back up the snmp.xml file by running the commands:

    # cd /etc/vmware
    # mv snmp.xml snmp.xml.bkup


  5. Create a new file named snmp.xml and open it using a text editor. For more information, see Editing files on an ESX host using vi or nano (1020302)
  6. Copy and paste these contents to the file:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <config>
    <snmpSettings><enable>false</enable><port>161</port><syscontact></syscontact><syslocation></syslocation>
    <EnvEventSource>indications</EnvEventSource><communities></communities><loglevel>info</loglevel><authProtocol></authProtocol><privProtocol></privProtocol></snmpSettings>
    </config>


  7. Save and close the file. 
  8. Reconfigure SNMP on the affected host by running the command:

    # esxcli system snmp set --enable=true

  9. To confirm the SNMP services are running normally again, run the command:

    # esxcli system snmp get

    Here is an example of the output:
    /etc/vmware # esxcli system snmp get
       Authentication:
       Communities:
       Enable: true
       Engineid: 00000063000000a10a0121cf
       Hwsrc: indications
       Loglevel: info
       Notraps:
       Port: 161
       Privacy:
       Remoteusers:
       Syscontact:
       Syslocation:
       Targets:
       Users:
       V3targets:

To ensure that the issue does not recur, you can temporarily disable snmpd to stop logging. To stop the snmpd service, run this command:

# /etc/init.d/snmpd stop
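Taken together, steps 2-3 above can be sketched as one guarded snippet. `SPOOL` is a parameterization of mine so the logic can be tried safely outside an ESXi host; on the host it defaults to /var/spool/snmp as in the steps above:

```shell
# Guarded sketch of the .trp check and cleanup (steps 2-3 above).
SPOOL=${SPOOL:-/var/spool/snmp}
count=$(ls "$SPOOL" 2>/dev/null | wc -l | tr -d ' ')
echo "files in $SPOOL: $count"
# Rule of thumb from step 2: 2000+ files suggests inode exhaustion.
if [ "$count" -ge 2000 ]; then
    find "$SPOOL" -name '*.trp' -exec rm -f {} +
fi
```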

Contributor

Hello,

thanks for your answer.

But I don't use SNMP.

There is only a cron directory in /var/spool:

/var/spool # ls -la

drwxr-xr-x    1 root     root                512 Apr 29 07:42 .

drwxr-xr-x    1 root     root                512 Apr 29 07:43 ..

drwxr-xr-x    1 root     root                512 Apr 29 07:42 cron

Also, I get this error:

/var/spool # esxcli system snmp get

Error: Unknown command or namespace system snmp get

esxcli system snmp set --enable=true

Error: Unknown command or namespace system snmp set
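A likely explanation for those errors: the `esxcli system snmp` namespace is not available on ESXi 5.0 (it arrived with 5.1; on 5.0 the agent is managed via vicfg-snmp and /etc/vmware/snmp.xml). A sketch for reading the agent state on 5.0; `SNMPXML` is a parameterization of mine so it can be exercised off-host:

```shell
# On ESXi 5.0 the agent state lives in snmp.xml; read it directly.
# SNMPXML is parameterized (an assumption of this sketch) for off-host use.
SNMPXML=${SNMPXML:-/etc/vmware/snmp.xml}
if [ -r "$SNMPXML" ]; then
    grep -o '<enable>[^<]*</enable>' "$SNMPXML"
else
    echo "no snmp.xml at $SNMPXML"
fi
```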

Contributor

Hi again,

After 83 days of uptime, I have the same error again:

The ramdisk 'root' is full.  As a result, the file

/var/run/vmware/tickets/vmtck-524f3019-d48d-35 could not be written.

error

21-07-2013 11:03:33

dmesg shows me:

2013-07-21T09:06:57.393Z cpu3:4348)WARNING: VisorFSRam: 227: Cannot extend visorfs file /var/run/vmware/tickets/vmtck-5268173f-7917-2d because its ramdisk (root) is full.

2013-07-21T09:08:35.227Z cpu2:3238)WARNING: VisorFSRam: 227: Cannot extend visorfs file /var/run/vmware/tickets/vmtck-5278c72a-12e6-2c because its ramdisk (root) is full.

2013-07-21T09:12:17.652Z cpu3:4987442)WARNING: VisorFSRam: 227: Cannot extend visorfs file /var/run/vmware/tickets/vmtck-60a33753-4feb-43 because its ramdisk (root) is full.

2013-07-21T09:13:37.233Z cpu2:4348)WARNING: VisorFSRam: 227: Cannot extend visorfs file /var/run/vmware/tickets/vmtck-523e9b0d-4697-13 because its ramdisk (root) is full.

But the disks are not full:

/var/run/vmware/tickets # df -h

Filesystem   Size   Used Available Use% Mounted on

NFS          1.8T   1.7T     44.8G  98% /vmfs/volumes/Backup

NFS          1.8T   1.7T     44.8G  98% /vmfs/volumes/ISO

VMFS-5     926.5G 571.2G    355.3G  62% /vmfs/volumes/datastore1

VMFS-5     930.8G 482.2G    448.6G  52% /vmfs/volumes/md0

vfat         4.0G  15.6M      4.0G   0% /vmfs/volumes/4f360e15-fab79dbc-67ac-5404a6a6930b

vfat       249.7M 128.4M    121.3M  51% /vmfs/volumes/7d555e71-a16c4a3b-8e37-915dab08b1ee

vfat       249.7M 128.4M    121.3M  51% /vmfs/volumes/53d79975-9240cbf4-53d4-8b4ec3ff6ab6

vfat       285.8M 142.9M    142.9M  50% /vmfs/volumes/4f360dda-fe173332-dce7-5404a6a6930b

/var/run/vmware/tickets #
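Worth noting here: `df` only lists mounted datastores, while the visorfs ramdisks (root, etc, tmp, hostdstats) are separate in-memory filesystems that can fill up even when every datastore has free space. A sketch to list their fill levels, guarded because esxcli exists only on the host:

```shell
# List ramdisk fill levels; df does not show these.
if command -v esxcli >/dev/null 2>&1; then
    esxcli system visorfs ramdisk list
else
    echo "esxcli not available; run on the ESXi host"
fi
```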

SNMP is disabled.

The locker files are on datastore1.

I really don't know where to start.

Thanks

Commander

Hi,

What is the server hardware? If it is an HP ProLiant Gen8, there is a known issue with it: VMware ESXi Server - HP ProLiant Gen8 Servers - ESXi 5: The /var/log/hpHelper.log...

Regards,

Julien

Regards, J.Varela http://vthink.fr
Enthusiast

We applied this patch (hp-esxi5.0uX-bundle-1.3.3-1) to our ESXi 5.0 hosts. Our issue was that hpHelper.log grew until it filled /var/log. After the patch, the log is symlinked to /scratch, and the issue is gone.

Hot Shot

In addition to what the others have said, if you have more than one host in a cluster, you can compare the file systems to help determine what is filling up the root ramdisk. Also, to show the ramdisk utilization, use vdf -h. You can use du -sh to look at file growth over time or to compare with other hosts in the cluster.
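A sketch of that comparison: vdf shows the ramdisk fill levels (ESXi only, so it is guarded), and du ranks the largest consumers under a directory so two hosts can be compared side by side. `TARGET` is a parameter of mine, defaulting to /var/run where the ticket files live:

```shell
# Show ramdisk utilization (ESXi only), then rank the biggest
# consumers under TARGET for host-to-host comparison.
command -v vdf >/dev/null 2>&1 && vdf -h
TARGET=${TARGET:-/var/run}
du -sk "$TARGET"/* 2>/dev/null | sort -rn | head -10
```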
