jodykw82
Enthusiast

vSphere Update Manager Error code 10


Not sure why, but I can't seem to patch some of my hosts.  I was able to patch them in the past and nothing has changed, but I keep getting this error.  Anybody know how to resolve this?

Remediate entity

esxserver1.mydomain.com

The host returns esxupdate error code: 10. Cannot create, write or read a file as expected. Check the Update Manager log files and esxupdate log files for more details.

DOMAIN\username

VCENTERHOST.mydomain.com

5/15/2015 12:02:33 PM

5/15/2015 12:02:33 PM

5/15/2015 12:02:38 PM


25 Replies
RyanH84
Expert

Hi,

Have you checked the update manager log and the esxupdate log for the extra information? You'll need to SSH onto the host (or SCP) and grab it from /var/log/esxupdate.log

If you can post them here then we can take a look and see!

I've found a blog that stated rebooting the host and trying to remediate again fixed the issue; are you able to try that?

------------------------------------------------------------------------------------------------------------------------------------------------- Regards, Ryan vExpert, VCP5, VCAP5-DCA, MCITP, VCE-CIAE, NPP4 @vRyanH http://vRyan.co.uk
jodykw82
Enthusiast

I've attached the file you mentioned....

jodykw82
Enthusiast

Also, wanted to mention this is vSphere 5.5.


It seems you don't have enough free space on your ESXi server. Have a look at how much free space you have on /tmp:

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR: An esxupdate error exception was caught:

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR: Traceback (most recent call last):

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/usr/sbin/esxupdate", line 216, in main

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:     cmd.Run()

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esx5update/Cmdline.py", line 106, in Run

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Transaction.py", line 69, in DownloadMetadatas

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Downloader.py", line 268, in Get

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Downloader.py", line 170, in _getfromurl

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/urlgrabber/grabber.py", line 927, in urlgrab

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/urlgrabber/grabber.py", line 845, in _retry

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/urlgrabber/grabber.py", line 915, in retryfunc

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR:   File "/build/mts/release/bora-2302651/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/urlgrabber/grabber.py", line 1219, in _do_grab

2015-05-15T16:42:59Z esxupdate: esxupdate: ERROR: IOError: [Errno 28] No space left on device

------------------------------------------------------------------------------- If you found this or any other answer helpful, please consider to award points. (use Correct or Helpful buttons) Regards from Switzerland, B. Fernandez http://vpxa.info/
RyanH84
Expert

Ok, after taking a look at the log I can see the following:

2015-05-15T16:47:36Z esxupdate: esxupdate: ERROR: IOError: [Errno 28] No space left on device

My first check would be to look at the host and see how much free space is available on its partitions. On the same host you got the log from, run the command:

df -h

Can you post the output here?

jodykw82
Enthusiast

Which volume does it need to write to?

~ # df -h

Filesystem   Size   Used Available Use% Mounted on

VMFS-5       9.8G 881.0M      8.9G   9% /vmfs/volumes/SDR-vSphere-SRM-PH  (shared storage)

VMFS-5       1.8T  21.3G      1.8T   1% /vmfs/volumes/SDR-ESX01R-LocalDisk

VMFS-5     499.8G  61.5G    438.2G  12% /vmfs/volumes/SDR-ESX-ISOs (shared storage)

VMFS-5       4.0T  69.1G      3.9T   2% /vmfs/volumes/SDR-ESX-VMFS02-NoReplication (shared storage)

VMFS-5       4.0T   1.8T      2.2T  45% /vmfs/volumes/SDR-ESX-VMFS01-NoReplication (shared storage)

vfat       249.7M 164.3M     85.4M  66% /vmfs/volumes/5c421de1-7c870757-1c93-6142de868aa5

vfat       249.7M 165.4M     84.3M  66% /vmfs/volumes/281910e1-03b7cc92-2a9b-4fdc1c144506

vfat       285.8M 193.4M     92.4M  68% /vmfs/volumes/53d0f05d-3c8bbbaa-90f5-c81f66ea53fe

~ #

RyanH84
Expert

Hi,

Can you also try:

df -h /tmp


It looks like your vfat partitions are very small. Can you also output:


ls -al /scratch


My gut feeling is that your host doesn't have enough space for the patches to be downloaded to and then installed from. You can point your scratch partition at another datastore; check this KB article, which describes the process of changing the location of your scratch. You have a nice big local VMFS volume that you could use instead. A host reboot would be needed though!
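For reference, the KB method boils down to creating a directory on the bigger datastore and pointing the ScratchConfig.ConfiguredScratchLocation advanced setting at it. A sketch only, using the datastore name from this thread; the directory name is illustrative and this hasn't been run against your host:

```shell
# Sketch: relocate scratch to the large local VMFS volume.
# Directory name (.locker-esxserver1) is illustrative; pick your own.
mkdir /vmfs/volumes/SDR-ESX01R-LocalDisk/.locker-esxserver1

# Point the advanced setting at the new directory
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
    string /vmfs/volumes/SDR-ESX01R-LocalDisk/.locker-esxserver1

# The new scratch location only takes effect after a host reboot
reboot
```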

jodykw82
Enthusiast

df -h /tmp came back with the same information as before....

The second command came back with this....

~ # ls -al /scratch

lrwxrwxrwx    1 root     root            57 Feb 18 16:27 /scratch -> /vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/.locker

~ #

RyanH84
Expert

I updated my post above with a KB article; you got back to me too quickly! 🙂

Specifically the part about running out of space and possibly changing your scratch location to a local VMFS volume on the host.

jodykw82
Enthusiast

So I checked the scratch config and it is going here....

/vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/.locker   (this is a local datastore btw)

There is 1.8 TB of free space on that Datastore....  Could it be trying to write somewhere else?

RyanH84
Expert

Are you saying that 54323d86-26c4820f-b71e-c81f66ea53fe is the VMFS volume SDR-ESX01R-LocalDisk?

If you:

cd /var/tmp


Does that put you in: /vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/var/tmp

jodykw82
Enthusiast

~ # cd /var/tmp

/vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/.locker/var/tmp #

/vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/ is SDR-ESX01R-LocalDisk

RyanH84
Expert

Ok, now what if you try:

vdf -h


I'm interested in the bottom part of the output, Ramdisk, root, tmp, etc.

jodykw82
Enthusiast

/vmfs/volumes/54323d86-26c4820f-b71e-c81f66ea53fe/.locker/var/tmp # vdf -h

Tardisk                  Space      Used

sb.v00                    148M      148M

s.v00                     295M      295M

misc_cni.v00               24K       21K

net_bnx2.v00              300K      298K

net_bnx2.v01                1M        1M

net_cnic.v00              136K      132K

net_tg3.v00               292K      289K

scsi_bnx.v00              264K      262K

scsi_bnx.v01              196K      192K

dell_eql.v00               11M       11M

dell_eql.v01               16K       13K

dell_eql.v02              136K      133K

elxnet.v00                280K      276K

ima_be2i.v00                1M        1M

lpfc.v00                    1M        1M

scsi_be2.v00              664K      660K

net_ixgb.v00              432K      429K

scsi_mpt.v00              464K      461K

ima_qla4.v00                4M        4M

net_qlcn.v00                1M        1M

qlnative.v00                2M        2M

scsi_qla.v00              508K      504K

dvfilter.v00              688K      687K

ata_pata.v00               40K       39K

ata_pata.v01               28K       27K

ata_pata.v02               32K       30K

ata_pata.v03               32K       30K

ata_pata.v04               36K       35K

ata_pata.v05               32K       31K

ata_pata.v06               28K       27K

ata_pata.v07               36K       32K

block_cc.v00               80K       77K

ehci_ehc.v00               92K       91K

epsec_mu.v00              300K      297K

weaselin.t00               14M       14M

esx_dvfi.v00              404K      401K

xlibs.v00                   1M        1M

ipmi_ipm.v00               40K       38K

ipmi_ipm.v01               88K       87K

ipmi_ipm.v02              100K       97K

lsi_mr3.v00               180K      178K

lsi_msgp.v00              364K      363K

misc_dri.v00                4M        4M

mtip32xx.v00              180K      176K

net_e100.v00              288K      286K

net_e100.v01              232K      230K

net_enic.v00              132K      130K

net_forc.v00              120K      117K

net_igb.v00               296K      293K

net_mlx4.v00              332K      328K

net_mlx4.v01              224K      220K

net_nx_n.v00                1M        1M

net_qlge.v00              340K      336K

net_vmxn.v00              100K       98K

ohci_usb.v00               60K       58K

rste.v00                  740K      737K

sata_ahc.v00               80K       77K

sata_ata.v00               56K       52K

sata_sat.v00               60K       59K

sata_sat.v01               44K       40K

sata_sat.v02               44K       40K

sata_sat.v03               36K       32K

sata_sat.v04               32K       28K

scsi_aac.v00              168K      164K

scsi_adp.v00              412K      409K

scsi_aic.v00              280K      278K

scsi_fni.v00              160K      157K

scsi_hps.v00              164K      160K

scsi_ips.v00               96K       93K

scsi_meg.v00               92K       91K

scsi_meg.v01              164K      160K

scsi_meg.v02               88K       87K

scsi_mpt.v01              500K      497K

scsi_mpt.v02              420K      418K

scsi_qla.v01                1M        1M

uhci_usb.v00               60K       57K

xhci_xhc.v00              168K      164K

dell_con.v00                4K        3K

xorg.v00                    3M        3M

imgdb.tgz                 372K      370K

state.tgz                  40K       37K

-----

Ramdisk                   Size      Used Available Use% Mounted on

root                       32M      660K       31M   2% --

etc                        28M      324K       27M   1% --

tmp                       192M      192M        0B 100% --

hostdstats                803M        5M      797M   0% --

snmptraps                   1M        0B        1M   0% --

RyanH84
Expert

Hi!


That is a bingo! 🙂

tmp                   192M  192M    0B 100% --

Your tmp directory is full! I'm guessing you are applying a fair few patches?

If you do:

ls -la /tmp


What do you get?

jodykw82
Enthusiast

This is what it comes back with.....

~ # ls -la /tmp

total 191720

drwxrwxrwt    1 root     root           512 May 15 21:04 .

drwxr-xr-x    1 root     root           512 May 15 16:50 ..

-rw-------    1 root     root            36 May 15 21:05 probe.session

-rw-r--r--    1 root     root     186411672 May 15 21:04 ql_ima.log

-rw-r--r--    1 root     root       4894152 May 15 21:04 ql_ima_sdm.log

-rw-r--r--    1 root     root       5000089 May 15 06:44 ql_ima_sdm.log_old

-rw-r--r--    1 root     root             0 May 15 21:04 snmpd-cvtcimsnmp.xml

-rw-------    1 root     root             0 May 15 19:54 uXypwi

~ #

How can I find out what volume that tmp directory is on?

RyanH84
Expert

Ok so you have three logs:

-rw-r--r--    1 root     root     186411672 May 15 21:04 ql_ima.log

-rw-r--r--    1 root     root       4894152 May 15 21:04 ql_ima_sdm.log

-rw-r--r--    1 root     root       5000089 May 15 06:44 ql_ima_sdm.log_old

that are filling up all of your space, specifically ql_ima.log.

To be honest, I'd be tempted to keep them by doing the following:

mv /tmp /tmp.old   (keep your files by renaming the temp directory to tmp.old)

mkdir /tmp      (Creates a new /tmp directory)

services.sh restart     (restarts host services; check that /tmp repopulates with some new files)

vdf -h     (check that /tmp is now reporting low usage)

Try your updates again!
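The rotate-and-recreate pattern in the steps above can be sketched generically; the snippet below uses a sandbox directory as a stand-in for the host's filesystem root (services.sh and vdf are ESXi-specific, so they're omitted here):

```shell
#!/bin/sh
# Sketch of the rotate-and-recreate pattern from the steps above,
# demonstrated on a sandbox directory rather than a live ESXi /tmp.
set -e
sandbox=$(mktemp -d)                  # stand-in for the host's filesystem root
mkdir "$sandbox/tmp"
printf 'oversized log' > "$sandbox/tmp/ql_ima.log"

mv "$sandbox/tmp" "$sandbox/tmp.old"  # keep the old logs for later inspection
mkdir "$sandbox/tmp"                  # fresh, empty tmp with free space again

ls -la "$sandbox/tmp.old"             # old logs survive in tmp.old
ls -la "$sandbox/tmp"                 # new tmp starts empty
```

On the real host the same two moves free the ramdisk immediately, since the old files no longer live under the mounted /tmp path.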



jodykw82
Enthusiast

That makes sense.  Let me try that and see what happens.

jodykw82
Enthusiast

I renamed tmp to tmp.old and I created a new tmp directory and restarted services.

I'm running the patches now and so far so good; it's actually patching.

However, I'm still a little hazy on where that tmp directory actually is. How can I find that out?

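To the open question above: in the earlier vdf -h output, tmp appears under the Ramdisk section, which means /tmp is an in-memory visorfs ramdisk rather than a directory on any datastore (and why it is capped at 192M). A sketch of how one might confirm this on the host; these are ESXi-specific commands shown for illustration, not run here:

```shell
# /tmp appeared under "Ramdisk" in vdf -h, so it is an in-memory
# visorfs ramdisk, not a directory on a datastore.
esxcli system visorfs ramdisk list   # lists the host's ramdisks, including tmp

vdf -h                               # the Ramdisk section shows tmp's size and usage
```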