ph1b3r
Contributor

Installed ESXi on USB/Flash but swap does not seem to have been placed on ramdisk

Hello,

I have read many times that when you install ESXi 5.1 onto flash media, it creates scratch space on a 4GB ramdisk at /tmp/scratch, so that the USB drive itself is not used for temporary data and its life is prolonged.  However, this does not seem to match my experience, and I was wondering if someone can help me confirm whether my scratch location is actually on a ramdisk or on the USB thumb drive.  I should also mention that during installation ESXi correctly identified the disk type as "flash" and the disk as a "Generic USB drive".

After installation I noticed two things:

1. Logs of user login activity survive a reboot (which I do not believe they should, since the ramdisk is dissolved on reboot)

2. There is no directory with the path /tmp/scratch

Unfortunately I cannot directly identify where the scratch location is (whether on USB or in RAM), because there is no "/etc/fstab" file to look at (although I do have one under ESXi 5.0 on a hard-disk based install?), and there does not appear to be a "mount" command I can use to inspect the existing mounts either.

Can anyone tell me how I can verify that the directory used for temporary files is in fact a ramdisk and not on the USB drive itself?  Hours of searching have only turned up forum posts and documentation stating unquestionably that the scratch location will be "/tmp/scratch", but that just isn't the case here.

Thanks in advance for all of your help.

a_p_
Leadership

Welcome to the Community,

it is correct that the logs are located on a RAM disk when you install ESXi on a USB device, but the space there is limited and no scratch partition is created. This is why, in this case, you should create a persistent scratch location as shown in http://kb.vmware.com/kb/1033696.
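
In essence, the KB has you point the ScratchConfig.ConfiguredScratchLocation advanced setting at a folder on a persistent datastore and then reboot. A rough sketch, with placeholder datastore and folder names you would adapt:

~ # mkdir /vmfs/volumes/DatastoreName/.locker-ESXi01
~ # vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/DatastoreName/.locker-ESXi01
~ # reboot    # the new scratch location is only used after a reboot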

André

ph1b3r
Contributor

It is very interesting that no scratch partition is made; that's a new one to me.  But my question remains: how do I verify that a ramdisk was actually created and is being used?

Although, as you say, logs should be placed on the ramdisk by default (that's largely why it was created), when I reboot I can still read the logs that were generated before the reboot.  As I understand it, those logs should have been erased during the reboot if they were indeed placed on a ramdisk.

If my understanding of how the logs should disappear is correct, it would suggest there is something very wrong with this feature that could be a problem for many people.  I assume I am not correct, because I am by no means an expert in VMware products, but I don't know how to prove that, and I'm hoping someone can help me improve my understanding in this area.
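
One crude test I can think of (my own idea, not from any documentation): drop a marker file into /tmp, which should live on the ramdisk, and see whether it survives a reboot:

~ # touch /tmp/ramdisk-marker
~ # reboot
~ # ls -l /tmp/ramdisk-marker    # run after the host is back up; if the file still exists, /tmp wasn't volatile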

Edit: In addition, when installed on flash, isn't ESXi supposed to warn you in the vSphere Client that logs are not being stored persistently (unless you change the scratch directory, of course)?  ESXi does not give me any such warning.

a_p_
Leadership

What do you see when you run the following commands from the console (or SSH)?

  • vdf -h
  • df -h
  • ls -lh /dev/disks

Here's an example from an HDD installation (with a scratch partition):

~ # vdf -h

~snip~

-----
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M      436K       31M   1% --
etc                        28M      140K       27M   0% --
tmp                       192M        4K      191M   0% --
hostdstats                 81M        1M       79M   1% --

~ # df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-5      35.0G 971.0M     34.1G   3% /vmfs/volumes/datastore1
vfat         4.0G   5.8M      4.0G   0% /vmfs/volumes/4feba845-02418b29-14d6-000c2976faee
vfat       249.7M 130.1M    119.6M  52% /vmfs/volumes/86d26c76-4e38d309-631a-8844386b0a75
vfat       249.7M   8.0K    249.7M   0% /vmfs/volumes/131d727f-4c64967b-3c43-09ad80b0fe4c
vfat       285.8M 202.0M     83.8M  71% /vmfs/volumes/4feba83e-d4d44475-bb0d-000c2976faee

~ # ls -lh /dev/disks
-rw-------    1 root     root       40.0G Jan  1 19:18 mpx.vmhba1:C0:T0:L0
-rw-------    1 root     root        4.0M Jan  1 19:18 mpx.vmhba1:C0:T0:L0:1
-rw-------    1 root     root        4.0G Jan  1 19:18 mpx.vmhba1:C0:T0:L0:2
-rw-------    1 root     root       35.1G Jan  1 19:18 mpx.vmhba1:C0:T0:L0:3
-rw-------    1 root     root      250.0M Jan  1 19:18 mpx.vmhba1:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan  1 19:18 mpx.vmhba1:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan  1 19:18 mpx.vmhba1:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan  1 19:18 mpx.vmhba1:C0:T0:L0:8

André

ph1b3r
Contributor

In the vSphere client, the ScratchConfig.ConfiguredScratchLocation is /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8/.locker

Also, just to let you know I have installed ESXi on a 16GB flash drive.

Here is what I see with those commands:

~ # vdf -h
<snip>
-----
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M      412K       31M   1% --
etc                        28M      156K       27M   0% --
tmp                       192M        4K      191M   0% --
hostdstats                249M        1M      247M   0% --

~ # df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-5     596.0G 987.0M    595.0G   0% /vmfs/volumes/640GB_WD_Black_AALS-1
VMFS-5     931.2G 975.0M    930.3G   0% /vmfs/volumes/1TB_WD_Black
vfat       249.7M 130.2M    119.6M  52% /vmfs/volumes/8c231eec-aa4ac1c8-4b1d-14cf7ae5845e
vfat       249.7M   8.0K    249.7M   0% /vmfs/volumes/7cb258e0-f4980796-4918-4766c02ae6a8
vfat       285.8M 201.9M     83.9M  71% /vmfs/volumes/50df4c30-82b86e61-8522-001517197ad8

~ # ls -lh /dev/disks
-rw-------    1 root     root       15.0G Jan  1 17:50 mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root        4.0M Jan  1 17:50 mpx.vmhba32:C0:T0:L0:1
-rw-------    1 root     root      250.0M Jan  1 17:50 mpx.vmhba32:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan  1 17:50 mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan  1 17:50 mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan  1 17:50 mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root      931.5G Jan  1 17:50 t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATR1954950
-rw-------    1 root     root      931.5G Jan  1 17:50 t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATR1954950:1
-rw-------    1 root     root      596.2G Jan  1 17:50 t10.ATA_____WDC_WD6401AALS2D00L3B2________________________WD2DWMASY3506632
-rw-------    1 root     root      596.2G Jan  1 17:50 t10.ATA_____WDC_WD6401AALS2D00L3B2________________________WD2DWMASY3506632:1
lrwxrwxrwx    1 root     root          20 Jan  1 17:50 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0
lrwxrwxrwx    1 root     root          22 Jan  1 17:50 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1
lrwxrwxrwx    1 root     root          22 Jan  1 17:50 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5
lrwxrwxrwx    1 root     root          22 Jan  1 17:50 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6
lrwxrwxrwx    1 root     root          22 Jan  1 17:50 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7
lrwxrwxrwx    1 root     root          22 Jan  1 17:50 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8
lrwxrwxrwx    1 root     root          74 Jan  1 17:50 vml.0100000000202020202057442d574341545231393534393530574443205744 -> t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATR1954950
lrwxrwxrwx    1 root     root          76 Jan  1 17:50 vml.0100000000202020202057442d574341545231393534393530574443205744:1 -> t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATR1954950:1
lrwxrwxrwx    1 root     root          74 Jan  1 17:50 vml.0100000000202020202057442d574d41535933353036363332574443205744 -> t10.ATA_____WDC_WD6401AALS2D00L3B2________________________WD2DWMASY3506632
lrwxrwxrwx    1 root     root          76 Jan  1 17:50 vml.0100000000202020202057442d574d41535933353036363332574443205744:1 -> t10.ATA_____WDC_WD6401AALS2D00L3B2________________________WD2DWMASY3506632:1

a_p_
Leadership

From the output of the commands you can see that a) a ramdisk is created, b) no 4GB scratch partition exists, and c) only ~1GB of the USB stick's capacity is used by ESXi. For details about ESXi partitioning see e.g. http://rickardnobel.se/esxi-5-0-partitions/

Now the interesting part is to find out which partition the ScratchLocation points to. By the way, the "Store" partition's UUID (the one you highlighted) does not match the one for the ScratchLocation.

I have to admit that I never took a closer look at this default location, but always changed it immediately after installation. What do you see when you run

ls -la /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8

André

a_p_
Leadership

As I was interested in what this actually looks like, I installed ESXi on a USB stick myself. The default scratch location is indeed "/tmp/scratch". I'm still unsure why you see the logs after a reboot, though.

André

ph1b3r
Contributor

You're right, it doesn't.  I checked the first few characters and the last group after the dash, and those matched up, but the full string doesn't.  That's a lesson for me on why I shouldn't take shortcuts and make assumptions.

Here is the output you were looking for:

/tmp # ls -la /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8
drwxr-xr-t    1 root     root          1260 Dec 28 23:08 .
drwxr-xr-x    1 root     root           512 Jan  1 20:18 ..
-r--------    1 root     root       3211264 Dec 28 22:52 .fbb.sf
-r--------    1 root     root     267026432 Dec 28 22:52 .fdc.sf
drwxr-xr-x    1 root     root           840 Dec 28 23:08 .locker
-r--------    1 root     root       1179648 Dec 28 22:52 .pb2.sf
-r--------    1 root     root     268435456 Dec 28 22:52 .pbc.sf
-r--------    1 root     root     262733824 Dec 28 22:52 .sbc.sf
-r--------    1 root     root       4194304 Dec 28 22:52 .vh.sf

ph1b3r
Contributor

That is what I have read everywhere too (that the default location is /tmp/scratch), which is why I was very confused when that's not what my default was.  To be sure I didn't miss something, I have now installed ESXi 5.1 twice, and the location I reported has been the same both times.  What seems confusing to me is that each time I installed it, ESXi correctly identified the destination disk as a USB drive with a "flash" media type, so I'm not sure why my install would differ from any other.

a_p_
Leadership

What could make a difference is whether there's already a datastore available on the host. I did a fresh installation without any partitions created on the HDD.

André

ph1b3r
Contributor

My first install was on the same system but without any hard drives, and it behaved the same way.  I added the two hard drives noted above for the second install because I wanted to make sure that the USB drive was being detected differently from a traditional HDD.

Between the two installs I accidentally unplugged the USB drive while it was formatting and had to take it to another machine to remove all partitions and data, since the corrupted partitions kept it from being recognized.  So I know that, at least for the second install, the drive was completely blank.

a_p_
Leadership

At least judging from the files in the folder (your previous post), this looks like a VMFS datastore. Do you see the UUID associated with a datastore when running e.g.

ls -la /vmfs/volumes

Anyway, I'm running a second test installation at the moment, this time with a VMFS datastore already created on the HDD.

André

a_p_
Leadership

In the new installation with an existing datastore on the HDD, the ".locker" folder was created on the existing datastore. This explains why the logs survived the reboot. You can easily see the location of the "scratch" folder by running

ls -al /

which shows the link to the currently used folder: either "/tmp/scratch" for a "blank" installation, or "/vmfs/volumes/<UUID>/.locker" when a datastore already exists.
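
Given the ScratchConfig.ConfiguredScratchLocation you posted earlier, on your host that should show something like (the -d keeps ls from listing the target's contents):

~ # ls -ld /scratch
lrwxrwxrwx    1 root     root            57 Jan  1 22:10 /scratch -> /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8/.locker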


André

ph1b3r
Contributor

I just tried installing on a different USB stick, with more or less the same results.  When I rebooted using the original USB stick (the one I opened this thread for), I did see that the scratch directory has changed somewhat.  It is now showing as:  /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8/.locker

~ # ls -la /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8
drwxr-xr-t    1 root     root          1260 Dec 28 23:08 .
drwxr-xr-x    1 root     root           512 Jan  1 22:27 ..
-r--------    1 root     root       3211264 Dec 28 22:52 .fbb.sf
-r--------    1 root     root     267026432 Dec 28 22:52 .fdc.sf
drwxr-xr-x    1 root     root           840 Dec 28 23:08 .locker
-r--------    1 root     root       1179648 Dec 28 22:52 .pb2.sf
-r--------    1 root     root     268435456 Dec 28 22:52 .pbc.sf
-r--------    1 root     root     262733824 Dec 28 22:52 .sbc.sf
-r--------    1 root     root       4194304 Dec 28 22:52 .vh.sf

~ # ls -al /vmfs/volumes
drwxr-xr-x    1 root     root           512 Jan  1 22:21 .
drwxr-xr-x    1 root     root           512 Jan  1 22:10 ..
lrwxr-xr-x    1 root     root            35 Jan  1 22:21 1TB_WD_Black -> 50de2239-10698620-1956-001517197ad8
drwxr-xr-t    1 root     root          1260 Jan  1 21:28 50de2239-10698620-1956-001517197ad8
drwxr-xr-t    1 root     root          1260 Dec 28 23:08 50de22a5-f4afafa0-f7b5-001517197ad8
drwxr-xr-x    1 root     root             8 Jan  1  1970 50df4c30-82b86e61-8522-001517197ad8
lrwxr-xr-x    1 root     root            35 Jan  1 22:21 640GB_WD_Black_AALS-1 -> 50de22a5-f4afafa0-f7b5-001517197ad8
drwxr-xr-x    1 root     root             8 Jan  1  1970 7cb258e0-f4980796-4918-4766c02ae6a8
drwxr-xr-x    1 root     root             8 Jan  1  1970 8c231eec-aa4ac1c8-4b1d-14cf7ae5845e


~ # ls -al /
drwxr-xr-x    1 root     root           512 Jan  1 22:21 .
drwxr-xr-x    1 root     root           512 Jan  1 22:21 ..
-rw-------    1 root     root            30 Jan  1 22:22 .ash_history
-r--r--r--    1 root     root            20 Aug  2 03:48 .mtoolsrc
lrwxrwxrwx    1 root     root            49 Jan  1 22:10 altbootbank -> /vmfs/volumes/7cb258e0-f4980796-4918-4766c02ae6a8
drwxr-xr-x    1 root     root           512 Jan  1 22:10 bin
lrwxrwxrwx    1 root     root            49 Jan  1 22:10 bootbank -> /vmfs/volumes/8c231eec-aa4ac1c8-4b1d-14cf7ae5845e
-r--r--r--    1 root     root        300059 Aug  2 03:48 bootpart.gz
drwxr-xr-x    1 root     root           512 Jan  1 22:22 dev
drwxr-xr-x    1 root     root           512 Jan  1 22:20 etc
drwxr-xr-x    1 root     root           512 Jan  1 22:10 lib
drwxr-xr-x    1 root     root           512 Jan  1 22:10 lib64
-r-x------    1 root     root         12704 Jan  1 20:50 local.tgz
lrwxrwxrwx    1 root     root             6 Jan  1 22:10 locker -> /store
drwxr-xr-x    1 root     root           512 Jan  1 22:10 mbr
drwxr-xr-x    1 root     root           512 Jan  1 22:10 opt
drwxr-xr-x    1 root     root        131072 Jan  1 22:22 proc
lrwxrwxrwx    1 root     root            22 Jan  1 22:10 productLocker -> /locker/packages/5.1.0
drwxr-xr-x    1 root     root           512 Jan  1 22:10 sbin
lrwxrwxrwx    1 root     root            57 Jan  1 22:10 scratch -> /vmfs/volumes/50de22a5-f4afafa0-f7b5-001517197ad8/.locker
lrwxrwxrwx    1 root     root            49 Jan  1 22:10 store -> /vmfs/volumes/50df4c30-82b86e61-8522-001517197ad8
drwxr-xr-x    1 root     root           512 Jan  1 22:10 tardisks
drwxr-xr-x    1 root     root           512 Jan  1 22:10 tardisks.noauto
drwxrwxrwt    1 root     root           512 Jan  1 22:20 tmp
drwxr-xr-x    1 root     root           512 Jan  1 22:10 usr
drwxr-xr-x    1 root     root           512 Jan  1 22:10 var
drwxr-xr-x    1 root     root           512 Jan  1 22:10 vmfs
drwxr-xr-x    1 root     root           512 Jan  1 22:10 vmimages
lrwxrwxrwx    1 root     root            17 Aug  2 03:48 vmupgrade -> /locker/vmupgrade

a_p_
Leadership
(Accepted Solution)

As you can see from the output

~snip~

drwxr-xr-t    1 root     root          1260 Dec 28 23:08 50de22a5-f4afafa0-f7b5-001517197ad8
drwxr-xr-x    1 root     root             8 Jan  1  1970 50df4c30-82b86e61-8522-001517197ad8
lrwxr-xr-x    1 root     root            35 Jan  1 22:21 640GB_WD_Black_AALS-1 -> 50de22a5-f4afafa0-f7b5-001517197ad8

~snip~

the UUID is that of the existing datastore "640GB_WD_Black_AALS-1".

This proves that when a datastore already exists, a ".locker" folder (i.e. a scratch location) is automatically created on it.

André

ph1b3r
Contributor

You're right.  What seems to happen is that if I start with only the USB stick plugged in, it will use /tmp/scratch... but the second I plug in any other hard drives (including blank ones that I then partition) and reboot, the scratch directory moves to one of the drives.

It's not that I have any real issue with this behavior; I just wish it were documented!  It's very counter-intuitive for the system to suddenly ignore a configured item and create itself a directory elsewhere... especially when the command line doesn't exactly offer a rich toolset for identifying where that is.
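
For anyone else who lands here, the two quickest checks I've found for locating the live scratch directory (the option name comes from KB 1033696; treat the exact vim-cmd syntax as approximate):

~ # ls -ld /scratch
~ # vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation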

Thanks for all of your help!
