BenH75
Contributor

Can't remove dumpfile created on vmfs volume - 5.5

I have a 5.5 server running with only local storage that I recently upgraded from 5.1.

Just a few days ago I had inventoried all of my VM folders on the following partition:

/vmfs/volumes/guest/

Today, while cleaning up some more defunct VMs, I find the following:

/vmfs/volumes/guest/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile

I have no idea what created this...I understand there are dumpfiles that may be created on PSODs, but I can't find any reference to an actual folder called "vmkdump."

When I try to delete it I get the following:

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # rm -rf 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile

rm: can't remove '44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile': Device or resource busy

I've looked through various KBs and I can't figure out where this is coming from (the rough commands I used are below):

- lsof doesn't list this file or directory

- all VMs have been shut off (maintenance mode)

- the server has been rebooted multiple times
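
For reference, these were roughly the commands behind those checks (from memory, so the exact invocations may have differed slightly):

~ # lsof | grep -i dumpfile

(no output - nothing appears to hold the file open)

~ # esxcli vm process list

(empty - no running VMs)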

I have found the following info that may be of help:

vmkernel.log shows (seemingly at boot):

vmkernel.log:2013-10-17T03:13:28.048Z cpu3:70444)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:13:28.048Z cpu3:70444)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:17:10.392Z cpu6:70981)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:17:10.392Z cpu6:70981)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:33:22.900Z cpu0:73786)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:33:22.900Z cpu0:73786)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:40:31.233Z cpu7:75668)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:40:31.233Z cpu7:75668)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:41:25.409Z cpu3:75805)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T03:41:25.409Z cpu3:75805)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T04:36:02.444Z cpu3:36299)FS3: 196: <START 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

vmkernel.log:2013-10-17T04:36:02.444Z cpu3:36299)FS3: 198: <END 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile>

I also get the following output with vmkfstools:

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # vmkfstools -D 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile

Lock [type 10c00001 offset 206995456 v 2409, hb offset 3920384

gen 27, mode 1, owner 525f6851-c5108b26-ce23-001e4f204bc4 mtime 645

num 0 gblnum 0 gblgen 0 gblbrk 0]

Addr <4, 465, 104>, gen 2238, links 1, type reg, flags 0, uid 0, gid 0, mode 100666

len 166723584, nb 159 tbz 0, cow 0, newSinceEpoch 159, zla 1, bs 1048576

According to KB 10051, the final segment of the owner UUID above (001e4f204bc4) is meant to indicate the MAC address of the lock holder. What makes absolutely no sense to me is that the address listed there belongs to the second internal NIC in this system:

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # esxcli network nic list

Name    PCI Device     Driver  Link  Speed  Duplex  MAC Address         MTU  Description

------  -------------  ------  ----  -----  ------  -----------------  ----  -------------------------------------------------------------

vmnic0  0000:003:00.0  bnx2    Up     1000  Full    00:1e:4f:20:4b:c2  1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T

vmnic1  0000:007:00.0  bnx2    Down      0  Half    00:1e:4f:20:4b:c4  1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T

As you can see this NIC is in down state...it is not plugged in and has never (recently) been used for anything.
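
For anyone checking my math, the mapping is just that final UUID segment reformatted as a MAC; a quick busybox-style check:

~ # echo 001e4f204bc4 | sed 's/../&:/g; s/:$//'

00:1e:4f:20:4b:c4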

EDIT - Holy coincidence Batman!  It looks like the MAC address for my second card happens to match exactly the final UUID section of some of the VMFS volumes!

/vmfs/volumes # esxcli storage filesystem list

Mount Point                                        Volume Name  UUID                                 Mounted  Type             Size           Free

-------------------------------------------------  -----------  -----------------------------------  -------  ------  -------------  -------------

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4  guests       524f80a6-f18c10c7-2c16-001e4f204bc4     true  VMFS-5  1999038840832  1153078132736

/vmfs/volumes/5250f9ba-1c2084a1-ccc8-001e4f204bc4  swap         5250f9ba-1c2084a1-ccc8-001e4f204bc4     true  VMFS-5   493384368128   492335792128

/vmfs/volumes/52583d27-570697be-161b-001e4f204bc4               52583d27-570697be-161b-001e4f204bc4     true  vfat       4293591040     4161273856

/vmfs/volumes/d789b775-b425486f-ac6f-1a3cee25ec3d               d789b775-b425486f-ac6f-1a3cee25ec3d     true  vfat        261853184       13754368

/vmfs/volumes/492adf74-59b63f60-76fa-01385be4c120               492adf74-59b63f60-76fa-01385be4c120     true  vfat        261853184       97005568

/vmfs/volumes/506469eb-d746eb51-3d04-001e4f204bc4               506469eb-d746eb51-3d04-001e4f204bc4     true  vfat        299712512       99147776

So the NIC information above seems to be a red herring....and the owner of the directory in question seems to simply be the volume itself (?)

I am concerned not just because I want to remove this file/directory from my datastore (or at least understand the need for it), but because, from the small amount of cryptic information I have been able to ascertain so far, it doesn't sound good that a coredump is/was being created.

To the best of my knowledge, this system doesn't have any other issues, it boots up fine and apart from some other unrelated, minor errors I have seen in the various logs, I don't know what could be causing this.

Advice is very much welcome as to what is going on!

7 Replies
zXi_Gamer
Virtuoso

To the Batmobile Robin.......

The UUID you are seeing might be because, some time back, you might have made vmnic1 the primary NIC (with vmnic0 down) when the coredump partition was created.

However, try the following and let me know how it goes:

esxcli system coredump file list

This should list the coredump file that is active and configured, and something tells me it will have the UUID ending in 00:1e:4f:20:4b:c4.

Now, try to remove the file:

esxcli system coredump file remove -f  <filepath from the first command>

BenH75
Contributor

Golly Gee Batman - it looks like we're getting somewhere:

~ # esxcli system coredump file list

Path                                                                                                     Active  Configured       Size

-------------------------------------------------------------------------------------------------------  ------  ----------  ---------

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile    true        true  166723584

~ # esxcli system coredump file remove -f /vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile

Dump file /vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile currently active, --force is required

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # esxcli system coredump file set -u

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # esxcli system coredump file list

Path                                                                                                     Active  Configured       Size

-------------------------------------------------------------------------------------------------------  ------  ----------  ---------

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile   false       false  166723584

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4 # esxcli system coredump partition set -u

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4 # esxcli system coredump partition list

Name                                    Path                                                        Active  Configured

--------------------------------------  ----------------------------------------------------------  ------  ----------

naa.6001e4f021c59c0014f6b8ea0dd96616:7  /vmfs/devices/disks/naa.6001e4f021c59c0014f6b8ea0dd96616:7   false       false

So, it looks like I got it removed now that you introduced me to the coredump command. A few more questions though:

1) Was this file being created normal for a normally functioning 5.5 system?  What is its actual purpose?  Is the fact that it was there indicative of me having (or having had) a problem?

2) Should I re-enable it?  I don't recall it ever being there before, so not sure why I need it...

3) Can you explain further your opening statement: "The UUID you are seeing might be because sometime back you might have made vmnic1 as the primary nic and vmnic0 to be down and created the coredump partition" ?

I am almost completely sure that vmnic1 was never used since I upgraded to 4.5 or 5.1 (that was a full reinstall). I'm not sure why the UUID on those 4 partitions matches the MAC on the NIC... could it be anything but coincidence?

Or does ESX actually create the UUID based on the MAC of the NIC and this is normal?

I'm also very unclear on how/why the dump file was in the location it was.  I ran the following (before I unconfigured/removed the file):

~ # esxcli system coredump partition get

   Active: naa.6001e4f021c59c0014f6b8ea0dd96616:7

   Configured: naa.6001e4f021c59c0014f6b8ea0dd96616:7

~ # ls -alh /vmfs/devices/disks

total 4879940513

drwxr-xr-x    1 root     root         512 Oct 17 15:23 .

drwxr-xr-x    1 root     root         512 Oct 17 15:23 ..

-rw-------    1 root     root      464.5G Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616

-rw-------    1 root     root      896.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:1

-rw-------    1 root     root        4.0G Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:2

-rw-------    1 root     root      459.6G Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:3

-rw-------    1 root     root        4.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:4

-rw-------    1 root     root      250.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:5

-rw-------    1 root     root      250.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:6

-rw-------    1 root     root      110.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:7

-rw-------    1 root     root      286.0M Oct 17 15:23 naa.6001e4f021c59c0014f6b8ea0dd96616:8

-rw-------    1 root     root        1.8T Oct 17 15:23 naa.6001e4f021c59c0017f1e3290cbdcf6b

-rw-------    1 root     root        1.8T Oct 17 15:23 naa.6001e4f021c59c0017f1e3290cbdcf6b:1

lrwxrwxrwx    1 root     root          36 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036 -> naa.6001e4f021c59c0014f6b8ea0dd96616

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:1 -> naa.6001e4f021c59c0014f6b8ea0dd96616:1

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:2 -> naa.6001e4f021c59c0014f6b8ea0dd96616:2

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:3 -> naa.6001e4f021c59c0014f6b8ea0dd96616:3

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:4 -> naa.6001e4f021c59c0014f6b8ea0dd96616:4

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:5 -> naa.6001e4f021c59c0014f6b8ea0dd96616:5

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:6 -> naa.6001e4f021c59c0014f6b8ea0dd96616:6

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:7 -> naa.6001e4f021c59c0014f6b8ea0dd96616:7

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0014f6b8ea0dd96616504552432036:8 -> naa.6001e4f021c59c0014f6b8ea0dd96616:8

lrwxrwxrwx    1 root     root          36 Oct 17 15:23 vml.02000000006001e4f021c59c0017f1e3290cbdcf6b504552432036 -> naa.6001e4f021c59c0017f1e3290cbdcf6b

lrwxrwxrwx    1 root     root          38 Oct 17 15:23 vml.02000000006001e4f021c59c0017f1e3290cbdcf6b504552432036:1 -> naa.6001e4f021c59c0017f1e3290cbdcf6b:1

So it is showing the naa.6001e4f021c59c0014f6b8ea0dd96616:7 partition is only 110 MB and "esxcli system coredump partition list" shows it as the only "partitions on the system that have a partition type matching the VMware Core partition type."

Yet, it was showing up under my VMFS /guests datastore as /vmkdump/xxxx.dumpfile?

Was/is vmkdump really just a mount point under my VMFS partition to a separate (otherwise invisible) coredump partition?

If this were a standard Linux system, "mount" would easily show me this, but I'm not sure how to get that same type of info (more than I showed above) from ESXi.
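
The closest analogues I've come across so far (not sure these are the "right" tools, just what I've tried):

~ # esxcli storage filesystem list

(rough equivalent of "mount" - same output as I pasted above)

~ # vmkfstools -P /vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4

(shows the device/partitions backing a given VMFS volume)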

Thanks!

UPDATE:

When I rebooted my server I got the following:

/dev/disks # esxcli system coredump partition get

   Active: naa.6001e4f021c59c0014f6b8ea0dd96616:7

   Configured: naa.6001e4f021c59c0014f6b8ea0dd96616:7

/dev/disks # esxcli system coredump partition list

Name                                    Path                                                        Active  Configured

--------------------------------------  ----------------------------------------------------------  ------  ----------

naa.6001e4f021c59c0014f6b8ea0dd96616:7  /vmfs/devices/disks/naa.6001e4f021c59c0014f6b8ea0dd96616:7    true        true

So it looks like it reactivated the partition even though I specifically disabled it. There is no dump directory/file anymore, however... so that part is good.

I did find the following article which had some additional good info on creating dump partitions:

HOW TO: Configure Shared Diagnostic Partition on VMware ESX host « vStrong.info

However, it still doesn't really answer the question as to how/why exactly the "vmkdump" folder was created in my guests datastore. There is not a single reference on Google that talks about the existence/creation of a folder called "vmkdump" - only references to a legacy vmkdump utility?

Also, can't find any reference to *.dumpfile named files - or even any indication that the coredump partition is *supposed* to be visible at all.  I have however found the following in my /var/core partition:

/vmfs/volumes/52583d27-570697be-161b-001e4f204bc4/core # ls -l

total 94608

-rwx------    1 root     root      31891456 Oct 16 06:14 hostd-worker-zdump.001

-rwx------    1 root     root      31825920 Oct 16 07:30 hostd-worker-zdump.002

-rwx------    1 root     root      33161216 Oct 17 04:35 hostd-worker-zdump.003

Should I be concerned?

zXi_Gamer
Virtuoso

Not to worry BoyWonder..

IIRC, the dumpfile in VMFS was introduced in 5.1. This is because, earlier, the coredump partition was set at a default of 110MB. That would hold good for a minimal system of 8GB or 16GB, but with bigger systems in play, supporting as much as 2TB of memory, such a corefile size would not be sufficient.

Hence, if you have a heavy system where the default 110MB could not hold the coredump, then you can use the dumpfile in the vmfs volume.

Otherwise, you can deactivate the one on the storage side with an esxcli command and use the VMware coredump partition created during installation, in your case naa.6001e4f021c59c0014f6b8ea0dd96616:7.
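
From memory, something along these lines (please double-check the exact flags on your build before running):

esxcli system coredump file set -u

esxcli system coredump partition set -p naa.6001e4f021c59c0014f6b8ea0dd96616:7

esxcli system coredump partition set -e true

The first command unconfigures the VMFS dump file; the next two point the coredump at the install-time partition and activate it.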

No, you need not be concerned, since it is a method to capture the full coredump in case of a system failure.

Cheers Robin..

BenH75
Contributor

Still a bit confused Caped Crusader...

So there are two options?  A dump partition *or* a .dumpfile ?

If you look at my update to my last post, you'll see that I used esxcli to deactivate the partition, but that it got re-added after reboot.

Also, since I was apparently using the *partition*... why was there a /vmkdump folder in my VMFS?

In other words...the way I am understanding it to work normally is that there is a "hidden" coredump partition that then copies the dumps over to /var/core.

You're not supposed to have a /vmkdump folder created... I can't find any information at all about this method.

Also, and probably most importantly... how worried should I be that this file existed? Not only because I don't know why it was configured that way... but does it indicate I actually had a coredump and crash at some point?

thanks again

zXi_Gamer
Virtuoso

Yes NightWing... the coredump-to-file feature is from 5.5.

System size increases lead to larger ESXi core dumps. The coredump partition for new installations of ESXi 5.5 is 2.5GB. For upgrades from previous releases to ESXi 5.5, the core dump partition is limited to 100MB for ESXi crashes. For many large systems, this limit is not enough space and the core dump log files are truncated. The limit has an impact on the ability to triage ESXi system and kernel issues.

For upgrades with limits on 100MB partitions, during the boot process the system might create a dump file. If it does not create a dump file, you can manually create one with the esxcli system coredump file command.
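
For the manual route, from memory (verify the flags against your build):

esxcli system coredump file add

esxcli system coredump file list

esxcli system coredump file set -p <filepath from the list command>

The add command creates an appropriately sized dump file on a suitable datastore, and the set command activates it.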

The partition which got re-added is NOT the VMFS one, but the coredump partition which is created during ESXi installation.

No, you would not have /vmkdump created if you did not have a coredump file configured on VMFS.

If you suspect that your ESXi has crashed somewhere, look at /var/core. If any coredump is located there, then your box might have dumped at some point; else you are good to go.

Also, with a VMFS coredump file configured, once you delete the file, a new file will be put in place if and only if your box has panicked or dumped core.

vSphere 5.5 Documentation Center

BenH75
Contributor

Nice sleuthing Detective....

This makes more sense...would be nice if the documentation gave more info on:

"...during the boot process the system might create a dump file. If it does not create a dump file, you can manually create one with the esxcli system coredump file command."

Why "might" it?  The only think I have in /var/core is:

/vmfs/volumes/52583d27-570697be-161b-001e4f204bc4/core # ls -l

total 104748

-rwx------    1 root     root      33943552 Oct 18 04:27 hostd-worker-zdump.000

-rwx------    1 root     root      37683200 Oct 18 04:03 hostd-worker-zdump.002

-rwx------    1 root     root      35635200 Oct 18 04:26 hostd-worker-zdump.003

No vmkernel dumps... and I can't find any info at all about these hostd dumps either - and vmkdump doesn't work on these, only on kernel dumps.

So, according to your advice, I shouldn't have much to worry about because I don't have a kernel dump...BUT...there *was* a .dumpfile actually created there at one point...which leads me to believe that something did go wrong. 😕

Only other thing I am curious about is the output of this command:

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # vmkfstools -D 44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile

Lock [type 10c00001 offset 206995456 v 2409, hb offset 3920384

gen 27, mode 1, owner 525f6851-c5108b26-ce23-001e4f204bc4 mtime 645

num 0 gblnum 0 gblgen 0 gblbrk 0]

Addr <4, 465, 104>, gen 2238, links 1, type reg, flags 0, uid 0, gid 0, mode 100666

len 166723584, nb 159 tbz 0, cow 0, newSinceEpoch 159, zla 1, bs 1048576

Any idea what that owner UUID might have been that was locking down the file?  I mean obviously it was locked by the system being configured as:

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump # esxcli system coredump file list

Path                                                                                                     Active  Configured       Size

-------------------------------------------------------------------------------------------------------  ------  ----------  ---------

/vmfs/volumes/524f80a6-f18c10c7-2c16-001e4f204bc4/vmkdump/44454C4C-4400-1048-805A-C7C04F4C4631.dumpfile    true        true  166723584

But I'm just curious as to what the UUID in the owner field might have been referencing.
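
One guess: I believe there is an esxcli command that prints the host's own UUID, which might be what appears in that owner field (just a hunch, I haven't verified it):

~ # esxcli system uuid get

(if this matches the owner above, the file was simply locked by this host itself)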

Thanks again for all your insight!

chrklee
Enthusiast

Hi,

let me clarify a little bit why we added the dump file to ESXi.

As was mentioned earlier, the coredump partition of older ESX(i) hosts is only 110 MB in size, and we have seen coredumps larger than that when collected through netdump, as well as incomplete coredumps in deployments where only the 110 MB coredump partition was available. With the introduction of ESXi 5.1 we enabled an option to use larger coredump partitions through the boot option "diskDumpSlotSize"; however, the default partition size still remained at 110 MB, which means a customer needs to reconfigure the host manually. See KB 2012362 [1] for details.

To avoid the manual reconfiguration step, we introduced with ESXi 5.5 the coredump file feature. It allows ESXi to determine whether the current host might try to generate a coredump that won't fit into the coredump partition. If such a condition has been identified, ESXi will automatically create the file /vmfs/volumes/<some datastore>/vmkdump/<HW-UUID>.dumpfile with the appropriate size.

This means the mere presence of the *.dumpfile is no indication of a problem with ESXi; it just indicates that the host is prepared for the case of a crash of ESXi and should enable VMware support to retrieve a complete coredump. In case of a crash, ESXi will extract the coredump out of the configured dump file and create a vmkernel-zdump* file in /var/core during the first reboot following the crash.
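
In other words, after a crash and the subsequent reboot, you would expect to find the extracted dump with something like:

ls -l /var/core/vmkernel-zdump*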

For new ESXi 5.5 deployments the coredump partition got increased to 2.5 GB, so that for most fresh deployments a coredump file wouldn't be required.

If a customer wants to explicitly disable the coredump file and avoid the auto-generation during reboot, the boot option "autoCreateDumpFile" can be set to FALSE:

# List the current settings for the autoCreateDumpFile boot option

esxcli system settings kernel list -o autoCreateDumpFile

Name                Type  Description                                                                            Configured  Runtime  Default

------------------  ----  -------------------------------------------------------------------------------------  ----------  -------  -------

autoCreateDumpFile  Bool  If enabled and if no suitable dump partition or dump file exists, create a dump file.  TRUE        TRUE     TRUE  

# Change the boot option

esxcli system settings kernel set -s autoCreateDumpFile -v FALSE

# Verify that the configuration change took place

esxcli system settings kernel list -o autoCreateDumpFile

Name                Type  Description                                                                            Configured  Runtime  Default

------------------  ----  -------------------------------------------------------------------------------------  ----------  -------  -------

autoCreateDumpFile  Bool  If enabled and if no suitable dump partition or dump file exists, create a dump file.  FALSE       TRUE     TRUE  

As I mentioned earlier, the HW-UUID is used to associate a dump file to a specific host. A customer can find out what the host's UUID is by executing the following command:

esxcli hardware platform get

Platform Information

   UUID: 0x34 0x33 0x38 0x34 0x37 0x34 0x53 0x55 0x45 0x32 0x35 0x32 0x52 0x30 0x57 0x46

   Product Name: ProLiant ML350 G6

   Vendor Name: HP

   Serial Number: USE252R0WF     

   IPMI Supported: true

You can see that the HW UUID defines the beginning of the file name:

esxcli system coredump file list

Path                                                                                                     Active  Configured       Size

-------------------------------------------------------------------------------------------------------  ------  ----------  ---------

/vmfs/volumes/50e7202d-cddb6e78-1bba-b4b52f6983b4/vmkdump/34333834-3734-5355-4532-353252305746.dumpfile    true        true  251658240
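
In fact, the entire base name is simply the UUID bytes concatenated and grouped like a UUID string. You can verify this by stripping the 0x prefixes and the spaces:

echo "0x34 0x33 0x38 0x34 0x37 0x34 0x53 0x55 0x45 0x32 0x35 0x32 0x52 0x30 0x57 0x46" | sed 's/0x//g; s/ //g'

34333834373453554532353252305746

Grouped 8-4-4-4-12, this is exactly 34333834-3734-5355-4532-353252305746.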

Thanks,

Christoph

[1] VMware KB 2012362: ESXi hosts with larger workloads may generate partial core dumps