VMware Cloud Community
scale21
Enthusiast

ESXi boot partitioning question

We started partitioning our hosts with a 4 GB boot-from-SAN LUN.

This worked fine. Later we went back through and expanded the 4 GB LUN to 10 GB, which gave us a slightly larger VMFS volume on the boot LUN. We have been storing each host's logs there via the advanced syslog setting. Probably not best practice, but it is what it is.

Recently I provisioned a new host. I created a 10 GB LUN right away and installed ESXi to it. When that was complete, I went to create my VMFS volume on it and could only create a tiny 1.9 GB volume, which I thought was odd.

I see a number of larger "Legacy MBR" partitions on this host which are bigger than on my previously deployed hosts. I am not sure why the partitions are larger in this setup versus setting the LUN to 4 GB and expanding it later. I am wondering if I can somehow delete or remove those Legacy MBR partitions, or if it would be easier to blow away the host, reload it on a 4 GB partition, and then expand it as I have the others.

My fear is that the logs will overrun the available space on this small 1.9 GB VMFS partition.
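
For reference, we point each host's logs at that datastore via the Syslog.global.logDir advanced setting. From the ESXi Shell, something like the following should show and change it (a sketch only; the datastore path is a placeholder and the exact syntax may vary slightly by build):

esxcli system syslog config get
esxcli system syslog config set --logdir=/vmfs/volumes/<boot-lun-datastore>/logs
esxcli system syslog reload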

4 Replies
Techie01
Hot Shot

When you installed ESXi on the 4 GB LUN, I am sure that no scratch partition was created, and most probably no vmkcore partition either. To confirm this, post the output of: partedUtil getptbl /vmfs/devices/disks/<boot LUN naa id>

The scratch partition is number 2 and the vmkcore partition is number 9.

The ESXi installer needs a minimum of 5.2 GB of free space on the boot LUN to create a scratch and a VMFS partition.

When you provided a 10 GB disk as the boot LUN, the installer should have created the scratch and core dump partitions, and that would have consumed the additional space.
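
If you want to see where scratch currently points on one of the 4 GB hosts, I believe you can check the ScratchConfig.CurrentScratchLocation advanced setting in the vSphere Client, or from the ESXi Shell with something like this (treat it as a sketch, not verified on your build):

vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation

On a host with no scratch partition this typically points to a ramdisk under /tmp/scratch rather than persistent storage.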

The following is the output from my machine for the boot disk:

partedUtil getptbl /vmfs/devices/disks/<naa id>

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128 --> boot partition

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0 --> bootbank

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0 --> bootbank

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0 --> coredump partition

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0 --> locker/tools image

9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0 --> coredump (for large dump space)

2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0 --> scratch (4 GB)

3 15472640 585871930 AA31E02A400F11DB9590000C2911D1B8 vmfs 0 --> vmfs (on the free space)
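
For reference, assuming standard 512-byte sectors, you can work the sizes out from the start/end sector columns:

partition 2 (scratch): (15472639 - 7086080 + 1) x 512 bytes ≈ 4 GB
partition 9 (large coredump): (7086079 - 1843200 + 1) x 512 bytes ≈ 2.5 GB
partition 7 (small coredump): (1257471 - 1032224 + 1) x 512 bytes ≈ 110 MB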

scale21
Enthusiast

Thank you. That helped a lot.

Here is the host that was deployed directly on the 10 GB boot LUN from the start:

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

3 15472640 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

As you can see, partition #2 is there, and so is #9.

Here is the 4 GB host that was deployed and later expanded:

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

2 1843200 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Here I have no #9.
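
If I am doing the math right (assuming 512-byte sectors), that also explains the VMFS size difference: on the new host the VMFS partition only spans sectors 15472640-20971486, which is about 2.6 GB before VMFS overhead (hence the tiny volume I could create), while on the expanded host it spans sectors 1843200-20971486, which is roughly 9 GB.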

With that said, is there any risk or issue with not having the vmkDiagnostic core dump partition on our 4 GB hosts that were expanded?

I assume that if one of these hosts has an issue, we won't be able to pull core dump info, since the system has no place to put it in the event of a purple screen?

I assume you want this partition.

a_p_
Leadership

My guess is that VMware needed to increase the diagnostic partition's size from 110 MB to 2.5 GB to be able to collect sufficient data for troubleshooting purposes. If that's the case, you shouldn't experience issues in production due to the "missing" partition. I'm not aware of a recommendation to reconfigure (reinstall) a host because of this.
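
If you want to verify which partition a host is actually using for core dumps, the following should show it from the ESXi Shell (commands from memory, so double-check the output on your build):

esxcli system coredump partition get
esxcli system coredump partition list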

For some more information see http://www.virtuallyghetto.com/2014/06/two-coredump-partitions-in-esxi-5-5.html

André

scale21
Enthusiast

Interesting. This solves the mystery. I kind of want to go back and reload everything now, but given that we use the 1000v and other things, it probably isn't worth it at this time. I will continue forward with my 10 GB host as is, since this appears to be normal for a fresh install on a larger boot LUN.
