This is something I'm in the process of building right now for our infrastructure. I'd like to see some "whys" for everyone's partition sizes if they vary from the norm.
For instance, Steve, in your list you have /boot at 250MB. Why so large? Same goes for /home. What's stored there anymore?
Also, two other things:
\- What are people labeling their remaining space, the part generally allocated as a VMFS partition (e.g. VMFS_LOCAL)? I used to use "local_vmfs" in ESX 2.0, but now that VC2 indexes the storage pools, I'm changing my ESX3 standard to "vmfsX_servername", where X is a letter (i.e. "a, b, c...") to avoid any possible confusion between "vmfs1/2/3" as the VMFS type versus an instance number.
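The letter-suffix scheme above can be sketched as a small shell helper; this is only an illustration of the naming convention, and the hostname and function name are hypothetical:

```shell
#!/bin/sh
# Sketch of the "vmfsX_servername" labeling scheme, where X is a
# letter (a, b, c, ...) rather than a digit, to avoid confusion with
# the "vmfs3" filesystem type. Inputs here are hypothetical examples.
vmfs_label() {
    host="$1"   # short hostname, e.g. "esx01"
    index="$2"  # 1-based instance number of the VMFS volume on that host
    # Map 1 -> a, 2 -> b, ... via the octal ASCII code
    letter=$(printf "\\$(printf '%03o' $((96 + index)))")
    echo "vmfs${letter}_${host}"
}

vmfs_label esx01 1   # vmfsa_esx01
vmfs_label esx01 2   # vmfsb_esx01
```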
\- I have /vmimages partitions in ESX2. I recently read in the docs that "vmimages" is deprecated, which makes sense now that ISO mounts are located on VMFS volumes. So is everyone pretty much eliminating their extra ext3 partitions now?
Thanks! Great thread idea. I haven't seen much reason to steer from the defaults yet, but I'd love to see what everyone else is doing before I finalize my standards. My only change from ESX2 was to partition off /var.
>>For instance Steve, in your list you have /boot at 250MB. Why so large? Same goes for /home. What's stored there anymore?
I have read in a few places that people have had really good luck with 250MB. 100MB is all you need, but my thought is: what's 150MB in the scheme of things here? Just a drop in the bucket. I also wanted a little more than 100MB to allow for any future growth that might be needed, just in case.
part /boot --fstype ext3 --size 200 --ondisk cciss/c0d0
part swap --size 1600 --ondisk cciss/c0d0
part / --fstype ext3 --size 4000 --ondisk cciss/c0d0
part /var --fstype ext3 --size 2000 --ondisk cciss/c0d0
part /tmp --fstype ext3 --size 2000 --ondisk cciss/c0d0
part /vmimages --fstype ext3 --size 10000 --ondisk cciss/c0d0
part None --fstype vmkcore --size 100 --ondisk cciss/c0d0
part None --fstype vmfs3 --size 10000 --grow --ondisk cciss/c0d0
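A quick way to sanity-check the footprint of a layout like the one above is to total the fixed --size values from the kickstart part lines. This is just a sketch; the temp file name is arbitrary, and the --grow partition extends beyond the total shown:

```shell
#!/bin/sh
# Total the fixed --size allocations (in MB) from kickstart part lines.
# The part lines below reproduce the layout from the post above.
cat > /tmp/ks-parts.cfg <<'EOF'
part /boot --fstype ext3 --size 200 --ondisk cciss/c0d0
part swap --size 1600 --ondisk cciss/c0d0
part / --fstype ext3 --size 4000 --ondisk cciss/c0d0
part /var --fstype ext3 --size 2000 --ondisk cciss/c0d0
part /tmp --fstype ext3 --size 2000 --ondisk cciss/c0d0
part /vmimages --fstype ext3 --size 10000 --ondisk cciss/c0d0
part None --fstype vmkcore --size 100 --ondisk cciss/c0d0
part None --fstype vmfs3 --size 10000 --grow --ondisk cciss/c0d0
EOF

# Sum every value that follows a --size flag; --grow then fills the rest.
awk '{ for (i = 1; i <= NF; i++) if ($i == "--size") total += $(i + 1) }
     END { print total }' /tmp/ks-parts.cfg   # 29900
```

So this scheme reserves a minimum of 29,900MB (~29GB) before the vmfs3 partition grows into the remaining disk.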
This is what I've been working with to date; I haven't finalised it yet. I'm moving more toward using an NFS server and may reduce /vmimages to 5000.
Throughout the beta documentation VMware themselves weren't consistent with this one. I think they actually referenced 3 different values in all of their documentation ranging from 100-200MB. Why not play it safe and say 250? :-). I can't remember the last time someone referred to 250 as "large"
This is still a work in progress for me, but what I am planning on at this point is:
vmfs3: whatever is left
The first 3 partitions are primary. I might make /var bigger to guard against the "growing logs hose your server" scenario.
Removing /vmimages and vmswap has left quite a bit of space; I'm still trying to decide the best use of it.
Can't hurt to future proof the boot volume. It seems we will all have plenty of space this time around, as the ESX host turns more and more into the ultimate "runtime" appliance. I miss the vmimages slightly, and maybe will continue to use one. It certainly was handy when needing to get vmdk's in and out of VMFS.
>>Can't hurt to future proof the boot volume. It seems we will all have plenty of space this time around, as the ESX host turns more and more into the ultimate "runtime" appliance. I miss the vmimages slightly, and maybe will continue to use one. It certainly was handy when needing to get vmdk's in and out of VMFS.
This is my thought as well. If Kimono hadn't posted it, I was going to. Disk space is cheap enough. Let's not shoot ourselves in the foot a 2nd time by creating partitions that are too small for current and future installations. By creating a 250MB /boot partition, we're already preparing for VI4.
On VI2 I was also always very generous with /home, /, /tmp, and /var.
Below is my partitioning for reference:
/boot : 100MB
/ : 5,120MB (5GB)
Swap : 540MB
/home : 2,048MB (2GB)
/tmp : 4,096MB (4GB)
/var : 4,096MB (4GB)
Ha! Yes, 250 being stated as "large" was a misstatement on my part.
What I really meant there was 2.5 times the default value seemed like overkill, especially when only ~30MB is actually used.
As for the points regarding VI4 and future proofing: I must disagree.
How many of you are doing in-place upgrades from 2.5 -> 3? (And not just because the boot partition in 2.5 was <50MB; even if you had all the proper partitioning set up prior to VI3, would you really upgrade in place?)
To me, a VM host (when attached to a SAN) is disposable. Performing maintenance by first using VMotion to evacuate all resources lets you make any change you need down the road. Many of you who have responded so far are also Kickstart users... just reload the system in virtually no time based on your new standards, if necessary.
I think the whole point of VI is not to over-engineer everything up front. Yes, the original and primary goal here is to not over-engineer the guests as we've done in the 1-to-1 physical world, but I'm trying to apply that methodology across the entire infrastructure...
I've been playing with ESX 3.x since beta and rc1 - and now I am working on the GA release...
Recommendations have varied from release to release in the PDFs, and there are differences between the automatic partitioning and those recommendations too...
I guess we have been burned by partition recommendations from ESX 2.x which were "fit for purpose" when they were given, but under ESX 3.x have been found to cause some problems, such as the loss of the "Debug" menu in GRUB caused by a 50MB /boot partition.
So personally I am erring on the side of caution and sticking with the over-allocation of disk space from the Beta/RC1 documentation, which changed in the GA documentation.
The way I look at it, the partition table serves a couple of functions: easing backup of the COS (if required) and ensuring the / partition doesn't fill.
Local disk space (if that's where you are installing) is cheap, and with the loss of /vmimages as a required partition there should be plenty of free space:
Here's my partition table, my 2 pence...
/boot     ext3     250   X
n/a       swap     1600  X
/         ext3     5120  X
/var/log  ext3     2048
/tmp      ext3     2048
n/a       vmkcore  100
n/a       vmfs3    XX

Where X = primary
Where XX = fill to remaining space
(Note: no SAN/iSCSI storage in my lab, so I need local VMFS for virtual disks.)
Here's what I've decided on:
/boot - 250MB - Primary
Swap - 1600MB - Primary
/ - 8192MB - Primary
/var - 4096MB
/tmp - 4096MB
/home - 4096MB
vmkcore - 100MB
VMFS - free space
I'm trying to plan the partitions to handle future growth and/or an upgrade to ESX 4 (but who knows what the disk recommendations will be then).
In the grand scheme of things I think 22GB of usage + VMFS is minimal these days, especially if I don't have to worry about disk space down the road.
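The 22GB figure above checks out; summing the fixed sizes in that layout:

```shell
#!/bin/sh
# Sanity check of the ~22GB total: sum the fixed sizes in MB.
# 250 (/boot) + 1600 (swap) + 8192 (/) + 4096 (/var) + 4096 (/tmp)
# + 4096 (/home) + 100 (vmkcore)
echo $((250 + 1600 + 8192 + 4096 + 4096 + 4096 + 100))   # 22430 MB, ~22GB
```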
I am pretty close to going with this, mainly because I prefer rounded numbers: 128, 256, 1024, etc... I am a little worried that root is a little large, but again, as Mike said, local DASD is cheap and I have the space.
part /boot --fstype ext3 --size 256 --ondisk cciss/c0d0 --asprimary
part / --fstype ext3 --size 8192 --ondisk cciss/c0d0 --asprimary
part swap --size 1600 --ondisk cciss/c0d0 --asprimary
part None --fstype vmfs3 --size 10240 --grow --ondisk cciss/c0d0
part None --fstype vmkcore --size 128 --ondisk cciss/c0d0
part /var --fstype ext3 --size 2048 --ondisk cciss/c0d0
part /tmp --fstype ext3 --size 2048 --ondisk cciss/c0d0
I'm in two minds after reading this and agree with both sides; there are a lot of good points made. It's useful to have future-proofed, but on the other hand I've never considered upgrading ESX binaries in place: fresh installs all the way. As ESX becomes more and more like the ultimate appliance, installing from scratch becomes more and more trivial.
Granted, if upgrading is there as an option and your partitioning allows it, then maybe it would get used. Your customer could also dictate it; maybe they'd want to be able to upgrade.
But, I think the "disk is cheap" rule will win out.
Cute thing about vmkcore: the MAX size is 110MB, so no 128MB for me. The install actually failed with the 128MB setting.
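For anyone templating their kickstart sizes, the cap above suggests clamping the vmkcore value; a minimal sketch, noting that the 110MB ceiling comes from this post's experience rather than official documentation:

```shell
#!/bin/sh
# Clamp a requested vmkcore size (MB) to the 110MB ceiling reported
# in this thread. The 110 figure is the poster's observation, not an
# officially documented limit.
vmkcore_size() {
    req="$1"
    max=110
    if [ "$req" -gt "$max" ]; then
        echo "$max"
    else
        echo "$req"
    fi
}

vmkcore_size 128   # 110 (would have failed the install as-is)
vmkcore_size 100   # 100 (left alone)
```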