trevlix
Contributor

Size of virtual appliance?

I've been looking at the community virtual appliances and the uncompressed sizes range from 8 MB to over 700 MB.

What are everyone's opinions on a good size for one? Obviously, the smaller the better, and the size will depend on what is inside, but at what point does it become too big?

26 Replies
bac
Expert

As noted by others in this thread, you should be able to trim that down a lot by removing extraneous modules that are not used by your guest distro.
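As a rough starting point (a sketch, not bac's exact procedure), you can look at where the space is actually going inside the guest before deciding what to strip:

```shell
# Run inside the guest appliance. Lists the 20 biggest directories on the
# root filesystem, in megabytes, biggest first.
du -xm / 2>/dev/null | sort -rn | head -20
```

On an RPM-based guest, `rpm -qa --queryformat '%{SIZE} %{NAME}\n' | sort -rn | head` gives a similar view per installed package.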

andy_mac
Enthusiast

OK - Here's something that no-one has mentioned.

While building my VM and updating the installed packages, the used space has increased very little; however, the size of my expandable vmdk has increased significantly - as has the compressed (zipped) vmdk.

What I think is happening is that the non-zeroed free space has increased (due to temp files, etc., that have been deleted), which in turn increases the size of the compressed image.

To test this I uncompressed an image, deleted ~250 MB of cached RPMs, confirmed with df that the amount of free space had increased by the same amount, then recompressed the image - result: same-sized image (within 5 MB).

Does anyone know of a Linux tool to zero out free space? I know this will increase the size of the uncompressed image (it's not massive anyway), but my real aim is to reduce the size of the compressed image as much as possible.
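The effect is easy to demonstrate: zeroed blocks compress almost to nothing, while leftover "deleted" data compresses poorly. A quick sketch (file names are arbitrary):

```shell
cd "$(mktemp -d)"
# 10 MB of pseudo-random data stands in for stale, deleted file contents.
dd if=/dev/urandom of=leftover bs=1M count=10 2>/dev/null
# 10 MB of zeros stands in for properly zeroed free space.
dd if=/dev/zero of=zeroed bs=1M count=10 2>/dev/null
gzip leftover zeroed
# The zeroed file shrinks to roughly 10 KB; the random one barely shrinks.
wc -c leftover.gz zeroed.gz
```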

-Thoughts???

-Andrew

andy_mac
Enthusiast

OK - after much effort on Google...

In each filesystem that you need to zero out, run the following command:

dd if=/dev/zero of=filler bs=1000

What this does is fill all free space in the filesystem with a single file called filler, written entirely as binary zeros. Once the command predictably dies and moans that the disk is full, you must delete the file:

rm filler

I have run this on a VM (without deleting anything else) and successfully reduced the zipped size from 786881 KB to 390247 KB, which is a hell of a lot easier to download....
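For anyone nervous about running a command that deliberately fills the disk, here is a bounded sketch of the same two steps (the count= is added so dd stops early; on a real appliance you omit it and let dd run until the filesystem is full):

```shell
cd "$(mktemp -d)"
# Write a small, fixed-size file of binary zeros (5 x 1000 bytes).
dd if=/dev/zero of=filler bs=1000 count=5 2>/dev/null
size=$(wc -c < filler)
echo "filler is $size bytes of zeros"
# Delete it afterwards, exactly as in the full-size version.
rm filler
```

On a real appliance, run the pair of commands once per mounted filesystem (/, /var, /home, ...), then shut the VM down and re-compress the vmdk for the saving to show up.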

-Andrew

samwyse
Enthusiast

In each filesystem that you need to zero out, run the following command:

dd if=/dev/zero of=filler bs=1000

Once the command predictably dies and moans that the disk is full, you must delete the file:

rm filler

I have run this on a VM (without deleting anything else) and reduced the zipped size successfully from 786881 kb to 390247 kb, which is a hell of a lot easier to download....

Thanks for the posting. I'll be trying it out in the near future.

I'm going to presume that you're using non-preallocated disks. In VMware Server, at least, there's an option to defragment a virtual disk. What happens if you defragment after zapping the drive space?

Here are some other techniques that I'm using.

I've got two identical VMs, one is a dev system, the other is the "final". As I get things working on my dev box, I mount the other's disks and copy the files into it. This avoids a lot of disk writes to the pristine copy.

Some people use RAM-disks for /tmp, but I've created an independent non-persistent disk. Either has the feature that /tmp is cleared after each reboot; mine allows the possibility of pre-loading /tmp with caches and such.

I've also thought about making my boot drive non-persistent, and putting all of my data on a second drive that contains /home. That would make it easier to clean up *everything*, not just /tmp.
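For reference, the disk mode described here is set in the VM's .vmx file; a sketch (the device slot and file name are assumptions, adjust to your own configuration):

```
# Hypothetical .vmx fragment: a second SCSI disk for /tmp that discards
# all writes at power-off.
scsi0:1.present = "TRUE"
scsi0:1.fileName = "tmp-disk.vmdk"
scsi0:1.mode = "independent-nonpersistent"
```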

SamTzu
Contributor

I doubt that there can be many "really" useful apps that come fully preconfigured. Even the most basic apps usually still have "some" configuration that has to be done to get them to work. The simpler the better. I hope that the VApp (VirtualApp:) that wins will be the most popular (= most used) one. It should be noted that one of the three entry categories in the competition is "Consumer". (Not just for Developers or Server admins.) I believe that is the most important category. If we can come up with a VApp that will have MILLIONS of users, that would truly be worth the effort (and the prize).

Sam

andy_mac
Enthusiast

I'm going to presume that you're using non-preallocated disks.

That is correct; however, this doesn't seem to make the vmdk any/much bigger...

In VMware Server, at least, there's an option to defragment a virtual disk. What happens if you defragment after zapping the drive space?

Probably nothing. Haven't tried it. I thought that the defrag option just defrags the vmdk within NTFS, although I could be wrong - it has happened in the past... (in fact, I welcome confirmation or correction on this).

Some people use RAM-disks for /tmp, but I've created an independent non-persistent disk. Either has the feature that /tmp is cleared after each reboot; mine allows the possibility of pre-loading /tmp with caches and such.

I like your idea - I might have to use that in my next release - not just for /tmp, but for /var/cache/yum as well. My entry has already been posted and I have no plans to re-submit...
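If anyone wants the RAM-disk variant instead, the usual approach on Linux is a tmpfs line in /etc/fstab (the size here is an assumption; pick what fits your appliance):

```
# /etc/fstab - RAM-backed /tmp, emptied automatically at every boot
tmpfs  /tmp  tmpfs  defaults,size=256m  0 0
```

Anything on tmpfs vanishes at power-off, which is the point for /tmp; the pre-loadable non-persistent disk keeps the advantage of shipping warm caches inside the image.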

I've also thought about making my boot drive non-persistent, and putting all of my data on a second drive that contains /home. That would make it easier to clean up *everything*, not just /tmp.

Maybe, but that could be a pain for configs & updates...

-Andrew

http://www.global-domination.org

samwyse
Enthusiast

I've also thought about making my boot drive non-persistent, and putting all of my data on a second drive that contains /home. That would make it easier to clean up *everything*, not just /tmp.

Maybe, but that could be a pain for configs & updates...

Actually, I think it would make things easier. One disk would be a non-persistent root, containing your typical 3rd-party LAMP installation; the other would be persistent and have your code, data, etc. Strategic parts of the first disk's filesystem would be sym-linked into the second drive; i.e. "/var/http" would be a sym-link to "/home/http", etc. You could update the two disks independently of each other, easing your migration headaches. There are versions of Knoppix that, IIRC, do something similar to this.
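The symlink layout can be sketched like this (the paths are illustrative, using throwaway directories in place of the two disks):

```shell
cd "$(mktemp -d)"
# 'root' stands in for the non-persistent boot disk, 'home' for the
# persistent data disk.
mkdir -p root/var home/http
echo "hello" > home/http/index.html
# On the real appliance this would be: ln -s /home/http /var/http
ln -s ../../home/http root/var/http
# Reading through the link on the "boot disk" reaches the persistent data.
cat root/var/http/index.html
```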
