Fixing VM after extending a snapshotted disk

VMware ESX Server 3.5. The VM in question is a Windows Server 2003 DC, a critical production server. I would really appreciate insight from anyone who has been in this situation before! Note: I did do the cleanup as explained in KB 1646892, see below.

Two VMDKs, first is 60GB, second is 12 GB. I extended them to 68GB and 20GB respectively, just a few minutes after taking the snapshot (sigh).

Alright, had I known this was a BAD idea, I wouldn't have done it. Anyway, here is the situation.

0. Needed to increase disk space on the 2 physical VMDKs the VM was using. Usual plan: shut down the server, extend the VMDKs, boot the server, extend the guest filesystems to fill the new space.

1. Before doing this, I snapshotted the VM. Oh the irony--this is what caused the big mess. I of course snapshotted it to give myself a roll-back in case of disaster. Anyway, the VM has only this 1 snapshot on it.

2. I shut down the VM and extended the disks using vmkfstools.
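For reference, the extend step looks like this on ESX 3.5 (datastore path is a placeholder for this VM's actual location, and unlike my case, this should only ever be run with no snapshots present):

```shell
# Extend the first disk to 68 GB (run with the VM powered off)
vmkfstools -X 68G /vmfs/volumes/datastore1/dvt01/dvt01.vmdk

# Extend the second disk to 20 GB
vmkfstools -X 20G /vmfs/volumes/datastore1/dvt01/dvt01_1.vmdk
```

Note that `vmkfstools -X` grows the flat extent and updates the descriptor, but it does not know anything about snapshot chains, which is exactly why extending under a snapshot breaks the parent/child relationship.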

3. Hit power-on in the Virtual Infrastructure Client. Got a big bad error (I didn't know you aren't supposed to extend disks when the VM has a snapshot on it). Tried to remove the snapshot -- got an error message, but the snapshot disappeared from Snapshot Manager. Tried powering on the VM again and got: "Cannot open the disk '................dvt01-000001.vmdk' or one of the snapshot disks it depends on. Reason: The parent virtual disk has been modified since the child was created."

4. With the VM powered off, found this KB: . Followed the KB carefully, at least I believe so: I edited 'dvt01.vmdk' and specified 126022602 as the sector count (the old 60GB size), and edited 'dvt01_1.vmdk' to have the old sector count of 25165824 (the 12GB disk).

dvt01.vmdk's relevant line now reads:

# Extent description

RW 126022602 VMFS "dvt01-flat.vmdk"

dvt01_1.vmdk's relevant line now reads:

# Extent description

RW 25165824 VMFS "dvt01_1-flat.vmdk"
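In case anyone wants to sanity-check those RW values: the extent size in a VMDK descriptor is in 512-byte sectors, so a disk of exactly N GiB should be N × 2097152 sectors (1 GiB = 1073741824 bytes ÷ 512). Plain arithmetic, assuming 512-byte sectors:

```shell
# 1 GiB / 512-byte sectors = 2097152 sectors per GiB
echo $((12 * 2097152))   # exactly-12-GiB disk -> 25165824 sectors
echo $((60 * 2097152))   # exactly-60-GiB disk -> 125829120 sectors
echo $((68 * 2097152))   # new 68-GiB size    -> 142606336 sectors
```

The 25165824 on the second disk matches an exact 12 GiB. Interestingly, the first disk's 126022602 does not match an exact 60 GiB (125829120), which suggests that disk was not originally created at a round size -- the value restored must match whatever the descriptor held before the extend, not the nominal size.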

After making the changes according to the KB, I was able to boot the VM. However, two problems remain.

First, when I SSH into the ESX server and look at the VM's directory, I still see all the little snapshot files (the -000001 delta and descriptor files), even though the VI Client shows no snapshots. So I have a snapshot inconsistency problem.

Second, Disk Management inside the Windows guest reports the NEW, enlarged disk sizes. That matches the flat files in the VM's directory, but it is inconsistent with the VI Client, which still reports the old sizes. So I have a VMDK size-reporting problem too.

Ultimately I need to fix these two inconsistencies. Once that is done and the VM is in a good state, I will expand the disks from their old sizes of 60GB and 12GB to 68GB and 20GB, or possibly more. With the snapshot/VMDK inconsistencies outstanding, I'm afraid that if I expand the guest's filesystems now, my VM will go down in flames: I think the KB procedure I followed, by resetting the descriptor sizes to the old values, leaves the new space at the end of the VMDKs "invalid". If the guest attempts to use it, I bet VMware will panic and halt the VM (bad!). So I am not touching the filesystems until the VMware side is worked out. However, we are low on space, so this is a pressing matter. Thankfully the VM is running at the moment, so I can at least go to bed for now.

Please help or offer insight! I can post any config files needed. Here is a listing of the VM's directory (as of right now):

# ls -lh
total 91G
drwxr-xr-x 1 root root  700 Oct 21 16:05 backup
-rw------- 1 root root 624M Oct 21 17:55 dvt01-000001-delta.vmdk
-rw------- 1 root root  307 Oct 21 16:09 dvt01-000001.vmdk
-rw------- 1 root root 1.5G Oct 21 16:07 dvt01-0328ac7a.vswp
-rw------- 1 root root  64M Oct 21 17:54 dvt01_1-000001-delta.vmdk
-rw------- 1 root root  249 Oct 21 16:09 dvt01_1-000001.vmdk
-rw------- 1 root root  20G Oct 21 14:10 dvt01_1-flat.vmdk
-rw------- 1 root root  400 Oct 21 16:06 dvt01_1.vmdk
-rw------- 1 root root  68G Oct 21 14:10 dvt01-flat.vmdk
-rw------- 1 root root 8.5K Oct 21 16:07 dvt01.nvram
-rw------- 1 root root  399 Oct 21 16:06 dvt01.vmdk
-rw------- 1 root root  512 Oct 21 15:27 dvt01.vmsd
-rw-r--r-- 1 root root 1.8K Oct 21 16:07 dvt01.vmx
-rw------- 1 root root  260 Oct 21 14:22 dvt01.vmxf
-rw-r--r-- 1 root root  43K Sep 17 02:51 vmware-10.log
-rw-r--r-- 1 root root 151K Oct 21 14:19 vmware-11.log
-rw-r--r-- 1 root root  32K Oct 21 14:37 vmware-12.log
-rw-r--r-- 1 root root  22K Oct 21 14:41 vmware-13.log
-rw-r--r-- 1 root root  22K Oct 21 15:37 vmware-14.log
-rw-r--r-- 1 root root  48K Aug 29 16:09 vmware-9.log
-rw-r--r-- 1 root root  30K Oct 21 16:23 vmware.log

Thanks in advance!

2 Replies

This is maybe more of a cop-out than a fix, but now that you have the disk accessible, I would use VMware Converter and V2V it. This will also let you adjust the size of your VMDKs at the same time. Once it's done and tested, you can delete the old VM and all its issues.

Hope this helps,



From my perspective, I would try to keep the business running and, for the moment, leave the technical aspects in the background. With this in mind, I would create a new VM with correctly sized disks and then DCPROMO it. This will give you a fully operational AD that can run the business while you destroy the old VM and recreate it. In any case, always try to have a minimum of two domain controllers operational.

Just a thought.

Mark-Allen Perry
