In the past, I would occasionally go through and reclaim space on thin-provisioned disks that had expanded, using a technique I've seen listed in lots of different places: run sdelete to zero out the empty space, then do a Storage vMotion to a LUN with a different block size. This has worked for at least the last couple of years.
I'm trying it again today, and it's not working. When I do the Storage vMotion, no space is reclaimed. I've triple-verified that I'm moving to LUNs with different block sizes. Just for testing purposes I've tried both the -z and -c options in sdelete. (My notes say -c is the way to go, but there's conflicting info out there, and the two switches themselves seem to be flipped in the sdelete documentation.)
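For reference, the zero-free-space step looks something like this inside the Windows guest (switch behavior depends on your SDelete version, which is presumably why notes and the documentation disagree):

```shell
rem Run inside the Windows guest from an elevated prompt.
rem In recent SDelete releases, -z zeroes free space (the behavior the
rem reclamation trick needs); in earlier releases -c was the switch that
rem zeroed free space, which is why the two appear "flipped" between old
rem notes and current documentation. Check "sdelete" with no arguments
rem to see the usage text for the version you actually have.
sdelete -z c:
```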
The only thing that's changed is that we've upgraded from vSphere 4 to vSphere 5. From what I can see, people claim it still works in v5, but that's all I've got to go on. Anybody else experiencing this?
Strange. You are, of course, sure there is space inside the VM to recover? (Emptied the recycle bin and all that.)
Are the LUNs you are using the same as before the vSphere upgrade, i.e., the same LUNs you have used this technique on before?
Yep, there's definitely space to recover. One instance has a server using 350 of 350 assigned GB from VMware's perspective, but I can see more than 250GB free from the server's perspective.
The LUNs may be different. We migrated to a new storage system a while back, so all of them may be new. I hadn't considered that part. We're on Compellent now. Would that really somehow keep me from being able to manage the disk at the VMware level?
It was mostly the thought that if you created the LUNs fresh under vSphere 5.x, the block size would by default be the same (I know that you verified this, but still...).
Does the VM still have a thin disk? It didn't change type to thick by mistake during the Storage vMotion wizard?
Definitely still different block sizes. I've gone into the properties for the different LUNs, and I can see some at 1, 2, 4, and 8 MB. I think the change to new storage happened before the 5.0 upgrade, too, so the LUNs are new-ish, but were created back in our 4.1 days.
I'm 99% sure VMware knew I wanted thin provisioning. I was in the advanced options window (there are multiple disks on the server, and I'm only moving one at a time) and left the Storage vMotion wizard at its default "same format as source" setting. I'll try again, manually specifying thin provisioning, just in case that part has gone haywire.
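If the wizard itself is suspect, one way to take it out of the picture is a manual clone from the ESXi shell. This is only a sketch, and the paths and names here are made up for illustration:

```shell
# Run from the ESXi shell. -i clones the source VMDK; -d thin forces the
# thin disk format on the destination, independent of any wizard defaults.
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
           -d thin /vmfs/volumes/datastore2/myvm/myvm-thin.vmdk

# On the destination, 'ls -lh' shows the provisioned size while 'du -h'
# shows the blocks actually consumed; if the two match on a supposedly
# thin disk, nothing was reclaimed.
```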
I just upgraded to 5.0 and just started thin provisioning. I questioned some of the sizes being reported, and I thought I saw a KB about sizes not being reported correctly. I THINK the fix was to remove the VM from inventory and then re-add it.
KB Article -- VMware KB: Provisioned space may seem incorrect in vSphere client Inventory view of Virtual Machines...
That's an interesting one. I don't *think* that's what is going on here. When I drill down in the datastore, the size and provisioned size are both still at max. I think that means it's not just a vCenter error, right?
OK. I misunderstood, and probably still might be, but check these out.
Did you upgrade your datastores in place to VMFS5, or create fresh datastores using VMFS5? I think that might also make a difference.
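If it helps, the VMFS version is quick to check from the ESXi shell (a sketch; it needs SSH or shell access to a host):

```shell
# Lists each datastore with its filesystem type, e.g. VMFS-3 or VMFS-5.
esxcli storage filesystem list
```

One wrinkle worth knowing: as I understand it, a datastore upgraded in place from VMFS3 keeps its original block size, while freshly created VMFS5 datastores all use a unified 1 MB block size. If all the datastores involved were created fresh under 5.x, that would quietly remove the "different block size" ingredient this technique depends on.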
I'll have to check with the other admin, as he did that work. I hope that's not it, because recreating and migrating 30 TB of data on new LUNs doesn't sound like much fun.
I'm struggling to make sense of that first article. It has a lot of commands and technical terms I'm not familiar with, and it doesn't explain things very well. (Or maybe I need more coffee.) I don't understand at all why I'd be entering a percentage of blocks to reclaim, or why that would ever be less than 100%. I also tend to draw the line at any VMware maintenance that requires the command line rather than the GUI; that's more technical and more risky than I'm prepared to deal with.
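For what it's worth, if that article is describing vmkfstools -y (I'm guessing, since that's the vSphere 5 datastore-level reclaim command that takes a percentage), the number isn't how much space you want back. The command temporarily creates a balloon file occupying that share of the datastore's current free space, so 100% would briefly fill the datastore and could cause problems for running VMs; that's why the suggested values are lower. A sketch with hypothetical numbers:

```shell
# Hypothetical numbers only -- illustrating why the reclaim percentage
# passed to something like "vmkfstools -y 60" stays below 100%.
free_gb=500    # current free space on the datastore (made up)
percent=60     # percentage you'd pass on the command line
balloon_gb=$(( free_gb * percent / 100 ))
echo "temporary balloon file: ${balloon_gb} GB"   # prints "temporary balloon file: 300 GB"
```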