At one of my customers I had to relocate all the C:\ drives of the virtual machines to new LUNs. (Each C:\ drive actually has its own LUN on an FC disk, and we have to relocate them to SATA disks; please don't ask me why it's set up that way.)
Most of the VMs have a single C:\ drive (one has a single C:\ drive plus an RDM), and all of them could be svmotioned successfully.
Unfortunately I have a problem with svmotion on two other VMs: both have a C:\ drive, an RDM, and a D:\ drive, and the D:\ drive must not be migrated.
The svmotion of the C:\ drive of these two VMs fails with this error (copied from the CLI):
Use of uninitialized value in concatenation (.) or string at C:/Programmi/VMware/VMware VI Remote CLI/Perl/lib/VMware/VICommon.pm line 1502, line 10.
Received an error from the server: Insufficient disk space on datastore 'sata_SRV_S5_DiscoC'.
I read that the problem could be caused by low free space on the source datastore, because svmotion works using a snapshot. This is the free-space situation on the source datastores of the two VMs that fail:
32G 31G 828M 97% /vmfs/volumes/SRV_EXC_DiscoC
23G 23G 355M 98% /vmfs/volumes/SRV_S5_DiscoC
The source free space is very low, but I had the same percentage of free space on VMs that were svmotioned successfully, so I don't think this can be the real reason for the error.
Also, the destination datastore is the same size as the source one, and that configuration gave no errors on the other VMs.
The only difference between the VMs that were svmotioned successfully and these two is that the failing VMs have D:\ drives. But with svmotion.pl --interactive I svmotioned only the C:\ drive and answered no when asked whether to move the other disks.
Removing the D:\ drive from the VMs before trying to svmotion the C:\ drive also gave the same error.
What could be the cause?
Do you have any hints?
Ok, I have a question. If you can shut down the VM to remove the D:\ drive, then why not do a cold migration of the server? Shut the VM down, select Migrate, change datastores, and wait until completion?
Using svmotion while the VM was offline gave the same error...
Anyway, we only tried that with one VM; the other cannot be shut down at the moment, so we have to make it work while it's running.
Is your RDM in physical or virtual compatibility mode?
It's in physical mode both in the VM that was svmotioned successfully and in the VM that is not svmotioning.
Are you using a plugin or doing it from the RCLI? If from the RCLI, are you doing it in interactive mode?
I have had good success with SVMotion using the CLI in interactive mode. I think there is a problem if the RDM is in physical mode; I have been successful with the RDM in virtual mode.
Do you have snapshots?
I'm using the RCLI in interactive mode (svmotion.pl --interactive).
I don't think it's a problem with the RDM being in physical mode, since I successfully svmotioned one VM with a raw disk in physical compatibility mode; if it worked for that one, it should work for this one too.
No, I don't have any snapshots (I know snapshots prevent svmotion from working correctly).
Any other suggestions?
Can you increase the size of the SATA datastore by 1 GB and see if the svmotion continues?
Do you mean the destination datastore? That would mean expanding the LUN; I'll have to see if there's free space...
But the question is: why were some VMs, with source and destination datastores the same size as the failing ones, svmotioned successfully?
Are all the VMs configured the same? What about RAM? How much RAM is in your VM, and would the swap file from the VM's memory requirement, added to your C: drive, put you over the destination datastore size?
No, they are not all configured the same.
These are the specs:
VM1: W2K3 ent. - 4vCPU / 2GB RAM - 0MB reservation / C: size 19.75GB with 368MB free -> svmotion successful
VM2: W2K3 ent. - 2vCPU / 4GB RAM - 0MB reservation / C: size 44.75GB with 268MB free -> svmotion successful
VM3: W2K3 ent. - 4vCPU / 8GB RAM - 0MB reservation / C: size 32.25GB with 826MB free + D: size 299.75GB with 712MB free -> svmotion of C: unsuccessful
VM4: W2K3 ent. - 4vCPU / 4GB RAM - 0MB reservation / C: size 19.75GB with 1.36GB free -> svmotion successful
VM5: W2K3 ent. - 2vCPU / 2GB RAM - 0MB reservation / C: size 27.75GB with 336MB free -> svmotion successful
VM6: W2K3 ent. - 2vCPU / 2GB RAM - 0MB reservation / C: size 29.75GB with 2.32GB free -> svmotion successful
VM7: W2K3 ent. - 1vCPU / 4GB RAM - 0MB reservation / C: size 23.75GB with 354MB free + D: size 19.75GB with 371MB free -> svmotion of C: unsuccessful
VM8: W2K3 ent. - 1vCPU / 1GB RAM - 0MB reservation / C: size 29.75GB with 3.32GB free -> svmotion successful
VM9: W2K3 ent. - 1vCPU / 2GB RAM - 256MB reservation / C: size 24.75GB with 4.34GB free -> svmotion successful
Ok, I think the swap file is what is killing you right now. Remember, when you talk about datastore space usage, along with the size of your virtual disk, the virtual machine swap file also takes up space. To figure out the swap file size, take the allocated memory and subtract your reservation. In your case, with an 8 GB allocation and a 0 reservation, your VM requires an additional 8 GB of disk space wherever your VM configuration files live. This assumes you used the default configuration and are not placing your VM swap files on a local datastore separate from your VM files.
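To make that concrete, here's a quick sketch of the arithmetic using the numbers you posted (the function names are just mine, for illustration; it assumes the default .vswp placement on the same datastore as the VM):

```python
# Sketch of the swap-file arithmetic described above (illustrative only).
# With default settings, the .vswp file size is allocated RAM minus the
# memory reservation, and it lives alongside the VM's other files.

def vswp_size_gb(ram_gb, reservation_gb):
    """Size of the VM swap file under the default configuration."""
    return ram_gb - reservation_gb

def space_needed_gb(disk_gb, ram_gb, reservation_gb):
    """Datastore space required for the virtual disk plus its swap file."""
    return disk_gb + vswp_size_gb(ram_gb, reservation_gb)

# VM3 from the list above: 8 GB RAM, 0 reservation, C: drive of 32.25 GB.
needed = space_needed_gb(32.25, 8, 0)
print(needed)  # prints 40.25
```

If the destination datastore really is the same ~32 GB as the source, 40.25 GB needed would not fit, which would match the "Insufficient disk space" error.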
Now, the question I'm sure you're asking is why the others were successful. Good question, and I don't have a good answer. Check your datastores to see if you have VM-specific .vswp files, and whether they exist on the new datastore or the old one.
Mh, I'm still doubtful, since all the VMs have their own .vswp file on the datastore, and for the ones that svmotioned successfully the .vswp file was migrated to the new datastore... so I don't understand what the problem is...
In the case of the unsuccessful migrations, the source and destination datastores are equal in size, so they should be able to hold the same files...
Try a test: change the unsuccessful VM's memory reservation from 0 to 8 GB and see if the svmotion succeeds.
I can try that test too, but we already tried to move one of the two failing VMs while it was powered off (so no swap file present), and it failed that way as well 😕