Hi,
When I try to remove a datastore from vsphere I get the following error:
Call "HostDatastoreSystem.RemoveDatastore" for object "datastoreSystem-15" on vCenter Server "xxxxxx" failed.
The datastore is empty and we want to unpresent it from the hosts. Is there a way to remove a datastore via the CLI? If I unpresented it from the hosts via the SAN, would this cause any issues?
Try to remove the datastore by connecting to one of the ESX hosts directly using vSphere client first.
If that fails we can then try to delete the datastore from the command line.
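For the command-line route, on ESXi 5.x and later something like the following should work; the datastore name and device ID below are placeholders, so substitute your own:

```shell
# List mounted filesystems to find the datastore's label and backing device (naa ID)
esxcli storage filesystem list

# Unmount the datastore by its volume label (placeholder name)
esxcli storage filesystem unmount -l MyDatastore

# Optionally detach the underlying device before unpresenting it on the SAN
esxcli storage core device set --state=off -d naa.xxxxxxxxxxxxxxxx
```

Unmounting first and then detaching the device is the graceful order; pulling the LUN on the SAN side while it is still mounted is what tends to leave hosts with dead paths.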
If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
Regards,
Arun
VCP3, VCP4, HPCP, HP UX CSA
Removing from the host directly gives:
Call "HostDatastoreSystem.RemoveDatastore" for object "ha-datastoresystem" on ESX "10.0.0.47" failed
I had the exact same error. Renamed the datastore, then deleted it without problems.
Everything is removed from the datastore. I tried renaming the datastore, but I am still getting the same error: Call "HostDatastoreSystem.RemoveDatastore".
Then the Tasks list says my datastore is in use; I have no idea how.
Are you sure that you don't have any VM which may think it has something on that datastore? Maybe a connected ISO image or a VMDK that may have been there at some time?
I just called VMware support, but no, I don't have any ISOs attached to it.
One odd thing is that this was the datastore for our test View setup. After about an hour this folder automatically reappears even though I delete it: .naa6006048C04083591b50ac1488dcbd2ac, with one file in the directory called "Slotsfile" that is .63 KB.
I figured it out: I had to remove Storage I/O Control through vCenter before I could delete the datastore. Hopefully this helps someone else who has this issue. An interesting point as well: when I tried to remove the datastore through vCenter, I got that generic message everyone else got. However, when I loaded just the VI client for the ESX host and attempted the delete, it gave me a message saying that Storage I/O Control is on. So when in doubt, don't use vCenter; go right to the ESX host.
I faced the same issue; it was resolved after deleting the datastore using the command line.
partedUtil delete "/vmfs/devices/disks/DeviceName" PartitionNumber
I have followed VMware KB :
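To fill in the DeviceName and PartitionNumber for that partedUtil command, something like this should work (the naa device ID below is a placeholder):

```shell
# Map datastores to their backing devices and partition numbers
esxcli storage vmfs extent list

# Show the partition table of the device backing the datastore
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete the VMFS partition (partition 1 in this example)
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
```

Be very sure you have the right device before deleting the partition; partedUtil acts immediately and does not ask for confirmation.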
Hi Everybody,
I had the same problem.
It was solved by shutting down the running virtual machines, connecting directly to the host, and then deleting the LUN successfully.
Regards,
Reinaldo
You cannot remove an inactive NFS datastore with Storage I/O Control enabled. However, when the back end volume is ungracefully removed, Storage I/O Control cannot be disabled.
Check this KB for more information:
VMware KB: Unable to remove an inaccessible NFS datastore with Storage I/O control enabled
Best regards
Yours, Oscar
Hi,
If the datastore which you want to remove stores ISO files, one of ISO files might be attached to a VM.
Just right click on the VM, click "Edit Settings..." and when you click "CD/DVD drive 1", you should probably see "Datastore ISO file" selected on the right side. So change this to "Client Device" and then press OK...
Now you can remove datastore successfully.
Kind regards.
Thanks, that was exactly my case. So easy, but such an impact!
In my case it was an FC path I had moved. Go to Storage Adapters -> choose the right vmhba in the FC list, then under Details right-click the dead FC path and unmount it. THEN go back to Storage and rescan. Et voila!
Try the article below and see if it resolves your issue.
Used the following command to find all the VMs with attached CD ISOs:
Get-VM | FT Name, @{Label="ISO file"; Expression = { ($_ | Get-CDDrive).ISOPath }}
Try the steps below to fix this issue.
Connect to each ESXi host to which the LUN is presented by using SSH.
Run this command to stop the SIOC service:
/etc/init.d/storageRM stop
In the vSphere Client, select the host and then click the Configuration tab.
Click Rescan All.
After the rescan completes, run this command to restart the SIOC service:
/etc/init.d/storageRM start
Note: If the issue persists, put the affected ESXi host into maintenance mode and then reboot the host.
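If the LUN is presented to several hosts, the steps above can be scripted over SSH instead of repeated by hand; a sketch, assuming root SSH access and placeholder hostnames (the rescan is done from the CLI here rather than the vSphere Client):

```shell
for h in esx01 esx02 esx03; do   # placeholder hostnames
  ssh root@"$h" '/etc/init.d/storageRM stop'               # stop the SIOC service
  ssh root@"$h" 'esxcli storage core adapter rescan --all' # rescan all HBAs
  ssh root@"$h" '/etc/init.d/storageRM start'              # restart the SIOC service
done
```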
In my case it was a syslog directory defined on several ESXi hosts pointing to the unreachable datastore.
Check the Syslog.global.logDir advanced setting.