Can you run ls -allh on the datastore via SSH? In steps:
1) Connect via SSH to the host that will not unmount
2) do cd /vmfs/volumes/<name of your datastore>
3) do ls -allh
And share the output.
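The steps above, condensed into a sketch. The host name is a placeholder and "san_lun3" is the datastore from this thread; substitute your own:

```shell
# Step 1: SSH to the host that refuses to unmount, e.g.:
#   ssh root@esxi-host1          # hypothetical host name
# Steps 2 and 3, run in the ESXi shell:
DATASTORE="san_lun3"             # the datastore from this thread
cd "/vmfs/volumes/$DATASTORE" 2>/dev/null \
  || echo "datastore path not found (are you on the ESXi host?)"
ls -allh
```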
~ # ls -allh /vmfs/volumes/san_lun3/
drwxr-xr-t 1 root root 1.1K Oct 27 17:10 .
drwxr-xr-x 1 root root 512 Oct 27 18:15 ..
-r-------- 1 root root 5.2M Jun 14 2012 .fbb.sf
-r-------- 1 root root 254.7M Jun 14 2012 .fdc.sf
-r-------- 1 root root 1.1M Jun 14 2012 .pb2.sf
-r-------- 1 root root 256.0M Jun 14 2012 .pbc.sf
-r-------- 1 root root 250.6M Jun 14 2012 .sbc.sf
-r-------- 1 root root 4.0M Jun 14 2012 .vh.sf
Those are VMFS metadata files, so frankly there shouldn't be anything preventing you from removing this datastore. Is it possible that the datastore is used as a swap datastore on the other host? Configuration -> Software -> Virtual Machine Swapfile Location? Or perhaps as a log directory (although there should be log files present then): Configuration -> Software -> Advanced Settings -> Syslog?
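For the syslog check, ESXi 5.x also exposes the log directory from the shell via the Syslog.global.logDir advanced option. A guarded sketch (esxcli only exists on an ESXi host, so this falls back to a message elsewhere):

```shell
# If this option points at the datastore you are trying to unmount,
# the host's own logging can be what is holding it.
if command -v esxcli >/dev/null 2>&1; then
    LOGDIR_INFO=$(esxcli system settings advanced list -o /Syslog/global/logDir)
else
    LOGDIR_INFO="esxcli not found; run this on the ESXi host"
fi
echo "$LOGDIR_INFO"
```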
You could also try the unmount while tailing /var/log/vmkernel.log (tail -f /var/log/vmkernel.log)
Your response led me to this post:
Which led me to a few processes that seem to have files locked on san_lun3, even though the VMs have been vMotioned off the datastore.
One of the issues is an open MKS (remote console) session from a system that is not active (someone else has a console window open to one VM). I can't find the session or the user, so I'd like to just kill all console sessions to this VM if I can.
13393497 13392045 vmx-mks:SEPM-Win7 /bin/vmx
13393498 13392045 vmx-svga:SEPM-Win7 /bin/vmx
13393499 13392045 vmx-vcpu-0:SEPM-Win7 /bin/vmx
The other issue I see is some files related to another VM that has been vMotioned off the datastore:
6458291 6458291 vmx /bin/vmx
6458297 6458291 vmx-vthread-5:DC1 /bin/vmx
6462836 6458291 vmx-vthread-6:DC1 /bin/vmx
6462837 6458291 vmx-mks:DC1 /bin/vmx
6462838 6458291 vmx-svga:DC1 /bin/vmx
6462841 6458291 vmx-vcpu-0:DC1 /bin/vmx
6462842 6458291 vmx-vcpu-1:DC1 /bin/vmx
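The thread doesn't show which command produced these listings; one way to get a similar view on an ESXi 5.x shell is its built-in lsof, filtered for the datastore name (guarded here so it degrades gracefully on other systems, where lsof's output format differs):

```shell
DATASTORE="san_lun3"    # the datastore from this thread
if command -v lsof >/dev/null 2>&1; then
    # List open files and keep only entries touching the datastore.
    lsof 2>/dev/null | grep "$DATASTORE" \
      || echo "no open files on $DATASTORE found"
else
    echo "lsof not available on this system"
fi
```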
If you are certain no VMs are running on the host in question, you could do a kill -9 6458291 && kill -9 13392045 to kill the processes. Then (if the VM didn't die) try unmounting again.
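The kill sequence above, spelled out. The PIDs are the parent IDs from the listings in this thread (the second column); substitute your own, and only proceed if you're sure no running VM owns them:

```shell
PID_DC1=6458291      # parent of the orphaned DC1 vmx worlds
PID_SEPM=13392045    # parent of the SEPM-Win7 console (mks/svga) worlds
# Destructive, so commented out here to avoid killing anything by accident:
#   kill -9 "$PID_DC1" && kill -9 "$PID_SEPM"
echo "would kill $PID_DC1 and $PID_SEPM, then retry the unmount"
```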
I found a less intrusive method:
For the two "sessions" I noted above, I vMotioned the VM to another HOST (a host vMotion, not a Storage vMotion), then back again.
As soon as it moved to a different host, the MKS entries disappeared from the list of "locked" files/processes.
I thought I had it nailed, but even on the host that has *NO* MKS sessions or anything resembling the previous post, I am still unable to unmount or delete the datastore.
From host2, the unmount still gives me the red X on "no VM files".
Both hosts have a *bunch* of these (I can provide the whole list if that may help):
918368 9135 hostd-worker hostd
918369 9135 hostd-worker hostd
918370 9135 hostd-worker hostd
918371 9135 hostd-worker hostd
9135 9135 hostd-worker hostd
9136 9135 hostd-poll hostd
9137 9135 hostd-worker hostd
9138 9135 hostd-worker hostd
9921 9135 hostd-worker hostd
9923 9135 hostd-worker hostd
9924 9135 hostd-worker hostd
9988 9135 hostd-worker hostd
9989 9135 hostd-worker hostd
10026 9135 hostd-vix-worke hostd
10027 9135 hostd-vix-worke hostd
What you could try is tailing the vmkernel.log located in /var/log when you're doing the unmount. Perhaps that gives some pointers.
tailing vmkernel.log yields *nothing* when I hit "Dismount".
The box with red X for "no VM files" pops up immediately.
I also tried to go straight to Delete datastore, and nothing was logged in vmkernel.log.
I've restarted the management agents as well, thinking there might be something "locked" via the management agent.
Still no love on deleting this datastore.
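For reference, "restarting the management agents" mentioned above is typically done from the ESXi 5.x shell like this (a sketch; the init scripts exist on ESXi, not on a general-purpose Linux box, so this skips them elsewhere):

```shell
for agent in hostd vpxa; do
    if [ -x "/etc/init.d/$agent" ]; then
        "/etc/init.d/$agent" restart    # restarts the agent on an ESXi host
    else
        echo "skipping $agent (no such init script on this system)"
    fi
done
```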
You could try running services.sh restart to ensure every daemon is restarted, or even reboot the entire host. I suspect a process is holding a lock on this datastore, but it's hard to verify remotely.
We had some issues like that in the past...
Are you using a backup proxy that could have the datastore mounted, or that holds locks it never released?
In the past we have had to reboot the proxy, or take other steps, to get past it...
I had this exact same issue on 5.5 U1, worked with support, and never got an answer. I had to reboot all 10 hosts in one of my clusters to be able to unmount the datastores. Support was not helpful; they could not find what was locking the LUNs.