Hi
We have just finished our ESX 4 to ESXi 5 migration and are now upgrading our datastores to VMFS5. Rather than just hitting the upgrade button, we completely destroy each LUN and recreate it as a VMFS5 datastore. We have tried to follow the KB article below to unpresent the LUNs from the hosts, but after following it we still see the datastore on the server and the device listed under Devices. Only when I go into the GUI (which is what I don't want to do), go to Storage, click on the device and manually click Detach does the datastore disappear and the device listing turn grey and italic.
Can anybody please explain why this is, and what steps are required to completely remove the datastore from the host?
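For reference, the CLI equivalent of the GUI detach (per the unmount/detach KB procedure) looks roughly like the sketch below. The datastore label and NAA ID are placeholders for your environment, and the commands must be run on the ESXi host itself:

```shell
# Unmount the VMFS datastore from this host first
# (substitute your datastore label)
esxcli storage filesystem unmount --volume-label=MyDatastore

# Then detach the backing device so the host stops issuing I/O to it
# (substitute your device's NAA ID)
esxcli storage core device set --device=naa.xxxxxxxxxxxxxxxx --state=off

# Verify the device now reports "Status: off"
esxcli storage core device list --device=naa.xxxxxxxxxxxxxxxx
```

Note that even after this, the device stays in the list (greyed out) until it is unpresented on the array side and the host is rescanned.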
When you are upgrading LUNs to VMFS5, you don't "need" to unpresent them. You can simply go to the datastore screen, right-click and select Delete (obviously after you have moved everything off of it). After it is deleted you should be able to recreate it just as you would create any datastore.
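If you would rather recreate the datastore from the command line than the GUI, a minimal sketch (device ID, label, and end sector are placeholders; the GUID shown is the standard VMFS partition type GUID) might be:

```shell
# Relabel the disk as GPT, wiping the old partition table
partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt

# Create one VMFS partition spanning the disk.
# Get the real end sector first with:
#   partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt \
  "1 2048 <endSector> AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 as VMFS-5 with a new label
vmkfstools -C vmfs5 -S NewDatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

This is essentially what the vSphere Client does behind the scenes when you create a datastore.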
I hope that answers your question... Also, I wrote a script a month or so back that will automatically do these kinds of upgrades for you.
Check it out here
Hi
Thanks for this. I am aware that we don't need to unpresent the LUNs, but we use Compellent storage, which does not recognise the zeroing instructions from ESXi 5. This means the Compellent array just fills up even though we are not using all of the space.
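As an aside, if the array does honour SCSI UNMAP (worth confirming with Compellent, since VAAI Status shows "unknown" on this device), ESXi 5.0 Update 1 reintroduced manual dead-space reclamation via vmkfstools. A hedged sketch, with the datastore name as a placeholder:

```shell
# Must be run from inside the datastore's directory on the host.
# Reclaims up to 60% of the free space by issuing UNMAPs to the array;
# only effective if the array advertises VAAI thin-provisioning support.
cd /vmfs/volumes/MyDatastore
vmkfstools -y 60
```

This creates a temporary balloon file while it runs, so avoid doing it during peak I/O.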
KB
What does the status of the LUN show after you have detached it from the CLI?
-KjB
This is the output, and the status is off:
naa.6000d31000275900000000000000074d
Display Name: COMPELNT Fibre Channel Disk (naa.6000d31000275900000000000000074d)
Has Settable Display Name: true
Size: 512000
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path:
Vendor: COMPELNT
Model: Compellent Vol
Revision: 0504
SCSI Level: 5
Is Pseudo: false
Status: off
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: yes
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.02009600006000d31000275900000000000000074d436f6d70656c
What do you get if you run the commands below?
esxcli storage nmp path list --device=naa.6000d31000275900000000000000074d
esxcli storage nmp device list --device=naa.6000d31000275900000000000000074d
-KjB