I have three physical servers. Two of them are set up in a cluster. There is an iSCSI connection to my SAN with two datastores (VM Storage and SSD Storage). Both datastores show up and connect fine on the clustered servers.
The standalone server connects to the VM Storage datastore, but not to the SSD Storage datastore. Both datastores reside on the same NAS. The VM Storage datastore populated automatically once I added the iSCSI network adapter; I'm not sure why the other datastore is not attaching. In the screenshot below, it is the 3.72 TB disk.
Resolved the issue by enabling SSH on the ESXi host, running the following commands, and mounting the datastore manually.
List datastores with:
esxcfg-volume -l
Assuming it's there, mount it persistently with:
esxcfg-volume -M <UUID>
(Use -m instead of -M if you only want the mount to last until the next reboot. Note the plain ASCII hyphen in the flags.)
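On a host where the volume is detected but not yet mounted, the listing typically looks something like the following (the UUID, label, and device name here are placeholders, not taken from the original post):

```
VMFS UUID/label: 5a1b2c3d-89abcdef-0123-456789abcdef/SSD Storage
Can mount: Yes
Can resignature: Yes
Extent name: naa.60014051234567890:1     range: 0 - 3813119 (MB)
```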
Thanks, this worked for me with QNAP.
Make sure to use only the UUID part of the output from the "VMFS UUID/label:" line.
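If you want to script this step, the UUID can be cut out of the listing automatically. A minimal sketch, assuming the listing contains a line of the form `VMFS UUID/label: <uuid>/<label>` (the UUID and label below are placeholders, not real values):

```shell
# Hypothetical helper: print only the UUID portion of the "VMFS UUID/label:" line.
# Assumes the line format "VMFS UUID/label: <uuid>/<label>".
extract_vmfs_uuid() {
  sed -n 's#^VMFS UUID/label: \([^/]*\)/.*#\1#p'
}

# On an ESXi host you would pipe the real listing in:
#   esxcfg-volume -l | extract_vmfs_uuid
# Demo with placeholder output:
printf 'VMFS UUID/label: 5a1b2c3d-89abcdef-0123-456789abcdef/SSD Storage\nCan mount: Yes\n' \
  | extract_vmfs_uuid
```

The extracted UUID can then be passed straight to `esxcfg-volume -M`.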
Symptoms we observed were that some of our hosts didn't have all of their expected datastores provisioned. They would show as "Attached" but "Not Consumed" (seen at: "<hostname> \ Configure tab \ Storage Devices"). The "Attached" label told us that the exports were all good. After placing the host into Maintenance Mode and rebooting, the host came back with the missing datastores now "Attached" and the datastore names shown.
Note: if you use vCenter you cannot do what is described above. You can do it by using a web browser (for newer ESXi) or the vSphere Client (for older ones): go to Configuration > Storage > Add Storage and proceed from there.
esxcfg-volume -l
This command does not show any volumes to mount.
Does anyone have any other solution?
Regards,
Najeeb