Finally found the problem:
when mounting the datastore I was using FQDN.domain.com /Datashare
I needed to use FQDN.DOMAIN.COM /Datashare
The caps in the FQDN created a variance in the symbolic name, and that is what caused the problem.
Can you explain just a bit more? I have a site where this has happened, and we just have not had time to deal with it. My questions/clarifications:
1) So are you saying that when you create the NFS mount on each ESX host, if you used ALL CAPS in the FQDN, then ... somehow each datastore name did not get the (1) appended after it?
2) How could making this change have that effect, I wonder? I mean, the names are still the same on each host, correct? (I know this would be speculation ....)
All my existing hosts had the same datastore name, "NFS", and on my new host the same share showed up as "NFS (1)". After going through each of the existing hosts and looking at the NFS share mount, I determined that every one of them listed all caps under the Configuration tab for storage. When I added the NFS share to the new host, I was not using all caps. Apparently the database is very, very particular about how it retains these symbolic links to objects. All I had to do was use the exact same format as the rest of the hosts, and it appears correctly.
I spent a lot of time on Google researching this and found lots of related issues, but nothing that spelled out the details needed when adding NFS shares to a newly added host in an existing environment. I am glad I found out what it was; I should sleep better tonight.
The existing hosts listed my NAS unit as NPIATXNAS1.AUSTIN.NPI:/Data; the share was /Data, and the name I gave the datastore was NFS.
When I added the new host and tried to add the NFS share, I used NPIATXNAS1.austin.npi with /data and NFS as the name, and it appended the (1). Hope that helps.
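To make the cause concrete, here is a minimal Python sketch (not VMware's actual code, just an illustration) of why this happens: if the symbolic identity of an NFS datastore is derived from the literal "server:/share" string, then any case difference makes the specs compare as two different datastores.

```python
# Illustration: a byte-for-byte comparison of the mount spec is effectively
# what decides "same datastore or not" here.
def same_mount(spec_a: str, spec_b: str) -> bool:
    """Strict comparison: any difference in case makes the specs distinct."""
    return spec_a == spec_b

existing = "NPIATXNAS1.AUSTIN.NPI:/Data"  # how the existing hosts mounted it
attempt  = "NPIATXNAS1.austin.npi:/data"  # how the new host first tried it

print(same_mount(existing, attempt))   # False: looks like a new datastore, hence "NFS (1)"
print(same_mount(existing, "NPIATXNAS1.AUSTIN.NPI:/Data"))  # True: matches, no (1)
```

So even though DNS itself does not care about case, the stored symbolic name does, which is why matching the exact format of the other hosts fixed it.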
Excellent, yes, very helpful!
I'm glad you are sleeping well tonight, and glad that I can look forward to a "simpler than I imagined" fix when I get to it!
OK, this was driving me nuts because my problem matched those discussed here, but the solution didn't work for me. I did finally figure out how to fix it, though.
First off, it seems the description about it being referenced someplace is correct, i.e. that ESXi is preventing a rename or creation because it thinks there might already be a datastore with that name. In my case this was with NFS, where I was naming the datastore identically on each host. One host still had it and it was working; the second host did not list it, and if I tried to add it with the same name it would append a (1) and not allow a rename (or, depending on the method used to create it, it would actually refuse to create it).
My solution? Well, I never did find out what was referencing it, but since vCenter was trying to keep things correct/safe/etc., I used that to my advantage: I theorized that if I renamed the existing working datastore on all hosts, vCenter would update the reference I couldn't find to point at the new name as well. As a bonus, since vCenter does everything live, this didn't affect anything in production or require any kind of reboot.
To be clear/simple, here is what I did to fix it:
1. Delete any datastore name that I was trying to create (for instance, any that are not yet used and were/are getting a (1) appended to the end).
2. Connect to the vCenter Server with the vSphere client, and select Home>Inventory>Datastores and Datastore Clusters
3. Find the WORKING datastore with the same name giving you difficulty and rename it (i.e. append the word TEMP to the end, which is what I did)
4. Immediately (or, if you're cautious like me, wait a few minutes to ensure the change has propagated/replicated/etc.) rename it back to the original name
5. Go to the host giving you difficulty and try adding the datastore back; if your error was similar to mine, it should let you add it without error or appending a (1)
Depending on the situation you might reverse the order of steps 4 and 5; however, I haven't run into this more than once, so I can't say when or why that would be needed.
That all being said, I did this on a non-critical server that I could have just turned off without issue. You may take my word that it didn't affect production, but that won't do you much good if it blows up in your face, so take the right risks for your environment. (Obligatory YMMV.)
I had the same behavior when adding an NFS datastore via PowerCLI and noticed that I had left off the leading / on the -Path parameter (it corresponds to the "Folder" field in the GUI).
I had to use /<path-to-nfs-export> and not <path-to-nfs-export>
For me, the confusion came from the fact that with NetApp cDOT, /vol/ is no longer the base export path.
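The same string-identity point applies to the path as to the FQDN. A tiny sketch (the server name "nas1" and the export path here are made up for illustration) of why the missing leading slash matters:

```python
# Illustration only: the mount spec is matched as a literal string, so a
# missing leading "/" produces a different key and thus a "different" datastore.
def mount_key(server: str, path: str) -> str:
    """Join server and path with no normalization, the way a literal match sees it."""
    return f"{server}:{path}"

print(mount_key("nas1", "/exports/data"))  # nas1:/exports/data
print(mount_key("nas1", "exports/data"))   # nas1:exports/data (note the missing /)
print(mount_key("nas1", "/exports/data") == mount_key("nas1", "exports/data"))  # False
```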
I had a similar issue mounting our NFS storage. This is what I did to resolve it, and I'm hoping it will lead you to resolving your specific issue if it's not exactly like mine:
The datastore name it's complaining about does in fact already exist. What you're trying to do is tell the ESXi host to use that existing datastore, which is already in inventory, instead of creating a new datastore entry with essentially a new name. On the new host you're mounting the existing datastore to, you must make sure everything is exact, especially the Folder and Datastore Name fields. When I say exact, I'm talking down to the forward slash "/".
My issue was this:
When I mounted the datastore for the first time, I configured it as shown (export path shown as a placeholder; yours will differ):
Folder: /<path-to-export>/
Datastore Name: netapp_control_a_sql
Now when I tried to do this on another ESXi host, I had it configured like this:
Folder: /<path-to-export>
Datastore Name: netapp_control_a_sql
All is gravy, right? Until vCenter appends (1) to the datastore name and inventories it as a different datastore. What do you see wrong there? The issue is the ending forward slash "/" in the Folder field: it was included in the initial config and missing from the additional config. I included the missing forward slash and the problem was resolved. I can imagine this being an issue with caps or anything else differentiating the initial mount from the referencing mount.
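If you script your mounts, a pre-flight check can catch these mismatches before vCenter does. This is a hypothetical helper of my own (not a PowerCLI or vSphere API call): it lower-cases the server name (DNS is case-insensitive), keeps the path's own case (export paths may be case-sensitive), and pins the slashes to one canonical form.

```python
# Hypothetical pre-flight helper: normalize a mount spec so that case
# differences in the server name and slash differences in the folder show up
# before you add the datastore on a new host.
def canonical(server: str, folder: str) -> str:
    # Lower-case the server, keep the folder's case, force exactly one
    # leading slash and no trailing slash.
    return server.lower() + ":/" + folder.strip("/")

a = canonical("NPIATXNAS1.AUSTIN.NPI", "/Data/")
b = canonical("npiatxnas1.austin.npi", "Data")
print(a)       # npiatxnas1.austin.npi:/Data
print(a == b)  # True: the same mount, spelled two different ways
```

Compare the canonical form of what the existing hosts show against what you are about to enter; if they differ, you have found your (1) before it happens.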