I have a pair of Sun X4100s directly connected to an EMC NS20. It's connected directly instead of going through switches for political reasons. Anyway, I have my NFS filesystem presented to both nodes in my ESX cluster. Each node has a different IP address, but both can see the same NFS share. My issue is that when I create the first NFS datastore on node A, it works fine. When I create a datastore with the same name on node B, it appends a "(1)" to the name. When I then do a live migration of a VM from node A to node B, I get the error: "Unable to access file test/test-xxxx.vswp", and the same for the vmdk file. On node B that datastore name is "nfsdatastore (1)", so it makes sense that the original name doesn't exist on the node I'm trying to VMotion to. So my question is, how do I keep the names the same? When I attempt to rename the datastore on node B to match node A, it says the name already exists. My only theory at this point is that it has something to do with the direct NFS connections versus going through a switch. Thoughts/suggestions? Thanks in advance.
It looks like the storage is only being seen by one host at a time, and not shared.
The "(1)" comes from vCenter seeing all the storage (through the hosts) and requiring every datastore name to be unique. The same thing happens when you try to give local storage on two boxes the same name. It doesn't have anything to do with which host can or can't see the storage.
I am not familiar with the EMC NS20, but it would seem that you need to modify the mapping/zoning so that both hosts can see all of the created storage.
This may not be possible with direct connects, but again, I'm not familiar with the EMC NS20.
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
The funny thing is that if I remove both nodes from VC, I can give each datastore the same name. As soon as I import them back into VC, one of the names changes and the "(1)" is appended to the datastore name.
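That renaming behavior is consistent with vCenter identifying an NFS datastore by the NFS server address plus export path, not just by its label: two mounts of the same export reached through different IPs look like two different datastores, so the second one's label collides and gets the "(1)". As a sketch only (the tool is the classic ESX console-OS `esxcfg-nas`; the IP, export path, and labels below are placeholders, not values from this thread), one way to get matching labels is to remove the auto-renamed mount on node B and re-add it, ideally using the same NS20 address node A uses:

```shell
# Sketch; all addresses/paths/labels are hypothetical placeholders.
# On node B: remove the auto-renamed mount (label as vCenter shows it).
esxcfg-nas -d "nfsdatastore (1)"

# On node B: re-add the export with the same label node A uses.
# If both hosts can reach the NS20 at one shared address, use that same
# address here so vCenter treats both mounts as one datastore.
esxcfg-nas -a -o 192.168.1.10 -s /nfsexport nfsdatastore

# List NAS mounts to verify the label and server address.
esxcfg-nas -l
```

The catch, as noted above, is the direct connects: if each host can only reach a different interface IP on the NS20, vCenter will still see two distinct datastores even with identical export paths, and the rename will recur.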