Styvboard
Enthusiast

NFS disk between different vCenters


I've got one vCenter 5.5 and a new one, vCenter 6.0. For migration purposes I would like to share an NFS disk between one host from each vCenter. Is that possible?

I've shared an NFS disk between hosts in different clusters before, but that was within the same vCenter, and it worked fine. Maybe it doesn't work between hosts in different vCenters?

I have the NFS share up and running and connected to one host in the vCenter 6.0 environment, and I can create folders there. I'm using NFSv3. On a host in vCenter 5.5 I have mounted the share, but I get an error when I try to create a folder.
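A quick way to compare the two hosts is to check how each one mounted the share. Over SSH on each ESXi host, something like the following shows whether the datastore came up read-only (the volume name, server IP, and export path below are placeholders, not values from this thread):

```shell
# List NFSv3 mounts on this host; the output includes a
# Read-Only column showing how each datastore was mounted.
esxcli storage nfs list

# If it was accidentally mounted read-only, remove and re-add it
# read/write (substitute your own volume name, server and path).
esxcli storage nfs remove --volume-name=NFS01
esxcli storage nfs add --host=192.168.50.10 --share=/vol/nfs01 --volume-name=NFS01
```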

5 Replies

For the sake of argument, I am going to assume that by "NFS disk" you mean an NFS datastore.

Yes, you can do this, but just like with hosts in different clusters, it is not recommended. Ideally, only hosts in the same cluster on the same vCenter should connect to a shared datastore. But you can absolutely do it.

Doug

If you found this reply helpful, please mark as answer. VCP-DCV 4/5/6, VCP-DTM 5/6
Styvboard
Enthusiast

Yes, and there's a difference between a volume, a partition, and a disk on a Windows machine too. Now that we've got that sorted: any suggestions as to why it's read/write on the host in one vCenter and read-only on the host in the other vCenter?

peetz
Leadership

I think there are two possible reasons:

- When mounting an NFS datastore on an ESXi host, you can choose whether to mount it read-only. Check whether you accidentally mounted the datastore in read-only mode.

- Check the permissions of the NFS exports (on the server offering the NFS share). Have you accidentally exported it with read/write access to some ESXi hosts, but read-only to others?
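To illustrate the second point: on a generic Linux NFS server (a VNX has its own equivalent of a per-host access list, with different syntax), a mixed export like the following would produce exactly this symptom. The paths and addresses here are made-up examples:

```shell
# /etc/exports -- hypothetical Linux example, not VNX syntax.
# One ESXi host gets read/write, the other only read-only,
# so the same datastore behaves differently per host.
/vol/nfs01  192.168.50.11(rw,no_root_squash,sync)
/vol/nfs01  192.168.50.12(ro,no_root_squash,sync)
```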

Twitter: @VFrontDe, @ESXiPatches | https://esxi-patches.v-front.de | https://vibsdepot.v-front.de
Styvboard
Enthusiast

Hi, thanks for answering. I tried disconnecting one host from the old vCenter and connecting it to a new cluster in the new vCenter, with only this one host in that cluster. I can then use "Mount datastore to additional hosts". So the hosts are in the same site but in different clusters. In the settings for the NFS datastore, under "Connectivity with hosts", I can see all hosts and they are all R/W. I mounted the disk on an additional host and just discovered that I can't create a folder or file there either. That host is also ESXi 6.0, but in another cluster.

I've got a mixed environment at the moment: two sites with two clusters each. Three of the four clusters are running 6.0, one is on 5.5. One site has Enterprise Plus licensing and Intel hardware, and the other has Enterprise licensing and AMD.

The NFS share is from an EMC VNX. I'm not finding any settings to alter there...

Styvboard
Enthusiast

The NFS share was set up some time ago by a SAN technician, and I didn't really remember any of the settings. But I did a double check, and of course there was a list of IP addresses allowed read/write access. That shouldn't have been a problem, because all the servers were on it, but it had the wrong IP addresses for my old ESXi servers. A VMkernel port was set up afterwards with an IP address in the same subnet as the NFS share. That must be the best way, right? It seems like a bad idea to have NFS attached over a routed link. I'll probably need to add a vmk on the new servers as well, in the NFS subnet; for now that traffic is routed through a firewall, and we've got some serious issues with it.

I edited the settings as mentioned above and got read/write access. I tried to vMotion from the old hosts, which worked fine, but when I started on a new host with routed NFS access I got horrendous latency. It works, but this is no good way to migrate servers. Thanks for your input, one problem solved, peetz!
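For reference, adding a VMkernel port directly in the NFS subnet on the new hosts (so storage traffic stays off the routed firewall path) can be sketched like this. The port group name, vmk number, and addresses are placeholders for your environment, not values from this thread:

```shell
# Create a VMkernel interface on an existing port group that
# carries the NFS VLAN (names and IPs are examples only).
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-Storage
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.50.21 --netmask=255.255.255.0 --type=static

# Verify the host now reaches the NFS server without routing.
vmkping 192.168.50.10
```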

Trying to give you some creds here, but it's not working... I'll try again later.
