Sorry if this is in the wrong place.
I am using a CephFS storage back end to store my VM data in my single-node ESXi server testbed environment.
I have linked the two (Ceph and the ESXi server) using NFS 4.1 because I want to add failover: I have two NFS servers set up, both exporting the CephFS, and I added the share to ESXi as NFS 4.1 with both IP addresses.
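For reference, this is roughly how the datastore was added from the ESXi command line; the IP addresses, share path, and datastore name below are placeholders for my lab setup, not anything you should copy verbatim:

```shell
# Mount the CephFS export as an NFS 4.1 datastore, listing both
# NFS server IPs (hypothetical addresses for this testbed).
esxcli storage nfs41 add \
    --hosts 192.168.1.10,192.168.1.11 \
    --share /mnt/cephfs \
    --volume-name ceph-ds

# Verify the mount and check the datastore's accessibility state.
esxcli storage nfs41 list
```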
I have found two problems I am trying to get my head around:
1) When I try to do a failover (shut down one server), the datastore shows as (inactive) and does not switch over to the other server.
2) Because of problem 1, I powered the server back up. I can see the server is live and accepting NFS connections, but vSphere still shows the datastore as (inactive), and I can only get it working again by destroying the datastore and recreating it.
If I set up the datastore as NFS 3 and run the same test, the failover still doesn't happen (as NFS 3 doesn't support it), but when I turn the server back on the datastore starts working again.
Am I doing something wrong with my setup of the failover/multipathing?
What you are looking to do is pNFS. That is not supported by VMware right now. The multiple IPs that you can enter for an NFS 4.1 share must all go to the same server and the same folder on that server. The multiple IPs are useful if you want to do something like iSCSI's IP multipathing.
Thanks for the info TOM_CFFP, it is really interesting.
Can you advise any way of doing what I am trying to do, NFS failover (two servers), at the ESXi end?