Good morning,
I have built a test environment at work to test ESXi 5.5 U1. I can mount and browse the datastore, but can't write to it. I mounted the same datastore on another production host and can read/write just fine.
This occurs on both servers in this test setup. I should add, they used to be ESX 3.5 hosts and had no issues with NFS datastores. I used the same NIC adapters to create similar vSwitches. VLAN trunking on our physical switches has not changed. Both test servers have read/write/root access for the NFS mount in our SAN (same IPs as before and it worked).
Any suggestions would be appreciated.
Thank you
Okay, it looks like you've got most of the standard things cleared away here.
I did stumble across this blog post:
http://emcsan.wordpress.com/2011/09/01/vmware-cant-write-to-a-celerra-rw-nfs-mounted-datastore/
It seems to point to some extra permissions on the EMC Celerra that need to be set for NFS to work with the later ESXi versions. Give this a whirl on your new hosts and let us know.
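For reference, here's a rough sketch of what that kind of export change looks like on the Celerra Control Station. The file system path, Data Mover name, and host IP below are placeholders, not values from this thread:

```shell
# Hypothetical Celerra export granting the ESXi host's vmkernel IP
# both read/write and root access (ESXi needs root on NFS exports).
# Run on the Control Station against the active Data Mover (server_2 here);
# 10.1.1.50 and /test_datastore_fs are placeholders.
server_export server_2 -Protocol nfs \
  -option rw=10.1.1.50,root=10.1.1.50 \
  /test_datastore_fs
```

Without the root= entry the host can usually still mount and browse the export, which matches the read-but-no-write symptom described here.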
Hi,
vSphere 5.5 U1 has a known NFS APD (All Paths Down) issue. Please check this article, and if you find the mentioned errors, you have to apply the related patch.
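If it helps, the APD symptoms from that issue typically show up in the vmkernel log on the host; a quick way to check from the ESXi shell (the exact message text can vary by build, so treat these patterns as a starting point):

```shell
# Look for All-Paths-Down (APD) events in the vmkernel log
grep -i "APD" /var/log/vmkernel.log

# NFS-related errors often show up separately; filter for those too
grep -i "nfs" /var/log/vmkernel.log | grep -iE "error|failed"
```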
BR
Not seeing any of those APD errors in the log file, thanks for the link though.
Hi,
If you are using NFS share via Windows Server, please consider this: Configure NFS Shares for Anonymous Access
Maybe the share access is not configured correctly.
BR
It seems it's due to a permission issue on the NFS server.
I think you need to grant permission to the host where this problem is occurring.
The NFS mount resides on an EMC Celerra so no NFS Windows share.
The 2 hosts have root access to the NFS mount, as I used the existing NICs from when they were old ESX 3.5 hosts and had no issues accessing NFS mounts.
Still haven't figured this out. The hosts have root access.
Including a screenshot of my vSwitches, putty session for the storage array showing the NFS mount, and root permissions for the host.
edit: forgot to add, I cleared the IPs in the read only box.
Hey Chris,
This may be a silly question, but can you ping the NFS networks from that host? If you're on the CLI on that host, can you run vmkping 10.1.1.10 successfully?
Also, as weird as this is, try putting just the one IP into each permission box (minus the read-only box) and see if you get the same issue. It could be a formatting issue in the permissions box when you're entering multiple IPs. It's a long shot, but see what happens. I have seen systems accept a format that isn't supported but botch the permissions, e.g. entering 10.1.1.10:10.1.2.10 when the SAN wanted 10.1.1.10,10.1.2.10 but accepted the colon anyway.
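For completeness, here's what the vmkping check looks like from the ESXi shell. The target IP comes from this thread; the vmkernel interface name and the jumbo-frame payload size are assumptions for illustration:

```shell
# Basic reachability to the NFS server through the vmkernel stack
vmkping 10.1.1.10

# If jumbo frames are configured on the storage network, also test with a
# large non-fragmenting packet: -d = don't fragment, -s = payload size
# (8972 = 9000 MTU minus headers), -I = vmkernel interface (vmk1 is a placeholder)
vmkping -d -s 8972 -I vmk1 10.1.1.10
```

A plain vmkping succeeding while the jumbo-frame variant fails would point at an MTU mismatch somewhere on the path rather than a permissions problem.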
JPM300,
I appreciate the suggestions. I can ping the NFS interface, which is 10.1.1.1, see below. The 10.1.2.x addresses are for the secondary data mover, just as an FYI.
The formatting is correct as I actually copied them from another NFS mount, but I can try a single IP just to confirm.
I have 9 NFS mounts that are connected to a ESX 4.0 cluster and also an ESX 3.5 cluster.
The two hosts I am working with used to be 3.5 hosts, but I wiped them and reinstalled 5.5 to test.
I can mount the NFS to any of the old hosts and am able to read/write. Unmount it and add to the new 5.5 host and I can't write.
Great link! I had seen a similar one (with the no_root_squash suggestion) and tried that. What I hadn't done was enable the NFS client in the firewall, and that seems to have done the trick! I am running a Storage vMotion off local storage to the NFS datastore right now and it is going through.
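For anyone who hits this later, the NFS client firewall ruleset can also be checked and enabled from the ESXi CLI; a sketch of the esxcli commands on 5.5 (adding an NFS datastore through the vSphere Client normally enables this ruleset automatically, so it being off is the unusual part):

```shell
# Check whether the nfsClient firewall ruleset is enabled
esxcli network firewall ruleset list --ruleset-id nfsClient

# Enable it if it is not
esxcli network firewall ruleset set --ruleset-id nfsClient --enabled true
```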
Great to hear it's working well now.
This is the link I was looking at, in case anyone ever needs it.