VMware Cloud Community
AWarwick
Contributor

nfs HELP!

I need assistance with an NFS issue. I have three HP servers connected to an Iomega IX4 and an Iomega PX4; the connection to both storage devices is NFS. All hosts are running vSphere 5. This was a working configuration a few hours ago. This is my home lab, but it runs my AD and Exchange servers.

This evening we had a power event that caused the UPSs to fail, and everything went down. After that I powered up the storage and then the hosts. None of the hosts can see the NFS datastores; neither NFS store will mount on any host. I have replaced the network switch, and I can ping all addresses, but still no mounts. I unmounted one of the datastores and tried to re-add it. No luck; it says it cannot connect.

Can anyone shed some light on this? I can get into the storage on the CIFS side and see the NFS share, so it's responding.

I am a little desperate, as all systems are down.

9 Replies
Dave_Mishchenko
Immortal

Have you looked at the vmkernel log file after trying to connect to the datastore? Here's some info on getting access to the log files:

kb.vmware.com/kb/2004201
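
If SSH (or the ESXi Shell) is enabled on the hosts, the relevant entries can also be pulled straight from the log. A minimal sketch, assuming the default ESXi 5.x log location:

```
# On the ESXi host, right after a failed mount attempt (ESXi 5.x default path):
tail -n 100 /var/log/vmkernel.log

# Narrow it down to NFS-related entries:
grep -i nfs /var/log/vmkernel.log | tail -n 20
```

Any "Connect failed" or permission-related lines there will say a lot more than the client-side error does.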
AWarwick
Contributor

I've looked through the logs, and I see the connection attempt and the reply "unable to connect to NFS server". Nothing more than that. Not very helpful.

I've also done a vmkping from one of the hosts and response times are about 0.200 ms.
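
Since a successful vmkping only proves ICMP reachability, not that the NFS service itself is listening, it may be worth probing the NFS TCP port from the host as well. A sketch, assuming 192.168.2.100 stands in for the array's address (not taken from the thread) and that nc is available in the ESXi shell:

```
# ICMP works, but is the NFS service answering? Probe TCP 2049 (standard NFS port):
nc -z 192.168.2.100 2049 && echo "NFS port reachable" || echo "NFS port closed or filtered"
```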

a_p_
Leadership

Did you already double check the storage system (e.g. shares, permissions, ...)?

André

AWarwick
Contributor

I did. In fact, there were no permissions at all on one of the storage units. Bear in mind this was all working a few hours ago, until the power outage, and nothing was changed in that time.

So, here's the interesting bit... I'm back up and running. The NFS mount on the PX4 connected as soon as I restored an AD VM from backup. The IX4 connected once I changed the security to use AD instead of being wide open and rebooted it.

1. How does AD affect NFS shares?

2. The shares on both devices were set for access by everybody, read & write.

3. Why would a power outage change the access? This system has been working for 6 months and has been up and down before.

4. Do I have a vGremlin?

Thanks for the assistance.

bernardP
Contributor

Hello,

Maybe you should check the following: I believe you must connect to your NFS shares as root. Your NFS server has probably lost that setting, so it cannot accept non-root users connecting to its shares. Because you are in AD, you may be trying to access your NFS shares with a different account, for example an AD user.

You should re-enable root access on your NFS server (and possibly enable Telnet to manage it). I had a similar issue with my NetGear NAS. Let me know.
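
For reference, ESXi mounts NFS datastores as root, so the export must not squash root access. On a generic Linux-style NFS server the export would look something like this (the path and subnet below are illustrative assumptions, not taken from the thread):

```
# /etc/exports on the NFS server -- ESXi needs rw and no_root_squash:
/nfs/datastore  192.168.2.0/24(rw,no_root_squash,sync)
```

On appliance NAS devices like the Iomega units, this usually corresponds to an "allow root access" or host-access option in the web UI rather than an editable exports file.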

Regards

cmcminn
Enthusiast

Just for grins, try to create an iSCSI datastore from the Iomega devices to see whether or not you are having storage issues. You can delete it directly afterwards, but it would eliminate one of the variables in your equation...

AWarwick
Contributor

Just for grins, I'd like to tell you about my iSCSI experiences with the PX4. Of the three HP servers, the two DL385s would lock up the PX4 without any hesitation; in fact, NFS is doing that now too. I've had so many issues with the PX4 that I'm now sure it's a POS, and a replacement unit was no better. I've been working with Iomega and EMC on this for months, and we still haven't managed to pin down the cause. I'm waiting for a new code release right now before we go on with the diagnosis.

So excuse me if I don't follow your suggestion, but I do appreciate it.

Abiloye
Contributor

Oh my God!! I thought this was my own posting when I first read your issue. I had the same problem. I have been working on my VMware lab for more than two years, same configuration and setup. One night, I shut down the 2-host cluster at 22:00 and went to bed. I did not turn off the iOmega StorCenter 1x200, which is mapped via NFS to the VMware infrastructure as a datastore. There was a thunderstorm overnight. When I turned the ESX hosts back on, I realized I could not get to any of the VMs on the datastore.

In an effort to troubleshoot it, I deleted the mapping /nfs/datastore and tried to remap it exactly, but was unable to get it resolved. It took me three weeks to fix. The issue in my case was that the port on the back of the Comcast router was bad. I changed the router and re-configured it as it was at the beginning, and YES... I was able to re-map the store using the same /nfs/datastore.

My main problem now is that I cannot access any of the VMs on the datastore because they are all greyed out. I believe it is a security issue on the AD side, not the storage side, because I did not put any security on the NAS.

In case anyone knows how to resolve this issue (probably with vCenter), I would appreciate the help.

JPM300
Commander

This sounds like the NFS server daemon on your storage is not running properly or not passing the permissions properly. We had a similar issue with a Windows Server 2008 R2 system that had been running NFS just fine until we upgraded the key from Standard to Enterprise. We did this through the GUI, which was a mistake. The GUI said all was well, and the system showed Win2k8R2 Enterprise, but it constantly gave an "NFS is not a registered feature" error, and when we rebooted, the file shares started acting funny. It turned out the GUI hadn't put the new key in properly. Once we re-entered the key via the CLI, NFS started up properly again. The point is that prior to the fix, all the NFS services were running and the permissions were all correct, etc., yet VMware would just not connect the NFS share.

Take a good look at your storage and make sure the NFS service is running and actually serving shares, as it sounds like your SAN is not serving NFS properly.
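
One quick way to confirm the NAS is actually exporting over NFS is to query it from any Linux box on the same network. A sketch, assuming 192.168.2.100 stands in for the array's address (the standard rpcbind/NFS client tools must be installed on the querying machine):

```
# Are the NFS RPC services registered on the NAS?
rpcinfo -p 192.168.2.100

# What is it exporting, and to which clients?
showmount -e 192.168.2.100
```

If showmount lists nothing, the problem is on the array side regardless of what the ESXi hosts do.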

Also, if you have unmounted your NFS share while trying to fix the issue, make sure you mount it EXACTLY the same way. If you originally mounted it as 192.168.2.100/NFS1, mount it the same way again; if you change the server, path, or name in any way, the host will mount the NFS share under a different identifier, and VMware will treat it as a different datastore even though it holds the same data.
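
On ESXi 5.x the mount can be scripted so it is repeated identically every time. A sketch using the example address and share name from above (substitute your own server, export path, and label):

```
# List the NFS mounts the host currently knows about (vSphere 5 esxcli namespace):
esxcli storage nfs list

# Re-add the share exactly as before -- same server, same export path, same label:
esxcli storage nfs add --host=192.168.2.100 --share=/NFS1 --volume-name=NFS1
```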

Hope this has helped.
