PatrickWE
Contributor

Inactive NFS storage

Hi,

How do you reactivate an inactive NFS datastore in ESX 3.0.1?

We had a complete (planned) site power shutdown yesterday, and when booting back up, it seems we forgot to bring our NFS server back before the ESX servers were powered on.

The NFS server is now up and everything is running, but in our ESX server storage configuration, NFS1 shows as inactive. We don't want to delete the storage and recreate it. How do we activate it without shutting down our ESX boxes again?

Thanks very much!!!

13 Replies
ZMkenzie
Enthusiast

You don't need to shut down anything; you can disconnect and reconnect your storage this way (from the service console):

esxcfg-nas --del nfsname

esxcfg-nas --add nfsname --host nfsserver --share nfsexportname
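If you want to double-check the result, esxcfg-nas can also list the configured mounts and show whether each one is actually mounted (same console):

esxcfg-nas -l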

You can also do this from the VirtualCenter interface: just remove the NFS datastore and then re-add it.

That said, ESX 3 should reconnect NFS storage automatically when it becomes available again, so this is a little bit strange.

Hope this helps.

mphodge
Enthusiast

What do you do when the NFS mount has gone inactive and there was a VM up and running using that storage space? I get this...

# esxcfg-nas -d nfsname

Error performing operation: NFS Error: Unable to Unmount filesystem: Busy

In the VI client it's marked inactive, and I can't get a console to the VM because the config file does not exist!

Also, I successfully removed and re-added another NFS mount from the console, but in the GUI it's still marked as inactive for some reason.

dominic7
Virtuoso

Perhaps your firewall settings need to be tweaked? Maybe the reboot set them back to a state where they are blocking your NFS mounts?
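If you want to check, ESX 3.x has esxcfg-firewall on the service console. A quick look would be something like the following (the nfsClient service name is my assumption; esxcfg-firewall -s lists the service names your build actually knows):

esxcfg-firewall -q

esxcfg-firewall -q nfsClient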

stvkpln
Virtuoso

At the command line as root, type esxcfg-nas -r. That'll reconnect to the NFS store. Then type esxcfg-nas -l, which will show you if the NFS volume is remounted properly.

You may also need to perform a service mgmt-vmware restart to get VirtualCenter to see things properly.
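Putting that together, the whole sequence from the service console would look something like this (run as root; a sketch, check the output of each step before moving on):

esxcfg-nas -r

esxcfg-nas -l

service mgmt-vmware restart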

-Steve
Texiwill
Leadership

Hello,

Also, verify that the NFS server is working fine. When a disconnect is made, ESX tries to communicate with the NFS server. If it cannot reach the server at all, the disconnect should not fail; but if the communication is garbled, perhaps it will.

Verify what you are receiving on your NFS server.
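A quick way to sanity-check the server side from the service console or any Linux client (nfsserver is an illustrative hostname, and showmount may not be installed everywhere):

rpcinfo -p nfsserver

showmount -e nfsserver

You want to see nfs and mountd registered in the rpcinfo output, and your export listed by showmount.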

An NAS/NFS datastore is mounted over a VMkernel port group, so it does not require changes to the Service Console firewall.

Best regards,

Edward


--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
mphodge
Enthusiast

I thought you had to change the firewall to allow iSCSI, but not NFS?

It was my VirtualCenter VM running on the NFS share, so I could not shut the VM down through VC. Also, logging into the ESX host directly, I could not shut down the VM because its .vmx file could not be accessed; it was on the shared NFS storage that could not be reached!

So I had to SSH into the ESX host and kill the vmware process, which is not good because the VM could end up corrupted... but this was the only way I could remove and reconnect the NFS share without it complaining 'already in use'.

My NFS server was definitely working fine because another client was connecting to the exports ok.

Texiwill
Leadership

Hello,

I see your issue; it sounds like there was an outstanding NFS lock on the share due to the running VM. The lock was most likely on the VMDK. Any outstanding lock would keep the share from being remounted. I think you did the only thing you could do in this situation.

Best regards,

Edward

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
flintj
Contributor

What process did you kill? I am stuck in the same situation.


mphodge
Enthusiast

I killed the process running my VM. You can find out which one using ps -ax and grepping for your VM name...
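For example (the VM name and PID are illustrative; kill -9 bypasses a clean shutdown, so as noted above it risks corrupting the VM):

ps -ax | grep -i myvmname

kill -9 12345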

titaniumlegs
Enthusiast

Sorry about the late reply. I was looking for something else and stumbled on this thread. Did you ever try:

esxcfg-nas -r

?

Share and enjoy!

Peter

If this helped you, please award points! Or beer. Or jump tickets.
okossuth
Contributor

I'm having the same problem. I have two OpenFiler NAS boxes using DRBD and Heartbeat for HA that serve NFS shares to two ESX 3.5 servers.

When I fail over between the NAS boxes, the failover works correctly, but the ESX servers lose their connections to the NFS datastores, and I can't delete the datastores using esxcfg-nas -d because the VMs stored on the shares are running.

Is there any way to force the deletion of an NFS datastore from the ESX console?

Thanks

rcn
Contributor

Looks like a new patch was released for ESX 3.5 that fixes inactive NFS mounts after a reboot, especially if the NFS mount is referenced via a DNS hostname:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=101766...

Specific info from the KB:

After restarting an ESX host, some NFS datastores might appear inaccessible through the user interface although they are mounted and accessible through the command line interface. This issue is more likely to occur if an NFS datastore is configured using a host name or a fully qualified domain name (FQDN) that requires DNS name resolution.
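Until the patch is applied, one workaround consistent with that description (a sketch; the label, IP, and export path are illustrative, and the datastore must have no running VMs on it or the delete will fail) is to re-create the mount using the server's IP address so no DNS resolution is involved:

esxcfg-nas -d nfs1

esxcfg-nas -a -o 192.168.10.5 -s /vol/nfs1 nfs1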

KosmicSuture
Contributor

I have a setup with a Windows 2012 server configured as a NAS for my VMware servers (it's all free equipment, so I do what I can with it). If I'm ever doing maintenance where I have to take down the Windows NAS and the ESX boxes together, the NAS never reconnects automatically. I've left it for hours and it just won't connect.

Diztorted's suggestion worked perfectly for me. The NAS reconnected immediately and was accessible as it should be. I'm not entirely sure why it isn't reconnecting by itself, but I'm completely happy with this workaround, especially since I don't restart the system often. 🙂
