On our ESX3 host systems, we keep seeing the warning "maximal mount count reached, running e2fsck is recommended."
Why are we seeing this message? I'm fairly sure I know what it's getting at, but why is this occurring? Does everyone see this or is it just us? Our ESX server volumes (the non-VMFS ones) are all running off Smart Array 6i controllers, if that helps.
Apparently this error is harmless according to tech support. Just run tune2fs to fix it.
It's a Linux thing: it means the filesystem has been mounted/unmounted X number of times. When you create a filesystem, a flag is set so that once the slice has been mounted X times, it will be fsck'd for you automatically when the server comes up.
Also, to check the filesystem's settings (mount count, when it was last mounted, and so on), run:
tune2fs -l /dev/
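If you'd rather poke at this without touching a real partition, the same inspection works against a scratch file-backed image. A sketch; /tmp/test.img is just an illustrative throwaway file, not a real ESX device:

```shell
# Build a tiny throwaway ext2 image so tune2fs can be tried safely
# (no root needed; the path is illustrative, not a real ESX partition).
dd if=/dev/zero of=/tmp/test.img bs=1M count=4 2>/dev/null
mke2fs -q -F /tmp/test.img

# Show the superblock bookkeeping: current mount count, maximum
# mount count, last mount time, and so on.
tune2fs -l /tmp/test.img | grep -Ei 'mount (count|time)'
```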
I know, but why is ESX mounting and unmounting the filesystem constantly? I just want to verify that this is expected behavior and not something weird our servers are doing.
I'm using the 6i array controller and do not see that.
How often does it happen? I am assuming you see this in /var/log/messages? Can you post a few lines of the error? Does it indicate which filesystem? If the box is staying up and the error happens regularly, it may be something small.
I haven't even looked at /var/log/messages, but it shows up on the Primary console screen. It doesn't reference which filesystem.
What I pasted is pretty much the entirety of the error; here it is verbatim:
EXT2-fs warning: maximal mount count reached, running e2fsck is recommended.
that's it, just over and over again on the service console.
We have 3 DL385 G1s and this is happening on all three of them. We've done reinstalls of the ESX OS and it has always happened. I just thought I would ask if anyone else had seen it.
This is the response I received from a Dell technician on how to clear the error:
----
You need to run the command tune2fs on the partition. I just performed tune2fs on a mounted system without incident on a RHEL4 system. This article would have been for RedHat 6-ish.
To find which partition needs to have the mount count changed use
df
to list all the partitions then run
dumpe2fs /dev/YourDev | grep Max
to see what the max mount count for a given partition is. Replace YourDev with sda1, sda2, etc. Change that count using the following
tune2fs -c 100 /dev/YourDev
where 100 is some arbitrarily large number. The mount count need not be the same for all partitions. In fact, they should all be different to keep from having a check forced on all partitions at the same time.
If you run e2fsck, be sure to unmount the partition first.
----
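To follow the staggering advice above, here is a minimal sketch. It runs against scratch file-backed images so it can be tried safely; on a real host you would instead loop over the devices that df lists (the /dev names are assumptions about your layout):

```shell
# Give each partition a different maximal mount count so a boot never
# forces fsck on all of them at once. File-backed images stand in for
# real devices here; substitute your own /dev entries from df.
count=100
for img in /tmp/part1.img /tmp/part2.img /tmp/part3.img; do
    dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null
    mke2fs -q -F "$img"
    tune2fs -c "$count" "$img" >/dev/null   # set the max mount count
    count=$((count + 20))                   # stagger: 100, 120, 140
done

# Confirm the counts really differ.
for img in /tmp/part1.img /tmp/part2.img /tmp/part3.img; do
    dumpe2fs "$img" 2>/dev/null | grep -i 'Maximum mount count'
done
```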
I've tried this successfully on my ESX3 server, hope this helps. I still haven't found out what the original mount count settings were, so I have no idea if this is normal, just the result of the recent upgrade, or an actual issue.
FYI...
The Maximum mount count on all partitions of my ESX 3.0 servers is -1.
The Maximum mount count on all partitions of my ESX 2.5.x servers is -1.
The Maximum mount count on all partitions of my FC4 servers is -1.
The Maximum mount count on all partitions of my RHEL 3.x servers is -1.
These are apparently the default settings, as I have not changed them.
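For reference, -1 is also the value that `tune2fs -c -1` writes: it disables the mount-count-triggered check entirely. A sketch on a throwaway file-backed image (the path is illustrative):

```shell
# -1 means the mount-count-triggered fsck is disabled entirely.
# /tmp/chk.img is a throwaway file-backed image, not a real partition.
dd if=/dev/zero of=/tmp/chk.img bs=1M count=4 2>/dev/null
mke2fs -q -F /tmp/chk.img
tune2fs -c -1 /tmp/chk.img >/dev/null
tune2fs -l /tmp/chk.img | grep 'Maximum mount count'
```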
I am getting this as well when I put load on my system, which is connected to an HP EVA; the cluster is also connected to a DMX3000. The DMX doesn't drop off, but the EVA does... very odd.
My 4 ESX servers are connected to an EMC DMX3; they are all booting off the SAN and I get this message on one of them. It started occurring whilst I was extracting the ESX 3.0.1 upgrade to /var/update.
I suspect that the partition ran out of space. I will run this tune2fs and see whether it fixes the problem.
I am getting these when I VMotion a lot of servers. Any luck resolving your issue?
Nope, all three of our boxes continue to have these error messages on their consoles. So far we haven't seen any side effects but I'm still wondering why they're coming up.
We are getting this error on both of our ESX 3.01 boxes...Dell 2950s. VMware tech support has suggested a format of one of the partitions and then an ESX re-install but I am extremely reluctant to do that...I can recreate the error by vmotioning a vm.
It's a known problem in e2fsprogs.
ESX 3.0.1 uses e2fsprogs-1.32-15.1 and it was fixed in e2fsprogs-1.35-7.1.
Fri Apr 09 2004 Thomas Woerner <twoerner@redhat.com> 1.35-7.1
- fixed 'check after next mount' for filesystems with maximum mount count -1
(#117109)
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=117109
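To check which e2fsprogs build a given host carries (a sketch: the rpm query assumes an rpm-based service console, with the tool's own version banner as a fallback; 1.35-7.1 or later has the fix):

```shell
# Print the installed e2fsprogs version.
if command -v rpm >/dev/null 2>&1 && rpm -q e2fsprogs >/dev/null 2>&1; then
    rpm -q e2fsprogs
else
    # tune2fs prints its version banner as the first line when run
    # with no arguments, which works on non-rpm systems too.
    tune2fs 2>&1 | head -n 1
fi
```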
Cheers.
We are getting this error on both of our ESX 3.01 boxes...Dell 2950s. VMware tech support has suggested a format of one of the partitions and then an ESX re-install but I am extremely reluctant to do that...I can recreate the error by vmotioning a vm.
Take anything Dell says and discard it.
A quick check shows that my ESX native partitions (the OS) are all set to -1 (in the UNIX world this usually means "never"), except /vmimages, which is mounted on various VMs as they are created. I used tune2fs to set that count to 100.