VMware Cloud Community
jpoling
Enthusiast

Shutdown Issues?

I am trying to perform some maintenance on an ESX 3.0.1 host. When I initiate a shutdown, it gets to "Unmounting File Systems [OK]" and seems to hang; it's been at that point for 45+ minutes now.

Has anyone seen this? We have about 19 VMFS file systems, if that matters.

Any help or insight on what to do next is greatly appreciated.

Jeff

20 Replies
lholling
Expert

Hi There

Did you shut down the VMs prior to going into maintenance mode? If you didn't, it may be trying (unsuccessfully) to shut them down before entering maintenance mode.

You might be able to use the VI Client, point it directly at the server, and try to "help" it shut down, or get an ESX service console prompt and shut the VMs down manually using vmware-cmd.
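The manual route Leonard mentions can be sketched roughly as below. This is a hypothetical helper, not from the thread, assuming the ESX 3.x service console (where `vmware-cmd` lives); outside the console it just says so and does nothing:

```shell
# Hypothetical sketch: soft-stop every registered VM from the ESX 3.x
# service console. The function name is made up for illustration.
shutdown_all_vms() {
    if ! command -v vmware-cmd >/dev/null 2>&1; then
        echo "vmware-cmd not found: run this from the ESX service console"
        return 0
    fi
    # "vmware-cmd -l" lists the .vmx paths of all registered VMs;
    # "stop trysoft" asks the guest to shut down cleanly and falls
    # back to a hard stop if the guest does not respond.
    vmware-cmd -l | while read -r vmx; do
        echo "stopping: $vmx"
        vmware-cmd "$vmx" stop trysoft
    done
}
shutdown_all_vms
```

Running this before issuing the host shutdown should rule out a stuck guest as the cause of the hang.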

Leonard...

---- Don't forget if the answers help, award points
hmartin
Enthusiast

I'm having the same problem. I did shut down all of the VMs before trying to restart ESX (via the 'shutdown -h now' command). However, I shut down the guests using VirtualCenter, right-clicking each VM and selecting Shutdown Guest. I then waited for all of them to show as shut down in the interface, and even opened the console on a couple of VMs to confirm it.

One of the hosts has been at the 'unmounting file systems' step for about two hours, and another for about an hour and ten minutes. The local disks have been showing heavy activity the whole time, so hard-powering-off the server is not something I really want to do. Any thoughts are appreciated.

lholling
Expert

Hi There

It probably won't work, but have you tried suspending it?

vmware-cmd /path/to/your_vm.vmx suspend

It may get the VM to stop for you.

Leonard...

---- Don't forget if the answers help, award points
Reedy2642
Contributor

I am also having the same problem. I put the host into maintenance mode, and all the running VMs VMotioned off using DRS. I waited about 30 minutes before issuing the reboot command, as I was doing maintenance. It just hangs on 'unmounting file systems' when shutting down.

Hardware is 3 x IBM x3550 with an EqualLogic PS100E iSCSI SAN; everything is running nicely apart from this.

Any help would be gratefully received.

Tom

jpoling
Enthusiast

I never really resolved this. I attributed it to having a multi-node scalable system and simply shut the hosts down.

All VMs were shut down prior to entering maintenance mode, and the host shutdown was attempted through VirtualCenter.

kgouldsk
Contributor

3.0.2-52542

I'm having the same problem, and I haven't experienced it on any of the other systems I admin, which number just short of 20. This is one of the few that use iSCSI, and I've noticed some of the software-iSCSI flakiness here that is referenced in other threads. Related? As in other posts, when I scan for new volumes it finds them, but it won't let me add storage (nothing shows up in the wizard) and doesn't seem to realize a file system is there if the volume is already formatted. Meanwhile, another iSCSI volume on the same storage array is up and working fine.

I have 2 ESX servers and 2 storage servers, and neither would work until I rebooted the servers; now they're fine.
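For what it's worth, a rescan from the service console sometimes brings a reluctant software-iSCSI volume into view without a full reboot. A hedged sketch, assuming ESX 3.x; `vmhba40` is only a placeholder adapter name, so check your own storage adapter list first:

```shell
# Hypothetical sketch (adapter name is a placeholder): rescan the software
# iSCSI HBA and re-read VMFS volumes from the ESX 3.x service console.
rescan_sw_iscsi() {
    if ! command -v esxcfg-rescan >/dev/null 2>&1; then
        echo "esxcfg-rescan not found: run this from the ESX service console"
        return 0
    fi
    esxcfg-rescan vmhba40   # placeholder: substitute your sw iSCSI adapter
    vmkfstools -V           # refresh the VMFS volume list without rebooting
}
rescan_sw_iscsi
```

If the volumes still don't show in the Add Storage wizard after this, the reboot workaround described above may be the only option.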

I wonder if anyone else having the iSCSI problem is also having the shutdown problem. Are any of you running iSCSI?

Mike_P
Enthusiast

We have 5 Dell PE1950s with an EqualLogic PS100E iSCSI SAN, running the latest patched 3.5 kernel. I'm not seeing iSCSI issues, but three out of the five hung at the Unmounting Filesystems stage when I patched them the other week (all VMs had been DRSed off). As the [OK] message appeared on the same line as Unmounting Filesystems, the umount itself appeared to have worked.

On a machine that worked, the next line was Rebooting, which vanished after a second or so as the box rebooted. For the hung machines a manual reset did the trick; no connection from VC and no commands on the machine itself will work, as the system is to all intents and purposes completely shut down at that point.

smoke455
Contributor

Just out of curiosity: has anyone tried Alt+F12 at the console to see if there are any error messages flying past? Every time mine hangs on shutdown I can do this and see a scrolling screen of SCSI warnings that it has lost the connection to the iSCSI target and can't unmount it.
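If the Alt+F12 console scrolls too fast to read, the same messages should also land in the vmkernel log. A small sketch, assuming the default ESX 3.x log path; elsewhere it just reports that the log isn't there:

```shell
# Sketch: show the most recent SCSI/iSCSI warnings from the vmkernel log
# (/var/log/vmkernel is the usual ESX 3.x location).
show_scsi_warnings() {
    log=/var/log/vmkernel
    if [ ! -r "$log" ]; then
        echo "no readable vmkernel log at $log"
        return 0
    fi
    # -i catches both "SCSI" and "iSCSI"; tail keeps the newest entries
    grep -i scsi "$log" | tail -n 20
}
show_scsi_warnings
```

Capturing the exact warning text this way would also make it much easier to paste into a support ticket.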

REOScotte
Contributor

I'm getting the same issue. And yes, Alt+F12 shows the screen full of SCSI warnings. It's pretty jittery, but as best I can tell, it's the same error over and over.

Mike_P
Enthusiast

Yep, I hadn't tried the Alt+F12 console until now, and mine is showing copious warning messages alternating across various CPUs (0, 4, 5, 6 and 7; no sign of 1, 2 or 3) on a dual quad-core box.

swiftangelus
Enthusiast

Hello all,

I can also confirm that there appears to be an issue after patching the server. We are using the iSCSI software initiator, and the host was rebooting without a problem until I applied all of the latest patches to the ESX 3.5 host. I can see the errors after using Alt+F12.

Has anyone fixed the issue, or logged a call with VMware support?

Thanks

Xo1iN
Contributor

Same issue here:

I migrated 4 VMs off, shut down and then migrated 2 more, and put the host into maintenance mode. The upgrade to 3.5 Update 1 went great until the reboot step.

Alt+F12 showed a lot of information; here's a screenshot. LeftHand is my SAN manufacturer, incidentally.

I hard-powered off the host since it was past the upgrade portion. I had a host do this when I went from 3.0 to 3.0.2 as well.

Update: just upgraded host 2 of 3. So far both 1 and 2 have had this issue.

rvi
Contributor

Same issue here,

I put the host (ESX 3.5 U2) into maintenance mode and then pressed reboot.

After 30 minutes it still hadn't rebooted.

Had to hard power down the server to get it up and running again.

Trying server 2 in a few minutes (the 2nd host has the same problem).

Also using a LeftHand SAN on this side.

mcsenerd
Contributor

We are experiencing this issue as well. The only path to resolution for us thus far has been to hard-reset the machines. I must say that we've had nothing but storage issues from applying Update 3 and many of the subsequent patches this time around. (I even have a fully patched host, including the Update 3 LUN-locking bug fix, that was locking several of our CX380 LUNs and causing widespread VM outages.)

James S.

mumford
Contributor

So, two years in the making and no resolution to this besides a hard shutdown? No response from a moderator or a VMware employee? That is frustrating.

AndreTheGiant
Immortal

Have you tried updating your ESX to at least 3.0.2?

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
mumford
Contributor

I am actually on ESX 3.5.0.

AndreTheGiant
Immortal

I am actually on ESX 3.5.0 U3.

Sorry, I was asking jpoling, who (in the first post) was using an old version.

Many of these shutdown issues can also be related to incorrect ACPI support.

Have you tried doing a BIOS and motherboard firmware update?

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
heartbeat
Contributor

I am having the same problem right now.

I'm running VMware ESX Server 3.5.0, build 153875 (Update 3 and some patches).

IBM x3650, dual quad-cores, 34 GB RAM, with dual-port QLogic FC HBAs. Storage is on an IBM DS4700.

I've been in contact with VMware support about another problem that might be related. I was asked to change the Service Console memory allocation from 512 to 800 MB. During the reboot of the first host (of 6) I got this situation.

The mail describing the problem has already been sent, so if there is a solution I'll share it later on.

Cees
