VMware Communities
continuum
Immortal

Please fix the suspend feature

I had no problems in the version 8 beta, but since 8.0 the suspend feature has gotten worse with every new update.

Suspending a VM that has been idle for hours really should not take 15 minutes or more and slow all the other running VMs to a crawl.

Please consider popping up a warning with text like this when the suspend button is used:

"Are you sure that you want to suspend the VM ?
A regular shutdown may be much faster and will not affect other running VMs"


2012-02-10T16:44:51.733+01:00| vcpu-0| I120: MainMem: Write full memory image (SF=0 L=0 SZ=1 RM=0 U=0 TM=0).
2012-02-10T16:44:51.764+01:00| vcpu-0| I120: Progress 0% (none)
2012-02-10T16:44:52.280+01:00| vcpu-0| I120: Progress 1% (none)
2012-02-10T16:44:52.546+01:00| vcpu-0| I120: Progress 2% (none)
2012-02-10T16:44:52.593+01:00| vcpu-0| I120: Progress 3% (none)
2012-02-10T16:44:52.608+01:00| vcpu-0| I120: Progress 4% (none)
2012-02-10T16:44:52.624+01:00| vcpu-0| I120: Progress 5% (none)
2012-02-10T16:44:52.639+01:00| vcpu-0| I120: Progress 6% (none)
2012-02-10T16:44:52.655+01:00| vcpu-0| I120: Progress 7% (none)
2012-02-10T16:44:52.671+01:00| vcpu-0| I120: Progress 8% (none)
2012-02-10T16:44:52.686+01:00| vcpu-0| I120: Progress 9% (none)
2012-02-10T16:44:52.702+01:00| vcpu-0| I120: Progress 10% (none)
2012-02-10T16:44:52.733+01:00| vcpu-0| I120: Progress 11% (none)
2012-02-10T16:44:52.764+01:00| vcpu-0| I120: Progress 12% (none)
2012-02-10T16:44:52.764+01:00| vcpu-0| I120: Progress 13% (none)
2012-02-10T16:44:52.796+01:00| vcpu-0| I120: Progress 14% (none)
2012-02-10T16:44:52.827+01:00| vcpu-0| I120: Progress 15% (none)
2012-02-10T16:44:52.843+01:00| vcpu-0| I120: Progress 16% (none)
2012-02-10T16:44:52.858+01:00| vcpu-0| I120: Progress 17% (none)
2012-02-10T16:44:52.874+01:00| vcpu-0| I120: Progress 18% (none)
2012-02-10T16:44:52.952+01:00| vcpu-0| I120: Progress 19% (none)
2012-02-10T16:44:54.577+01:00| vcpu-0| I120: Progress 20% (none)
2012-02-10T16:44:55.030+01:00| vcpu-0| I120: Progress 21% (none)
2012-02-10T16:44:55.796+01:00| vcpu-0| I120: Progress 22% (none)
2012-02-10T16:44:55.874+01:00| vcpu-0| I120: Progress 23% (none)
2012-02-10T16:44:56.999+01:00| vcpu-0| I120: Progress 24% (none)
2012-02-10T16:44:57.561+01:00| vcpu-0| I120: Progress 25% (none)
2012-02-10T16:44:59.171+01:00| vcpu-0| I120: Progress 26% (none)
2012-02-10T16:44:59.905+01:00| vcpu-0| I120: Progress 27% (none)
2012-02-10T16:45:02.764+01:00| vcpu-0| I120: Progress 28% (none)
2012-02-10T16:45:04.514+01:00| vcpu-0| I120: Progress 29% (none)
2012-02-10T16:45:09.296+01:00| vcpu-0| I120: Progress 30% (none)
2012-02-10T16:45:16.858+01:00| vcpu-0| I120: Progress 31% (none)
2012-02-10T16:45:20.030+01:00| vcpu-0| I120: Progress 32% (none)
2012-02-10T16:45:22.046+01:00| vcpu-0| I120: Progress 33% (none)
2012-02-10T16:45:33.577+01:00| vcpu-0| I120: Progress 34% (none)
2012-02-10T16:46:32.030+01:00| vcpu-0| I120: Progress 35% (none)
2012-02-10T16:47:24.843+01:00| vcpu-0| I120: Progress 36% (none)
2012-02-10T16:48:16.108+01:00| vcpu-0| I120: Progress 37% (none)
2012-02-10T16:49:11.593+01:00| vcpu-0| I120: Progress 38% (none)
2012-02-10T16:50:20.014+01:00| vcpu-0| I120: Progress 39% (none)
2012-02-10T16:51:43.577+01:00| vcpu-0| I120: Progress 40% (none)
2012-02-10T16:52:11.749+01:00| vcpu-0| I120: Progress 41% (none)
2012-02-10T16:52:57.358+01:00| vcpu-0| I120: Progress 42% (none)
2012-02-10T16:53:24.811+01:00| vcpu-0| I120: Progress 43% (none)
2012-02-10T16:53:30.499+01:00| vcpu-0| I120: Progress 44% (none)
2012-02-10T16:53:36.983+01:00| vcpu-0| I120: Progress 45% (none)
2012-02-10T16:53:41.296+01:00| vcpu-0| I120: Progress 46% (none)
2012-02-10T16:53:45.858+01:00| vcpu-0| I120: Progress 47% (none)
2012-02-10T16:53:55.202+01:00| vcpu-0| I120: Progress 48% (none)
2012-02-10T16:54:12.530+01:00| vcpu-0| I120: Progress 49% (none)
2012-02-10T16:54:49.608+01:00| vcpu-0| I120: Progress 50% (none)
2012-02-10T16:55:29.436+01:00| vcpu-0| I120: Progress 51% (none)
2012-02-10T16:56:14.280+01:00| vcpu-0| I120: Progress 52% (none)
2012-02-10T16:57:07.514+01:00| vcpu-0| I120: Progress 53% (none)
2012-02-10T16:57:57.061+01:00| vcpu-0| I120: Progress 54% (none)
2012-02-10T16:58:42.139+01:00| vcpu-0| I120: Progress 55% (none)
2012-02-10T16:59:19.499+01:00| vcpu-0| I120: Progress 56% (none)
2012-02-10T16:59:51.718+01:00| vcpu-0| I120: Progress 57% (none)
2012-02-10T17:00:25.233+01:00| vcpu-0| I120: Progress 58% (none)
2012-02-10T17:00:58.124+01:00| vcpu-0| I120: Progress 59% (none)
2012-02-10T17:01:29.139+01:00| vcpu-0| I120: Progress 60% (none)
2012-02-10T17:02:03.561+01:00| vcpu-0| I120: Progress 61% (none)
2012-02-10T17:02:32.436+01:00| vcpu-0| I120: Progress 62% (none)
2012-02-10T17:03:06.171+01:00| vcpu-0| I120: Progress 63% (none)
2012-02-10T17:03:40.077+01:00| vcpu-0| I120: Progress 64% (none)
2012-02-10T17:04:32.343+01:00| vcpu-0| I120: Progress 65% (none)
2012-02-10T17:05:00.296+01:00| vcpu-0| I120: Progress 66% (none)
2012-02-10T17:05:32.874+01:00| vcpu-0| I120: Progress 67% (none)
2012-02-10T17:06:22.718+01:00| vcpu-0| I120: Progress 68% (none)
2012-02-10T17:07:03.546+01:00| vcpu-0| I120: Progress 69% (none)
2012-02-10T17:07:41.905+01:00| vcpu-0| I120: Progress 70% (none)
2012-02-10T17:08:30.108+01:00| vcpu-0| I120: Progress 71% (none)
2012-02-10T17:09:09.843+01:00| vcpu-0| I120: Progress 72% (none)
2012-02-10T17:09:49.702+01:00| vcpu-0| I120: Progress 73% (none)
2012-02-10T17:10:28.233+01:00| vcpu-0| I120: Progress 74% (none)
2012-02-10T17:11:08.389+01:00| vcpu-0| I120: Progress 75% (none)
2012-02-10T17:11:48.546+01:00| vcpu-0| I120: Progress 76% (none)
2012-02-10T17:12:22.671+01:00| vcpu-0| I120: Progress 77% (none)
2012-02-10T17:12:54.468+01:00| vcpu-0| I120: Progress 78% (none)
2012-02-10T17:13:28.889+01:00| vcpu-0| I120: Progress 79% (none)
2012-02-10T17:14:04.796+01:00| vcpu-0| I120: Progress 80% (none)
2012-02-10T17:14:32.968+01:00| vcpu-0| I120: Progress 81% (none)
2012-02-10T17:15:08.327+01:00| vcpu-0| I120: Progress 82% (none)
2012-02-10T17:15:43.405+01:00| vcpu-0| I120: Progress 83% (none)
2012-02-10T17:16:17.593+01:00| vcpu-0| I120: Progress 84% (none)
2012-02-10T17:16:48.983+01:00| vcpu-0| I120: Progress 85% (none)
2012-02-10T17:17:22.749+01:00| vcpu-0| I120: Progress 86% (none)
2012-02-10T17:17:53.389+01:00| vcpu-0| I120: Progress 87% (none)
2012-02-10T17:18:28.468+01:00| vcpu-0| I120: Progress 88% (none)
2012-02-10T17:19:08.983+01:00| vcpu-0| I120: Progress 89% (none)
2012-02-10T17:19:39.764+01:00| vcpu-0| I120: Progress 90% (none)
2012-02-10T17:20:15.546+01:00| vcpu-0| I120: Progress 91% (none)
2012-02-10T17:20:48.296+01:00| vcpu-0| I120: Progress 92% (none)
2012-02-10T17:21:20.061+01:00| vcpu-0| I120: Progress 93% (none)
2012-02-10T17:21:56.561+01:00| vcpu-0| I120: Progress 94% (none)
2012-02-10T17:22:33.218+01:00| vcpu-0| I120: Progress 95% (none)
2012-02-10T17:22:56.030+01:00| vcpu-0| I120: Progress 96% (none)
2012-02-10T17:22:56.358+01:00| vcpu-0| I120: Progress 97% (none)
2012-02-10T17:22:56.358+01:00| vcpu-0| I120: Progress 98% (none)
2012-02-10T17:23:30.139+01:00| vcpu-0| I120: Progress 99% (none)
2012-02-10T17:24:11.561+01:00| vcpu-0| I120: Progress 100% (none)
2012-02-10T17:24:11.874+01:00| vcpu-0| I120: Progress 101% (none)


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
18 Replies
continuum
Immortal

I am not the only one seeing this problem.
For some users, suspend may take up to a full hour!!!

http://communities.vmware.com/thread/345031?tstart=0


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
louyo
Virtuoso

Oddly enough, I don't see this with a Linux host (not that it is fast by any means). It just took about 1.5 minutes to suspend an SBS 2011 server with 5 GB of memory and a 200 GB hard drive. Suspending does really slow down response times in the other VMs, though.

It seems to me that, prior to the last upgrade (8.0.2), it would peg one host processor at 100% but would move from one processor to another. Now it does not push any processor above about 60%. I show 8 processors; the system has 2 quad cores. I never (well, almost never) assign more than 1 processor to a VM.

Maybe they are working on this? It seems a little better for me. Ubuntu 11.04 on the host. The VM hard drive is on a striped RAID, separate from the host OS, so it is reasonably fast (hdparm shows 194 MB/sec).
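
(For anyone who wants to compare numbers, that figure comes from a plain sequential-read timing along the lines of the command below - the device name is only an example and will be whatever your array is called.)

sudo hdparm -t /dev/md0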

Lou,

0 Kudos
continuum
Immortal

The ultra-slow suspend times don't occur every time a VM is suspended - I guess I see the problem in 7 out of 10 attempts.

The only pattern I see so far:
the more vmdks a VM has,
the larger the vmdks are,
the more VMs are running at the same time,
the more probable an ultra-long suspend time becomes.


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
jessepool
VMware Employee

Hi continuum,

One problem is that you have mainmem.useNamedFile = false set in your config. This will definitely cause suspend/resume to be slow. That said, it would be nice to not have a horrible suspend/resume experience when this config option is set.

Can you try setting mainMem.writeZeros = "TRUE" in the .vmx config file? This will make sure we don't use a sparse file. I'm wondering if that's causing more problems than it's worth.
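
For reference, the two lines would sit together in the .vmx roughly like this (shown only for illustration - writeZeros is the one to add):

mainmem.useNamedFile = "FALSE"
mainMem.writeZeros = "TRUE"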

btw, any updates on,

  http://communities.vmware.com/message/1844691

?

Thanks.

0 Kudos
continuum
Immortal

Hi Jesse

I am experimenting with various combinations of the parameters:

prefvmx.useRecommendedLockedMemSize =
prefvmx.minVmMemPct =    
mainmem.useNamedFile =
mainMem.writeZeros =

This one,

mainMem.writeZeros = "TRUE"

seems to shorten the suspend times, but I don't have consistent results yet.
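
For completeness, the whole set I am testing right now looks roughly like this - the values are just my current experiment, not a recommendation; I put the prefvmx.* lines in the host-wide config.ini and the mainMem lines in the .vmx:

prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
mainmem.useNamedFile = "FALSE"
mainMem.writeZeros = "TRUE"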


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
tcw01
Contributor

I've been having the same problem with BOTH suspend and shutdown. I'm running VMware Workstation 8.0.2 build-591240. Times vary from 30 minutes to 1.5 hours!!

I am running only one VM, on a laptop with 16 GB of memory and an i7 processor!!

One thing I have noticed from the log files is that it processes a large part of the suspend/shutdown very quickly, then goes into a state of limbo for the majority of the remaining time, then resumes and finishes.

e.g. (cf. vmware.log, attached)

Shutdown starts at 16:47:22 and runs until 16:56:08.505Z, then resumes at 17:27:50.329Z and runs until 17:27:50.6232Z, when it finishes.

2012-02-19T16:56:07.466Z| vmx| I120: Closing disk scsi0:1
2012-02-19T16:56:08.248Z| vmx| I120: scsi0:0: numIOs = 64296 numMergedIOs = 3422 numSplitIOs = 3 ( 0.1%)
2012-02-19T16:56:08.248Z| vmx| I120: Closing disk scsi0:0
2012-02-19T16:56:08.505Z| vmx| I120: AIOWIN32C: asyncOps=65631 syncOps=296 bufSize=280Kb fixedOps=1681 sgOps=62545 sgOn=1
2012-02-19T16:56:08.505Z| aioCompletion| I120: AIO thread processed 65631 completions
2012-02-19T16:56:08.505Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-19T17:27:50.329Z| vmx| I120: WORKER: asyncOps=2604 maxActiveOps=2 maxPending=0 maxCompleted=1
2012-02-19T17:27:50.331Z| WinNotifyThread| I120: WinNotify thread exiting
2012-02-19T17:27:50.590Z| vmx| I120: CheckpointDeleteOnDiskState: Deleted checkpoint file 'E:\Vmware\Virtual Machines\Windows 7 x64\Windows 7 x64-7cf4a79d.vmss'.

(cf. vmware-2.log, attached)

2012-02-18T19:02:15.036Z| vmx| I120: Closing disk scsi0:1
2012-02-18T19:02:15.315Z| vmx| I120: scsi0:0: numIOs = 235373 numMergedIOs = 12176 numSplitIOs = 2 ( 0.0%)
2012-02-18T19:02:15.315Z| vmx| I120: Closing disk scsi0:0
2012-02-18T19:02:15.717Z| vmx| I120: AIOWIN32C: asyncOps=221144 syncOps=296 bufSize=256Kb fixedOps=2089 sgOps=216087 sgOn=1
2012-02-18T19:02:15.717Z| aioCompletion| I120: AIO thread processed 221144 completions
2012-02-18T19:02:15.717Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-18T19:38:05.364Z| vmx| I120: WORKER: asyncOps=10476 maxActiveOps=2 maxPending=0 maxCompleted=1
2012-02-18T19:38:06.399Z| WinNotifyThread| I120: WinNotify thread exiting

0 Kudos
continuum
Immortal


2012-02-19T16:50:07.138Z| vmx| I120: scsi0:0: Command READ(10) took 1.481 seconds (ok)
2012-02-19T16:50:07.138Z| vmx| I120: scsi0:0: Command READ(10) took 1.481 seconds (ok)
2012-02-19T16:50:07.138Z| vmx| I120: scsi0:0: Command READ(10) took 1.480 seconds (ok)
2012-02-19T16:50:07.138Z| vmx| I120: scsi0:0: Command READ(10) took 1.480 seconds (ok)

....

you should shrink your vmdks


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
tcw01
Contributor

<<you should shrink your vmdks>>

How will that help?

I have 2 vmdks.

1. Windows 7 x64.vmdk is 39.8GB

2. Windows 7 x64-0.vmdk is 29.9GB

My point is, what is happening between these two log entries (vmware-2.log, lines 1361 & 1362):

2012-02-18T19:02:15.717Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-18T19:38:05.364Z| vmx| I120: WORKER: asyncOps=10476 maxActiveOps=2 maxPending=0 maxCompleted=1

36 minutes of nothing!!

and (vmware.log) lines 1754 & 1755

2012-02-19T16:56:08.505Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-19T17:27:50.329Z| vmx| I120: WORKER: asyncOps=2604 maxActiveOps=2 maxPending=0 maxCompleted=1

31 minutes of nothing!!

Shutdown started at roughly 16:56 and eventually finished at 17:27.

From another VM log (vmware.log, attached), for the shutdown of a VM (which was left running overnight to close down):

This VM has 2 vmdks - 37.1 GB & 4.50 GB (hardly enormous!!)

2012-02-17T23:33:26.788Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-18T01:01:18.840Z| vmx| I120: WORKER: asyncOps=64061 maxActiveOps=2 maxPending=0 maxCompleted=1

88 minutes of nothing!!

0 Kudos
continuum
Immortal

> how will  that help?

Suspend and shutdown times depend on the overall performance of the vmdk.

Your vmdk looks like it has not been shrunk for a long time - if ever.


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
tcw01
Contributor

I ran the VMware Standalone Converter on all my VMs two weeks ago, as most of them had become very bloated with snapshots. This seemed to have the effect of consolidating (shrinking?) the number of vmdks and other files.

Attached are snapshots of directories 'H:\Vmware\Virtual Machines\Windows 7 x64' before (snapshot_2 to snapshot_4) and 'E:\Vmware\Virtual Machines\Windows 7 x64' after (snapshot_1) conversion.

I can appreciate that using H:\Vmware\Virtual Machines\Windows 7 x64 to suspend or shutdown would take considerably longer to process.

VM Windows 7 x64 has 2 drives: Drive C 39.9 GB (1.20 GB spare) and Drive D 29.9 GB (3.27 GB spare).

How do I shrink them any more? Is VMware Standalone Converter the right way to shrink vmdks?

Would it be worth reverting to an earlier version of VMWorkstation?

I still can't get my head around the reason for the long delays in shutdown (36, 31 & 88 minutes) between the AIOWIN32 and WORKER log entries.

0 Kudos
continuum
Immortal

"shrink" is a function of the vmware-tools
you should use it regularly on VMs that are heavily used

you should also use it if you see log entries like te ones I posted above
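
If you prefer not to click through the Tools control panel inside the guest, there is also a command-line entry point in a Windows guest - roughly like the line below; the install path and drive letter are only examples, so check what your Tools version actually provides:

"C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" disk shrink C:\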


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
tcw01
Contributor

I haven't tried to shrink using VMware Tools.

Attached is a log of a NEW VM installation and shutdown, and a snapshot of the new VM directory.

Shutdown started about 19:23 and finished at about 19:52 - 29 minutes!!

0 Kudos
continuum
Immortal

Try the following line in the .vmx or in config.ini:

mainMem.writeZeros = "TRUE"

That line seems to help a little bit.

just curious ...
use the defrag tools of your host and find out if the vmem file is fragmented into thousands of fragments
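
If you want a per-file fragment count rather than a full defrag report, the Sysinternals Contig tool can report that - rough examples below; the paths are only illustrations taken from your logs, and check Contig's own help for the exact analyze switch in your version:

contig -a "E:\Vmware\Virtual Machines\Windows 7 x64\*.vmem"
contig -a "E:\Vmware\Virtual Machines\Windows 7 x64\*.vmdk"

A high fragment count on either the vmem or the vmdk files would explain a lot of the extra IO time.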


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
jessepool
VMware Employee

Based on this log snippet,

2012-02-17T23:33:26.549Z| vmx| I120: Closing disk scsi0:0
2012-02-17T23:33:26.788Z| vmx| I120: AIOWIN32C: asyncOps=324983 syncOps=254 bufSize=384Kb fixedOps=7688 sgOps=284452 sgOn=1
2012-02-17T23:33:26.788Z| aioCompletion| I120: AIO thread processed 324983 completions
2012-02-17T23:33:26.788Z| vmx| I120: AIOWIN32: asyncOps=0 syncOps=0 bufSize=0Kb delayed=0 fixed=0 sgOp=0 sgOn=1
2012-02-18T01:01:18.840Z| vmx| I120: WORKER: asyncOps=64061 maxActiveOps=2 maxPending=0 maxCompleted=1
2012-02-18T01:01:18.840Z| WinNotifyThread| I120: WinNotify thread exiting

We're either slow to clean up AIOWIN32 or WORKER; it's impossible to tell which one from the log, though. After we print the line starting with AIOWIN32, we close some handles and exit threads that are related to IO. The WORKER line is doing less: we're cleaning up and waiting for the WinNotifyThread thread.

Since you have other logs indicating that IO is slow, I suspect that closing handles and cleaning up our async IO code is taking a long time. Can you confirm that your host has plenty of free disk space and that the vmdk files are not fragmented? Also, delete the existing snapshot so that we can remove that variable. I'll open a bug report and attach your logs.

0 Kudos
tcw01
Contributor

<<use the defrag tools of your host and find out if the vmem file is fragmented into thousands of fragments>>

I can't see any vmem files in any of the VM directories.

<<and that the vmdk files are not fragmented?>>

How can I do this? Maybe with that Sysinternals Contig tool mentioned above?

<<Can you confirm that your host has plenty of free disk space>>

Drive E (where all the VM directories are) has 614GB free

0 Kudos
continuum
Immortal

Just looked at your screenshots again - do you use compressed NTFS?
Don't do that!!!


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

0 Kudos
tcw01
Contributor

<<Just looked at your screenshots again - do you use compressed NTFS?
Don't do that!!!>>

No!!

0 Kudos
EdP2
Enthusiast

A rhetorical question.

If 'shrink' is so important, why is it so darned difficult to do?

Surely it should be better automated than it is. To shrink an Ubuntu guest requires the following steps:

a) Removal of any snapshots, which can itself be time-consuming.

b) Launch the guest, search around (and not find toolbox), eventually find that vmware-toolbox needs to be launched in root mode and needs a 'sudo su' incantation.

c) Prepare for shrinking --- what this is I've no idea but it results in a series of dire warnings about running out of disk space, and takes forever.

d) Although you have 'prepared for shrinking', it then asks whether you actually want to shrink the disk! Woe betide you if you give the wrong answer, as you will have to do the prepare-for-shrink all over again.

e) Hours later it completes.

Surely this should all be done at the host level and require zero user intervention once it is launched and it has been agreed that the snapshots will be zapped. In my view this is something for an overnight run from the host that can be scheduled on some sort of regular basis. As it stands, this routine is not fit for purpose.
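
(For what it's worth, the Tools apparently also ship a command-line entry point that might at least make this scriptable from inside the guest - something like the line below; I have not tried it myself, so treat the command name and syntax as unverified:

sudo vmware-toolbox-cmd disk shrink /

If it works, it would lend itself to exactly the kind of scheduled overnight run I am asking for.)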

0 Kudos