I'm using VMware Server 1.0.5 on a Linux machine running Slamd64 (a 64-bit port of Slackware). I have the same setup at home and it works fine on an AMD dual-core box (using VMware Server 1.0.4), but at work, where the problem occurs, the machine is an IBM X3400: quad-core Xeon E5410 2.33GHz with 12MB cache, 10GB RAM, and 4x 146GB 15k RPM disks configured as RAID 5 on an Adaptec RAID card.
After installing VMware Server, I created a VM running Windows 2003 Standard Edition (32-bit) with 2GB RAM and an 80GB disk without pre-allocation. It took AGES to set up, but that's not the problem!
After the VM has been idling for about 5-10 minutes, with zero applications installed and not even an update being downloaded, the disks go 100% busy with about 400kB-1MB being written, yet the filesystem usage isn't growing! CPU and memory utilization are minimal. This disk activity grinds the whole machine to a halt! Stopping the VM takes about half an hour, if I'm lucky!
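To pinpoint which process is actually generating the writes, I've been using a rough sketch over /proc (this assumes the kernel has per-process I/O accounting enabled, i.e. /proc/&lt;pid&gt;/io exists; run as root to see other users' processes):

```shell
# List the five processes that have written the most bytes so far.
# write_bytes is the cumulative byte count each process has caused
# to be sent to the block layer.
for pid in /proc/[0-9]*; do
    [ -r "$pid/io" ] || continue
    wb=$(awk '/^write_bytes/ {print $2}' "$pid/io" 2>/dev/null)
    name=$(awk '{print $2}' "$pid/stat" 2>/dev/null)
    [ -n "$wb" ] && echo "$wb $name (pid ${pid#/proc/})"
done | sort -rn | head -5
```

Running this during one of the unresponsive spells shows the vmware-vmx process at the top of the list on my box, but I'd be curious if anyone knows what it is writing.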
I suspected the aacraid driver for the RAID card, so I ran this test:
# time dd if=/dev/zero of=./32GB bs=1024k count=32768
The result: 32GB written in 110 seconds, roughly 300MB/s, and the system stayed responsive during the whole test. So raw throughput through the controller looks fine.
I spent days going through kernel changelogs looking for anything that could have changed disk behaviour, but I couldn't find anything.
I ran the "nmon" tool while the system was unresponsive and logged everything to a file.
I have attached a zip file with the following:
dmesg, lspci, lsusb, lsscsi, df, free, lsof (during unresponsive times), messages, syslog, debug, the nmon log, /var/log/vmware & vmware-mui, vmware*.log and the vmx file.
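For what it's worth, one workaround I've seen suggested for heavy idle disk I/O with VMware Server on Linux hosts (untried here, so treat it as an assumption on my part) is to stop the guest's RAM from being backed by a file in the VM directory, by adding this line to the .vmx:

```
mainMem.useNamedFile = "FALSE"
```

If the constant writes turn out to be VMware flushing that memory-backing file, this should move them into anonymous memory instead. I'd welcome confirmation from anyone who has hit this.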
I hope someone can shed some light on what's going on here!
Thank you in advance
logs.tar.gz 211.4 K