Hi, I'm new to VMware, though I've been investigating and running tests for the last 9 days. We've downloaded and installed ESXi on an old xSeries 226: dual Xeon 3.2 GHz, 6 GB RAM, 4 SCSI 146 GB 10K HDs with a ServeRAID 6i card (hardware RAID with battery).
We've created a VM from scratch and installed a Win 2003 Server Professional, and we have a problem: disk speed is very slow. I'm talking about copying a 1.5 GB file from one folder to another on the same virtual disk at 5 MB/s, dropping from there to 1 MB/s or less, while doing the same thing on any standard desktop with SATA does the job at 10-12 MB/s.
Of course the VM has VMware Tools installed and every firmware is up to date. We've tried RAID 1E (a kind of RAID 1+0) and RAID 5 on the host, and we've measured disk speed by copying files, using IOMeter, and timing MS SQL Server queries on this virtualized server versus our current "in production" non-virtualized server, etc.
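For anyone who wants to repeat the timed-copy test in a comparable way, here is a minimal sketch of how we script it. The paths and the 64 MB size are placeholders (we used a 1.5 GB file); whole-second timing is rough but enough to spot a 5 MB/s vs. 30 MB/s difference:

```shell
# Create a test file, then time a folder-to-folder copy on the same disk
# and report approximate throughput. Paths and size are placeholders.
SRC=/tmp/copytest_src.bin
DST=/tmp/copytest_dst.bin
dd if=/dev/zero of="$SRC" bs=1048576 count=64 2>/dev/null  # 64 MB test file
START=$(date +%s)
cp "$SRC" "$DST"
sync                                   # flush cached writes before stopping the clock
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1      # avoid divide-by-zero on fast copies
echo "copied 64 MB in ${ELAPSED}s (~$((64 / ELAPSED)) MB/s)"
```

Inside a Windows guest the equivalent is just timing the copy by hand, but a scripted version keeps runs consistent across VMs.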
Of course we've searched in the KB, the community and elsewhere but weren't able to find anything.
I've checked the write-back status on the host's RAID: the cache is in use.
Lastly, I installed an XP VM on the same host, and while running the other VM (the 2003 one) I performed the easy test (file copy): it went over 30 MB/s!!! (I checked that using the VMware Infrastructure Client Performance tab.) But then it went down to 3.6 MB/s when I tried to copy another file. The only other moment I got good performance was while IOMeter was initializing its test disks on the Server 2003; it also went up to 40 MB/s...
Both VMs are optimized for speed in Windows (disk write cache enabled), and this is driving me crazy. Any suggestion is welcome; let me know if it would help to post any other data. I can test (or at least try to test) anything you suggest, even going into ESXi unsupported mode (i.e. working on the Linux console).
I also tried setting them to have only one processor, as I read there could be an issue there...
There is another weird thing I've noticed: even if I turn off every VM on the host, the disks seem to show heavy activity (judging by their LEDs), yet the Performance tab in the Infrastructure Client shows no usage.
IBM xSeries 226
VMWare ESX Server 3i
6 GB Ram
4x 146 GB SCSI HDs (tried RAID 5 and RAID 1E)
VM1:
Win 2003 Server Professional Edition w/VM Tools
4 GB Ram
50 GB Disk
VM2:
Win XP SP2 Professional Edition w/VM Tools
1 GB Ram
20 GB Disk
Thanks a lot in advance!
Bye,
Matias
ps: of course I will acknowledge (with points) any useful answer, and sorry for my English...
Are you using the console to copy, or are you using RDP?
I have found that performance increases if I RDP into the machines rather than doing things at the console. Using RDP I am getting a consistent 30 MB/s from both Server 2003 and XP Pro.
Are your management LAN and the VM LAN on the same network card?
I've used both methods, console as well as Remote Desktop: both yielded more or less the same figures.
I'm using only 1 NIC, so yes, both LANs share the same card.
Thanks for answering
Bye,
Matias
Not an expert on the HALs, but I would try installing 2003 with one processor from the start.
Not sure what happens to performance when you use a multi-proc HAL and then remove one processor.
Ok, we'll try this today and let you know the results.
Thanks!
But... the Win XP VM was in fact installed from scratch with only one proc, and it has the same poor performance...
Also, where are you copying from and to? If you are copying from one VM disk on one physical disk to another VM disk on the same physical disk, you can see some slowdown depending on the speed of the single physical disk you're using.
You're right, but I'm copying from the RAID 5 to the same RAID 5. This should of course be slower than copying from one disk (or set of disks) to another physical disk (or set of disks), but it should be half or a third of the speed, not 1/10 or 1/20. Anyway, I think we might have found the problem: it seems like the array (or the host) was building something, and that was the cause of both symptoms, the disk lights that never went out and the slow speed. Is there anything a fresh ESXi install needs to build on the storage?
I have a very similar setup and a very similar problem: ESXi 3.5 running on a dual Xeon Supermicro board, 3.2 GHz, 12 GB RAM, LSI RAID card. Copying from IDE to SATA RAID is incredibly slow. Copying over the network from a VM to a physical machine on a 100 Mbit full-duplex network is incredibly slow. I know the machine isn't exactly the newest or fastest, but it should be much faster than this. I only have 4 VMs (2 Windows) and they are just incredibly slow when transferring files.
Hi, after a hard day at the office, making thousands of tests, working from dusk till dawn, here's the report... (who said server virtualization couldn't use some drama...)
We're mostly at a loss on this issue; we've tried lots of things, among which:
We checked that the host itself works fine with the RAID by copying a 2.0 GB file with "cp" on the host console (unsupported mode): 16.5 MB/s, so roughly 30 MB/s of total disk I/O bandwidth.
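For anyone wanting to repeat the console check, a raw sequential-write test with `dd` gives a comparable number without needing a source file. This is a sketch: the /tmp path and the 64 MB size are examples, and on ESXi you would point the output at a file under /vmfs/volumes/<your datastore> instead:

```shell
# Time a raw sequential write to estimate datastore throughput.
# /tmp is used here as an example path; substitute your datastore.
OUT=/tmp/ddtest.bin
START=$(date +%s)
dd if=/dev/zero of="$OUT" bs=1048576 count=64 2>/dev/null  # 64 x 1 MB blocks
sync                                   # make sure the data actually hit disk
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1      # guard against sub-second runs
echo "wrote 64 MB in ${ELAPSED}s (~$((64 / ELAPSED)) MB/s)"
```

Remember to delete the test file afterwards; datastore space on a 4-disk RAID 5 fills up fast.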
Then we ran the same test several times: we copied a large (1.0 GB) file inside the guest OS using Total Commander and checked the speed, copying the file from a disk to another folder on the same disk. The disks sit on storage that is a RAID 5 with 4 SCSI 10K 146 GB drives, as described earlier. We tried this:
- working with 1, 2, 3, 4 or 5 simultaneous VMs
- We played with processors, memory, and OS (2003 Server and XP)
- We played with the BIOS settings of the VM, trying to mirror what was configured in the BIOS of the host
- Always before testing speed we took a look at the RAID and checked
there was no activity in the disks (from the RAID card, the VMWare or
whatever....)
- Playing with the BusLogic vs. LSI Logic virtual SCSI adapters
- Using the virtual Disk as Independent (always with persistent selected)
- Played with disk shares values, but I think there's no problem there because I get slow speeds even with only 1 VM running
We still have to check:
- copying to a shared virtual disk
- letting the recently booted VM "rest" for a couple of minutes before doing the test
- Playing with the Hyperthreading settings (I smell something here.... not sure what)
- Building 2 RAID 1 arrays, placing the VMs on different disks, and copying from one array to the other (which should be at least double the speed if HDs are really the bottleneck here)
After that we're out of ideas... and when that happens we'll have to switch to superstitious mode (hang garlic over the host, bring a priest to the data center, etc.). We've read plenty of the technical papers and manuals, and I've played with the console mode of the host but got no results at all. I've also discovered that my W2003 builds a big swap file on startup (in the Summary tab I can see 3.1 GB of host memory use and 5.3 GB of guest OS memory usage).
Does anyone have any suggestions?
We're willing to use VMware (I'm convinced this is an issue on our side and the product itself is stable), but we're kind of disappointed with the results we're getting. I'm not sure whether this is because we're running it on an old server, because of our inexperience, or just because we're having very bad luck!
I'm at about the same point, MatiasG...
We're not too far from an Indian reservation, so I was thinking of having a medicine man come and bless my Adaptec 2020ZCR RAID controller.
I also think that ESXi is a good product, but unless one of their developers sees this post, I'm afraid my company is going to have to move on to another virtualization solution... 5 MB/s disk performance is not something we can learn to deal with.
I feel a little better knowing that I'm not alone with this problem...
One thing to keep in mind is that ESXi doesn't cache any writes in memory, so a battery-backed write cache on the controller can make a significant difference.
We've managed to get things working. Mainly we dealt with memory and the way it was assigned to the VMs. Let me re-read your post tomorrow, and I'll let you know what changes I would make to your config. Apart from that, if you want to fine-tune, read about alignment, stripe size, etc.
I'll write more tomorrow.
Bye
Yes, our controller has a battery-backed cache, and it (of course) has write cache enabled on the controller... I think our problem was something regarding how memory was assigned to the VMs; as soon as I can check this I'll post the info.
Thanks!
Sledgezfx,
Can you please post the GB of memory and GHz of your host and the same for each of your VMs (assigned, reserved, and limit), and please indicate whether you have selected affinity with certain processors and how many processors are in each VM. Thanks!
I think that changing that can improve your HD performance a lot. Then there are several further considerations for performance, such as RAID 1+0 rather than 5, stripe size, partition alignment on the VM side (not necessary if created with the VIC), and in Windows (using diskpart.exe)... And of course you MUST have VMware Tools installed on the guest OSes...
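To make the diskpart step concrete, here's a minimal alignment sketch. The disk number, drive letter, and the 64 KB value are placeholders (match the alignment to your controller's stripe size), and the align= parameter needs the Windows Server 2003 SP1 or later diskpart. Run it against a fresh data disk before formatting:

```
rem Example diskpart script - save as align.txt, run: diskpart /s align.txt
rem Disk 1 and the 64 KB alignment are placeholders; check "list disk"
rem and your RAID stripe size before running.
select disk 1
create partition primary align=64
assign letter=E
exit
```

Pre-Vista Windows otherwise starts the first partition at a 31.5 KB offset, so guest I/O can straddle stripe boundaries and cost you extra reads on the array.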
Bye
here is all my server stats
more in next post
continued....
Hopefully that gives you all the info you need... please remember to look back at my previous post. I've been trying to solve this for so long that most people who make suggestions now are suggesting stuff I've already tried.
Thanks for your help
SledgezFX
Hi, I took a look at what you posted, and I also re-read your original post with all the specs. I need something else: the amount of memory assigned to each VM in its settings (what you see when you right-click a VM and choose "Edit Settings...").
I need that for all 3 VMs...
Thanks!
Does anyone have any suggestions?
Firmware of the controller up to date? Try a different ESX install/build? Try a reinstall of ESX? Try holding a cross in one hand, lift your leg slightly, hold your breath, and try again?
