This might be a question you've heard too often (apologies if so), but I believe our situation isn't standard.
We have about 30 Windows XP desktop PCs of various vintages and 6 or so servers (Linux/MS). About 25% of the desktop machines run a standalone installed program which requires a high-spec XP machine with very fast i/o. A recent change in the software requires at least 2GB RAM per machine (a database is read into memory). These machines are not constantly running this software; in fact it's probably only a very small percentage of the time. The rest of the time is office tasks and programming (mainly Perl).
I think (but don't know) that we could switch to a VMware server and use standard office-spec workstations instead. The server would need to have very fast i/o and a suitable number of cores. (We currently use an AMD X2 4400+ with 2 WD Raptor HDDs on a caching RAID controller to provide adequate performance on a workstation.)
My question is really: Is this a suitable task for VMware? If so, any software/hardware suggestions would be very welcome.
So what exactly are you proposing to virtualise? The 6 servers and those desktops using this particular i/o intensive program (and then access those virtual desktops via RDP)?
When you say "a VMWare server", do you mean the free server product or are you including ESX in your thoughts?
Some more information would be useful such as some metrics from those desktops (e.g. disk i/o figures).
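To make the question concrete: disk figures on the XP boxes can be gathered with the built-in `typeperf` tool. A rough sketch (counter names taken from the standard PhysicalDisk counter set; sample interval and count are arbitrary):

```
typeperf "\PhysicalDisk(_Total)\Disk Bytes/sec" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5 -sc 120 -o disk_io.csv
```

Running that while the memory-hungry application loads its database would show the peak throughput and queue depth the virtual host's storage would have to match.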
Thanks for your questions, I will try to provide more useful info:
Currently we plan to virtualize the workstations only. The 6 servers might follow if things work well, probably using separate hardware. We will use whatever product does the job; I'm assuming we will need the Standard version of ESX Server, and I'm not sure if Enterprise offers any advantages to us.
i/o-wise, we tried to provide the fastest drives available at the time (WD Raptors are 10,000rpm SATA drives) attached to a RAID card with 256MB cache — chosen not for the RAID features but for the caching. Things have changed somewhat and there are better alternatives now; SAS maybe? It is important that workstations are not hobbled by poor i/o. Each VM would need at least 2GB RAM, and would probably be happier with 4GB. Last time I asked, the software did not make use of multiple processors, but it would be great to have multiple processors appear as a single 'super' processor.
The new version of the software we use reads a database into memory and then accesses that, so the i/o speed issue is seen as an Achilles heel by the developer (too) and might be less of an issue if we can allocate RAM to a VM on the fly?
So we are looking at whether a VM server can do better than high-powered workstations. The fact that we can create 'test' and 'development' versions is very useful to us. Currently each workstation is its own 'cluster', in the derogatory sense.
Currently, I'm not even sure ESX Server with remote desktop clients is feasible, so even a yay or nay in that respect will be helpful.
The Enterprise licence is of no use over the Standard one if you only have one host.
As to whether you can use ESX for this application: there does not seem to be any reason why not, but the server hardware you might have to throw at it to cope with the workload could be more expensive than you are budgeting for.
If the database being read into memory is the same across the virtual workstations, then ESX should get some benefit from its own caching algorithms, thereby reducing the load on its disk subsystem.
And once the db is in the virtual workstations' memory spaces, access to it should fly, on the assumption that (a) the host has enough RAM and (b) the VMs are configured appropriately - for example, the minimum RAM parameter could be used and "ballooning" could be turned off. This will ensure that the virtual workstations always have the db in immediately accessible RAM and that the host has not paged any memory pages out to disk.
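As a sketch of what that configuration looks like, these are the relevant per-VM options (shown here as .vmx entries; the values are illustrative, and the same settings can be made through the VI client's resource settings rather than by hand-editing the file):

```
memsize = "4096"            # RAM presented to the guest, in MB
sched.mem.min = "4096"      # reservation equal to memsize: the host guarantees
                            # this much physical RAM, so guest pages are never
                            # swapped out by ESX
sched.mem.maxmemctl = "0"   # cap the balloon driver at 0 MB, effectively
                            # disabling ballooning for this VM
```

With a full reservation per VM, the host must genuinely have (number of VMs × reservation) plus overhead in physical RAM, which is the main hardware cost driver to budget for.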