Hello everyone,
I am currently running VMware Server 1.0 on a few Windows hosts with multiple BSD/Linux and Windows guests underneath them. We have exceeded our current hardware's limits and are starting to notice considerable lag on our guest OSes. I'm looking to buy two new servers and one NAS server. We don't have a huge budget, so I was looking at a local vendor to build two dual-processor quad-core servers with 8 GB of RAM and two small hard drives in RAID 1, using the NAS for all our storage.
Should we use iSCSI to connect the two processing servers to the NAS, or a direct connection to the server itself? Does anyone recommend a different server setup? We are not completely set on this particular scheme.
Any advice is appreciated.
Are you staying with VMware Server or moving to ESX?
Tom Howarth
VMware Communities User Moderator
Are you planning to use all the features, such as HA, DRS, VMotion, and VCB as well? If you need these features, you are required to have a shared LUN, accessible via iSCSI, NFS, or FC. You can use a cheap iSCSI solution for this, such as SANmelody, Xtravirt Virtual Server (free), or LeftHand Networks' VSA; check them out for more details. If this is for a small shop, you can also use OpenFiler or FreeNAS as your SAN storage for low I/O usage. How many VMs are you planning to virtualize? And make sure all designs are as redundant as possible: ESX hosts, HBA multipathing, and management networks as well.
Technically, you can have 8 VMs per core, so with quad core that is 8 x 4 = 32 VMs, but with only 8 GB of RAM those CPUs would be wasted. If you can crank it up to 16 GB for each host, and you don't need to worry about future expansion, a single host will run 16 VMs well if each is dedicated 1 GB. The Service Console/VMkernel already uses 1 GB of RAM, so on average you can run about 8-14 VMs per ESX host with 16 GB. If you have a high-speed LAN, then NFS is also a good choice, but performance might be a bit lower.
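To make the sizing math above concrete, here is a quick back-of-the-envelope sketch. The 8-VMs-per-core rule of thumb and the 1 GB reserved for the Service Console/VMkernel come from the post; the 1 GB-per-VM allocation is an assumption for illustration:

```python
# Rough ESX host sizing: capacity is the minimum of the CPU-based
# and the RAM-based limit, whichever binds first.

def max_vms(cores, ram_gb, vms_per_core=8, gb_per_vm=1.0, overhead_gb=1.0):
    """Estimate how many VMs a host can run.

    vms_per_core: rule-of-thumb consolidation ratio (8 VMs per core).
    overhead_gb:  RAM reserved for the Service Console / VMkernel.
    """
    cpu_limit = cores * vms_per_core
    ram_limit = int((ram_gb - overhead_gb) // gb_per_vm)
    return min(cpu_limit, ram_limit)

# A quad-core host with 8 GB of RAM is RAM-bound, not CPU-bound:
print(max_vms(cores=4, ram_gb=8))   # 7 VMs (RAM limited, CPU could do 32)
# Bumping the same host to 16 GB roughly doubles capacity:
print(max_vms(cores=4, ram_gb=16))  # 15 VMs
```

This makes it obvious why the extra RAM matters more than the extra cores at this scale.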
If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
Hello,
For iSCSI or NFS, NetApp seems to have the best devices. However, you may want even more bandwidth, and unless you are using 10G cards in your servers, FC SAN currently provides the highest bandwidth (up to 8 Gb/s now).
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354, as well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
I'm planning on moving to ESX Server. We are currently running 4 FreeBSD, 4 Windows, and 2 Linux (Zimbra) servers on our VMware Server configuration, and I think we're going to outgrow it in the near future, if we haven't already. For the time being we will be using ESX in its simplest setup: no HA, DRS, or VMotion. A NetApp is going to be out of our budget for now, so I'm thinking of just having a local vendor build a hardware-compatible system. Now my question is: would I run VMware on the NAS too, for VMFS?
Thanks for the advice
If that's the case, then you just need ESX 3.5-compatible hardware; a Dell PE 2950, IBM x3650, or HP server would do it. Make sure you have enough RAM and enough local disk storage as well, so use RAID 5 across as many disks as possible, e.g. 5 disks + 1 hot spare; that would be enough. If you want to use the existing local disk space, SANmelody, LeftHand's VSA, or even Xtravirt VS is good.
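A quick sketch of the usable space you would get from that RAID 5 + hot spare layout; it assumes equal-size disks, and the 300 GB disk size is just an example, not from the post:

```python
# Usable capacity of a RAID 5 array with a hot spare: one disk's
# worth of capacity goes to parity, and the spare holds no data.

def raid5_usable_gb(total_disks, disk_gb, hot_spares=1):
    data_disks = total_disks - hot_spares - 1  # minus spare(s) and parity
    if data_disks < 2:
        raise ValueError("RAID 5 needs at least 3 active disks")
    return data_disks * disk_gb

# Six 300 GB disks configured as a 5-disk RAID 5 plus 1 hot spare:
print(raid5_usable_gb(6, 300))  # 1200 GB usable
```

So roughly two-thirds of the raw capacity survives the parity and spare overhead in a six-disk setup.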
If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant