Oracle11g
Contributor

Factors to look out for when setting up a new server for ESXi

Hi, I've just come into contact with VMware recently and I'm still hoping to learn more.

I would like to know what factors I need to consider when setting up a single host server to cater for 10 Windows XP VM users. How do I decide on CPU speed, cores, RAM, and hard disk space?

9 Replies
rpcblast
Contributor

That depends a lot on what kind of applications those users will be running. Also, are you going to be running any server VMs as well, or just those 10 XP VMs?

For a while I ran a home/home-office system with 25 server VMs (mostly Server 2003), with just one VM server and one storage server, all whitebox (I don't recommend this in production, but just to give you some frame of reference on specs).

The ESXi server was a dual quad-core Opteron (2350) 2.0GHz system with 32GB RAM. The storage box was iSCSI (Openfiler) with an LSI 84016E RAID controller and three arrays (4 x 150GB 10k Raptors, 4 x 500GB SATA, 4 x 1TB SATA). Some of the VMs I was running included Cisco Call Manager, three Exchange boxes (2003, 2007, 2010), a few SQL and web servers, domain controllers, Citrix, etc. Now, I was hardly servicing any users, but just the nature of these applications places a load on the server.

Also, are you looking to scale much beyond those 10 users? Are you possibly looking at VDI?

golddiggie
Champion

Things you'll need to factor in include how much RAM each VM (or View configuration) will use, how much storage is going to be allocated (and how), how many vCPUs you're planning to give the VMs/View systems, etc.

Personally, I'd go with dual-socket Xeon-powered servers (at least the 5500 series, if not the 5600 series) that have the better bandwidth (such as the 5520 processors and above).

A baseline server, in my experience, is something akin to (if not exactly) a Dell R710 with dual E5520 Xeons, 24GB RAM (with room to grow into more memory), two 73GB SAS hard drives (mirrored, for ESX to reside upon only), and at least one dual-port Intel Gb NIC on top of the onboard quad Gb NIC it comes with (make sure you enable TOE for iSCSI at build time). Two dual-port, or one quad-port, Gb NIC would be a better choice right out of the gate.

For storage, I'd go with something along the lines of the EqualLogic PS6000X or PS6000XV SANs. If the budget allows, get the PS6000XV, since it contains 15k RPM SAS drives. You can start out small on the SAN, since it's to your advantage to stack more of the arrays together (gaining performance enhancements each time you add another array to the same group).

I would also get a dedicated, fully managed (with full CLI) ProCurve Gb (24 or 48 port) switch to run all the iSCSI/vMotion/HA traffic through. Even though you're going with just one host now, plan the build around adding more servers later. The 2900 series is an excellent choice, as are the 2510G models (the 2900s offer 10Gb interconnects, which the 2510Gs do not).

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Hosted Systems Engineer IV (VMware environment)
Brewing beer again!
rpcblast
Contributor

You can also use Cisco switches ^^

BTW, with the premium (I think) version of View you get Composer, which lets you use linked clones. That can save space.
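To get a feel for the savings, here is a back-of-the-envelope comparison for a 10-desktop pool. The base image size matches the 15GB mentioned later in this thread, but the per-clone delta-disk size is an illustrative assumption, not a View default:

```python
# Rough, illustrative comparison of full clones vs. linked clones.
# delta_per_clone_gb is an assumed figure, not a View default.
num_desktops = 10
base_image_gb = 15        # size of the parent Windows XP image
delta_per_clone_gb = 2    # assumed per-desktop delta disk with linked clones

full_clones_gb = num_desktops * base_image_gb
linked_clones_gb = base_image_gb + num_desktops * delta_per_clone_gb

print(full_clones_gb)    # 150
print(linked_clones_gb)  # 35
```

The delta disks grow over time, so the savings shrink until the clones are refreshed, but the headline difference is still large.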

golddiggie
Champion

Sure, you could live on Cisco island... although usually the premium you pay for that little name doesn't really make it worth it, unless you're already heavily into Cisco gear and get some kind of discount. I would steer clear of Dell network switches. You get a lifetime warranty on the HP ProCurve switches (Cisco products too, it appears); not so with Dell (you purchase x years of support when you buy the switch, five years max).

I'm a firm believer in getting the right equipment to do the job. I do like Dell servers (especially the R710 model) as well as their workstation class desktops/towers and laptops. Not a fan of HP desktops or laptops (no opinion of their current workstation class offerings), not much opinion of the servers, but do like their switches (a lot).

Years ago, I was an AMD processor fan. Then again, I was also a fan of Apple systems... then I stopped drinking that damned Kool-Aid. AMD lost it, for me, when Intel decided it had had enough of them and went to a more frequent product update schedule. AMD also lost out when they decided to purchase ATI (ATI lost out in my eyes, too). With what I tend to run, for software and hardware, the Intel chips are just the better option.

I also used to build systems (using AMD processors) back when you could do it for less (or at least the same) money than you'd pay for a pre-built system. Those days are gone now, since by the time you line up everything that comes with systems these days, you're probably not saving anything. Plus, a lot of the systems you purchase have a single warranty that covers everything. Unlike when you build a system... you might have x years for the mobo, y years for the CPU, z years for the hard drives, etc. Of course, with any issue that comes up, the manufacturers can always point fingers at another's product as the cause, making repairs more time consuming. Just not worth the hassle anymore. More time using, less time getting my hands shredded on cases (unless you get one of the quality cases, which throws the savings from building the system right out the window).

Oracle11g
Contributor

Hi golddiggie / rpcblast

Thanks for your reply.

Indeed, I'm trying to size up a basic VDI using just one physical host server, which will run just 10 Windows XP VMs and some word-processing software. Nothing too fancy for a start. If each VM were assigned 1GB RAM and 15GB of disk space, does that mean I need a minimum of 10GB RAM and 150GB of space in total? On top of that, of course, I need to cater for ESXi 4 itself, as well as spare capacity for more users.
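As a sanity check, the arithmetic in the question can be sketched like this. The overhead allowances below are rough assumptions for illustration, not VMware-published figures:

```python
# Minimal sizing sketch for 10 Windows XP VMs at 1GB RAM / 15GB disk each.
# The overhead allowances are rough assumptions, not VMware figures.
vms = 10
ram_per_vm_gb = 1
disk_per_vm_gb = 15

host_overhead_gb = 2                 # assumed RAM set aside for ESXi itself
total_ram_gb = vms * ram_per_vm_gb + host_overhead_gb

# Each running VM also needs datastore space for its swap file
# (roughly equal to the VM's RAM) plus logs/snapshots headroom.
extra_per_vm_gb = ram_per_vm_gb + 1  # assumed vswap + logs allowance
total_disk_gb = vms * (disk_per_vm_gb + extra_per_vm_gb)

print(total_ram_gb)   # 12
print(total_disk_gb)  # 170
```

So the 10GB / 150GB figures are the floor, not the target; the hypervisor, per-VM swap files, and growth headroom push the real requirement higher.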

Will a quad-core (E5500 series) with 14GB RAM and 4 x 146GB drives (RAID 5) do the job?

Pardon me again, as I'm really new to this wonderful technology. Would I need ESXi 4 (including vSphere and vCenter) for just a single-host setup? Or would I just need one of them?

golddiggie
Champion

You'll also need to run things like a vCenter Server, the View servers, etc. These can run on the same host server, so you need to account for their resource requirements as well.

I would go with 24GB (not 14GB, which isn't a valid size for the 5500 series Xeons)... Servers using the 5500/5600 series Xeons typically populate memory in multiples of three slots (so six 4GB sticks)...

For RAID 5, I would go with six drives. Otherwise, you'll take a performance hit compared to a RAID 10 array (even with four drives). What speed drives (assuming those are SAS drives) are you going after? You can get 300GB 15k RPM SAS drives at a reasonable price now. Four of those, in a RAID 10 array, should work well.
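A quick way to see the trade-off is to compare usable capacity and rough write throughput for the two layouts. The per-drive IOPS figure below is an assumed ballpark for a 15k SAS drive, and the write penalties (4 for RAID 5, 2 for RAID 10) are the classic rules of thumb, not measured numbers:

```python
# Usable capacity (ignoring formatting overhead) and rough write IOPS
# for RAID 5 vs RAID 10. All figures are assumptions, not measurements.
DRIVE_GB = 146
IOPS_PER_DRIVE = 175  # assumed ballpark for a 15k SAS drive

def raid5_usable_gb(drives, size_gb=DRIVE_GB):
    # RAID 5 loses one drive's worth of capacity to parity
    return (drives - 1) * size_gb

def raid10_usable_gb(drives, size_gb=DRIVE_GB):
    # RAID 10 mirrors pairs, so half the raw capacity is usable
    return drives // 2 * size_gb

def write_iops(drives, penalty, per_drive=IOPS_PER_DRIVE):
    # Rule of thumb: effective write IOPS = raw IOPS / write penalty
    return drives * per_drive // penalty

print(raid5_usable_gb(4), write_iops(4, penalty=4))   # 438 175
print(raid10_usable_gb(4), write_iops(4, penalty=2))  # 292 350
print(raid5_usable_gb(6), write_iops(6, penalty=4))   # 730 262
```

Which is why four drives in RAID 10 can out-write six in RAID 5 on small random writes, even though RAID 5 gives you more usable space.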

I would also go with a dual quad-core server. Keep in mind that with View, the endpoint's processing is handled by the server, so you'll want as much processor power as possible on the server. I would also make sure you have at least 100Mbps (full duplex) from the server all the way to the user's desk.

Is this all for a POC, or are you looking to jump right into production (not recommended the first time you do this)? Make sure you have support for the VMware products so that you can get official help; otherwise, you could make incorrect settings and things could fail horribly. How many endpoints would the final count be if this went into full production? Keep in mind that hosting ten individual client system images stacks up overhead in more ways than just the servers: you'll need to continue to patch them, administer them, etc. Part of the beauty of using something like View is that you can reduce the actual system images to just a few (a small percentage of the total system count). You can make changes to those few images, and they're in place for all the users that pull down that image.

Review all the documentation found here: VMware View Documentation (for version 4 obviously)...

Oracle11g
Contributor

Thanks again golddiggie.

You mentioned the vCenter and View servers. Are these servers going to be on the ESXi server on which the VMs are hosted? I'm planning to do a simple POC with 10 users all referring to one Windows XP image. As resources are limited, I have to go with RAID 5, but as it will definitely grow along the way, I'm planning to start with 4 HDDs instead. Since you recommended against that for performance reasons, could you kindly elaborate?

The server I'm using is on the VMware HCL and is able to run a range of ESXi versions. Since you recommend a dual quad-core for 10 users for now, does that mean that if I plan to scale up to 20 users, it would be advisable to get a server that can support at least 4 CPUs?

a_p_
Leadership

Since this is for a POC, I don't think you have to use high-end hardware, unless you have to buy it anyway and plan to use it for production later on.

The only thing I'm badly missing in your configuration is a battery-backed write cache for your RAID controller; this makes a huge difference in disk performance.

"You mentioned the vCenter and View servers. Are these servers going to be on the ESXi server on which the VMs are hosted?"

No, ESXi is the hypervisor you install directly on your host; vCenter Server and View Manager are Windows components. You have to install these on Windows machines/VMs. To be precise, vCenter Server is installed on one VM and View Manager on another.

André

rpcblast
Contributor

Just to clarify, vCenter and View are not truly "required" components. vCenter is a management component; it gives you additional features, such as templating, vMotion, etc. (depending on your edition). View is the VDI manager/broker, which streamlines things for a VDI deployment. It includes things like desktop pool management, connection brokering (via the web and an internal server), etc. Composer is a View add-on that gives you linked-clone capability.
