VMware Cloud Community
msalmon1
Contributor

ESX Home Lab

I am getting ready to set up an ESX lab at home. It will be a two-node cluster. I will start out putting ESX 3.5 on the servers and then upgrade to vSphere at some point. I will be going out this weekend to drop about $300 on the specs below (times 2). What is your opinion of the specifications below?

CPU: AMD Athlon X2 Dual-Core Energy Efficient 5050e $65

Motherboard: Asus M3N72-D AMD AM2+ Nvidia Nforce 750a Motherboard $105

HD: 160GB SATA $60

Memory: 4GB $60

NICS: 2 x Dual port PCI or PCIe $60

DVD Rom: $40

ATX Case: $30

Storage: Openfiler

Datto
Expert

If you want to do VMware FT in your home lab, make sure those 5050e CPUs are actually supported for VMware FT. Here's my setup with Athlon Kuma 7750 CPUs, Phenom 9650 CPUs and Opteron 1354/1356 CPUs:

http://communities.vmware.com/thread/218140

Also, you're probably going to want to make sure those dual NIC cards have WOL capability on at least one of the ports (so you can put the machine into Standby Mode) and that both ports provide Gigabit capability. Stay with Intel dual NIC cards, not someone else's brand. Stay away from Realtek NICs for any ESX hosts.

You should verify the chipset of that motherboard will work with both ESX 3.5 and ESX 4.0. Search the Internet for ESX white box and you'll find the listing of what "likely" works for a white box and what doesn't work and the caveats.

Also note that more memory will give you more capability, so the maximum memory your motherboard supports will likely be more important than CPU horsepower. If you can find a motherboard that can handle 16GB of memory using 4GB DIMMs in the future, you may help yourself with better longevity for your boxes. Otherwise, I'd stay with at least an 8GB memory capability.
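
If it helps, once a box is built you can quickly confirm what ESX actually sees (CPU, memory, build) from PowerCLI -- a rough sketch, assuming you have PowerCLI installed and a host answering to a placeholder name like esx01.lab.local:

    # Connect to one host (or to vCenter) -- adjust the name/credentials to your lab
    Connect-VIServer -Server esx01.lab.local
    # Show what ESX detected: model, CPU count, total memory and version
    Get-VMHost | Select-Object Name, Model, ProcessorType, NumCpu, MemoryTotalMB, Version

If MemoryTotalMB comes back noticeably lower than what you installed, the motherboard or BIOS is the first thing to suspect.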

If I were you, I'd put the OpenFiler on a separate box, not in a VM, so that you can get some performance out of OpenFiler. Note you can use the cheap Realtek 8169 NICs and bond them together on the OpenFiler box (note that Realtek won't work for your ESX boxes -- you should stay with Intel gigabit NICs on your ESX hosts).
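
If you do go the separate OpenFiler box route, pointing the ESX hosts at it is mostly point-and-click in the VI Client, but here's a rough PowerCLI sketch of the same steps (enable the software iSCSI initiator, add the OpenFiler box as a send target, rescan). The host name and the 192.168.1.50 address are just placeholders for your own lab, and the iSCSI cmdlets depend on your PowerCLI version:

    Connect-VIServer -Server esx01.lab.local
    $vmhost = Get-VMHost -Name esx01.lab.local
    # Turn on the software iSCSI initiator on the host
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
    # Add the OpenFiler box as a dynamic (send) target on the software iSCSI HBA
    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.50" -Type Send
    # Rescan so the OpenFiler LUNs and any VMFS volumes show up
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs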

Datto

Datto
Expert

Here is a white-box ESX listing of what supposedly works with which version of ESX, along with the caveats:

http://ultimatewhitebox.com/motherboard

Here are some suggested links from others in this forum message: http://communities.vmware.com/thread/98225

Hardware recommendations to build a cheap ESX server -

White box/Home ESX system -

ESX on non-supported hardware to learn with -

Community supported hardware/software for VMware Infrastructure -

msalmon1
Contributor

I am willing to spend up to $450 on each node. Since your whitebox 1 requires a different CPU before you can upgrade the BIOS, can't I just go with two of your whitebox 3 configurations? It seems those servers will work with 3.5 and 4.0, and all the FT, VMotion and other functionality will be available? Also, what kind of specs am I looking at for the OpenFiler box? Thank you.

Datto
Expert

For the OpenFiler boxes I'm using some low-grade Abit KV7 motherboards with an Athlon 2400 in each with 1.5 GB of memory, bonded cheap Realtek Gigabit 8169 NICs and a bunch of RAIDed SATA or IDE drives (SCSI drives, more memory or a faster new box would be much better of course but for my needs what I have is sufficient).

Two M2N-L motherboards for your ESX vSphere hosts, each with an Opteron 1354 or 1356 CPU, would be fine, and with the latest BIOS for the M2N-L motherboard you should be able to do DRS/HA and FT using shared storage from the OpenFiler box. I can put the M2N-L into standby power-saving mode, but not consistently -- I haven't had time yet to chase down why it works correctly sometimes and other times won't come out of standby mode (note that you may need three boxes in your ESX cluster to be able to put one into Standby Mode -- at least that has been my experience). The NICs in the M2N-L box are standard Intel NICs and the VMkernel port is on a NIC port that supports WOL. Also, I'm successfully using the vSphere 4.0 Dynamic CPU Power Saving Policy with the Enhanced PowerNow capability on the M2N-L motherboard with the Opteron 1356.
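
One quick way to sanity-check whether a given host/CPU combination will even offer FT, before you spend time on the cluster side, is to ask the host's capability flags through PowerCLI -- a small sketch, with the host name being whatever yours is:

    Connect-VIServer -Server esx01.lab.local
    # FtSupported comes from the host's Capability object in the vSphere API
    Get-VMHost | Select-Object Name, @{N='FTSupported';E={($_ | Get-View).Capability.FtSupported}}

That only tells you the host thinks it can do FT; you still need matching CPUs on both hosts and shared storage for it to actually work.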

Note the two Marvell gigabit NICs on the M2N-L won't work with vSphere yet, so you'll need to get some add-in NICs (Intel 1000MT PCI NICs or Intel dual-port 1000PT PCI-e x4 NICs would be fine). Since you have two PCI-e 16x slots (running at 8x when both are engaged, as I remember), you should have enough PCI-e slots to get a good number of NICs into the box -- there are also two PCI-e 1x slots you could use for NICs if you need to.
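
A related quick check after you add the Intel cards is to list which physical NICs the host actually picked up, so you know right away whether the onboard Marvell ports were ignored. A minimal sketch, using the same hypothetical host name as above:

    Connect-VIServer -Server esx01.lab.local
    # Physical NICs the VMkernel detected, with their MAC and link speed
    Get-VMHostNetworkAdapter -VMHost (Get-VMHost esx01.lab.local) -Physical |
        Select-Object Name, Mac, BitRatePerSec, FullDuplex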

I'm using the built-in motherboard video. Note that the first PCI-e 16x slot is very close to the Zerotherm NV120 CPU fan, but it does fit without touching the Intel dual-port NIC installed in that first PCI-e slot. I have very high ambient temps so CPU fans are important -- at normal room temperature you probably wouldn't need a CPU fan as fancy as the Zerotherm.

I've had trouble finding an Opteron 1352, 1354 or 1356 CPU lately -- they might not be made anymore. Also, I haven't seen many M2N-L motherboards for sale lately either. My M2N-L came with the latest BIOS, so I didn't have to flash it like I had to do with the other motherboards.

The other good thing is that the M2N-L motherboard takes a standard ATX power supply with a 24-pin connector. Also, I've put in some standard DDR2 PC2-6400 G.Skill memory and that works fine. The board came with what was described as 8GB of ECC memory, but on closer look I think it was just standard non-ECC DDR2. It worked, so I didn't look closely at it until recently when I started swapping parts for a different box that had other duties.

Datto

msalmon1
Contributor

Datto, thank you for your responses -- you have been a great help. One other question: what about speed? Is there any slowness in your cluster, given that you are not using enterprise-class servers?

Datto
Expert

As more VMs get loaded onto the ESX hosts in my home lab and more VM traffic goes to and from the OpenFiler storage, performance does slow down a bit. The bottleneck is definitely the performance of the OpenFiler storage, but that's all the money I'm willing to put toward a home lab at the moment, so it's good enough. At the office it's all professional-level IBM and EMC equipment, so I don't have any performance problems at the office or in the data center.
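
If you want to see whether your own OpenFiler box is the choke point, the storage latency counters tell the story -- a rough PowerCLI sketch (counter name as in vSphere 4.0's performance stats; the host name is a placeholder):

    Connect-VIServer -Server esx01.lab.local
    # Worst-case storage latency the host has seen recently, in milliseconds
    Get-Stat -Entity (Get-VMHost esx01.lab.local) -Stat "disk.maxTotalLatency.latest" -Realtime -MaxSamples 10 |
        Select-Object Timestamp, Value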

Here's a tip for using OpenFiler on slow hardware -- make smaller LUNs with fewer VMs per OpenFiler LUN and then create more LUNs on the OpenFiler box. That will get you more VMs per OpenFiler box with better performance than piling all the VMs into a single OpenFiler iSCSI LUN. I usually put no more than 4-5 light-duty VMs on a single OpenFiler iSCSI LUN on the slow OpenFiler hardware that I'm using at the moment in my home lab. It's more management, but overall it's a better payoff than trying to push all the VMs through a single LUN or two.
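
If you want to keep an eye on that 4-5 VM per LUN rule without clicking through every datastore, a quick PowerCLI one-liner works -- a sketch, assuming you're connected to vCenter or the hosts:

    # Count how many registered VMs sit on each datastore/LUN, plus free space
    Get-Datastore | Select-Object Name, @{N='VMCount';E={@(Get-VM -Datastore $_).Count}}, FreeSpaceMB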

Datto

Datto
Expert

Also, I have returned to experimenting with nesting ESX 4.0 VM hosts running on a physical ESX 4.0 server, then running VMs on the nested ESX 4.0 VMs. This gives me the capability to create throw-away ESX 4.0 hosts running in virtual machines if I want to look at something interesting (or try out some new PowerCLI scripts) but don't want to endanger the physical ESX hosts in my home lab. It's been quite a bit of work to get that nested environment set up (an embarrassing amount of after-hours time invested in it), but I'm starting to utilize the nested ESX 4.0 VMs more frequently now.
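
For anyone wanting to try the same nesting experiment: on ESX 4.0 the piece that usually trips people up is that VMs running inside a nested ESX host won't power on until you add the monitor_control.restrict_backdoor option to the nested ESX VM's configuration. Here's a hedged PowerCLI/API sketch of adding it -- the VM name is just a placeholder, and this is the community-documented approach rather than anything officially supported:

    $vm = Get-VM -Name "nested-esx01"   # the VM that is itself running ESX 4.0
    # Build a reconfigure spec that appends one extra .vmx option
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $opt  = New-Object VMware.Vim.OptionValue
    $opt.Key   = "monitor_control.restrict_backdoor"
    $opt.Value = "TRUE"
    $spec.ExtraConfig = @($opt)
    # Apply it while the nested ESX VM is powered off
    ($vm | Get-View).ReconfigVM($spec)

You can also just add the same line to the nested host's .vmx by hand; the script route is only handy when you're rebuilding the throw-away hosts a lot.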

Datto

msalmon1
Contributor

I have not been able to find a lot of the systems referenced in various blogs in the USA.

It appears that all the good systems at a reasonable cost are in Europe. Why is that?

Datto
Expert

I guess Europeans (and elsewhere) have to be more miserly with capital expenditures (personal or corporate). North Americans seem to want to purchase pre-built servers (first line or eBay second hand) from the Big 3 so there isn't any effort needed to build the box/experiment with less expensive components -- that approach removes most of the risk that one or more items in a white box might not be compatible with ESX now and in the future (at a somewhat higher cost per server).

Of course, we're talking home lab here, not production equipment where someone's job depends upon it working properly and VMware will come to the rescue if something goes bad.

Datto

msalmon1
Contributor

Well, I am in a bind! I want to build two systems that will work with all of the ESX 3.5/4.0 features, but I can't afford high-end servers. I would prefer to buy the systems in the States, where returns and replacements are better accommodated, but the systems that I can afford are not readily available here. Do you have any other recommendations for a home lab?

Datto
Expert

Here are my Top Ten money-saving suggestions for building an ESX home lab for cheapskates -- note these suggestions are for people who have more time available than money. If you're looking for the most compatibility with the least risk and the least time invested, you should buy a pre-built Dell, IBM or HP server that is listed as compatible with ESX(i) 4.0 on the VMware HCL (Hardware Compatibility List). Don't use any of the ideas listed below for anything remotely close to production.

1) Plan to buy/build three servers for an ESX home lab -- two ESX servers and one low-end server for use with OpenFiler (the OpenFiler box will provide the shared storage where your VMs will be located). Use SATA drives in the OpenFiler server and have OpenFiler provide the Linux RAID for those hard drives. This will give you the capability to do DRS/HA, and if your processor and motherboard purchases can support VMware FT, you'll be set up to do that too. On the OpenFiler volumes, set up OpenFiler as an iSCSI target and locate no more than 4-5 VMs per OpenFiler LUN -- OpenFiler allows you to carve up the available SATA space into LUNs of any size -- just don't put more than 4-5 simultaneously running VMs on any OpenFiler LUN and you'll be able to get more VMs on your OpenFiler box than if you put all the VMs on a single LUN. Also in OpenFiler, use File Mode instead of Block Mode and use Write-Back instead of Write-Thru (see Item 10 below). OpenFiler is free and available at

2) Don't put hard drives into the ESX hosts; instead buy four 1GB USB sticks and boot ESXi from the USB sticks. Use two 1GB USB sticks for ESXi 4.0 and two 1GB sticks for ESXi 3.5 if you're interested in learning both versions -- otherwise you only need two 1GB USB sticks. If you want to learn Classic ESX (with the Service Console), you'll need to put a 20GB IDE or SATA disk into each of your two ESX hosts. Directions for booting ESXi from a USB stick are at

3) Run your vCenter in a VM and set the Memory Shares and CPU Shares for that VM to High (see the PowerCLI sketch after this list).

4) Use only Intel-brand gigabit NICs in your ESX servers (Intel 1000MT PCI gigabit NICs work fine in most boxes and are inexpensive) and use a pair of bonded $5.99 Realtek 8169 PCI NICs in your OpenFiler server. Don't use Realtek NICs in any of your ESX hosts. Marvell motherboard NICs don't seem to work with ESX(i) 4.0.

5) Use AMD processors (to save money) and find processors and motherboards that will work with VMware FT and ESXi 4.0/ESXi 3.5. Use the white-box lists (Google for "White Box" and ESX, or "White Box" and "vSphere") -- see the forum entries above or search the VMware forums for the list of websites that indicate which white-box parts work with which versions of ESX. Don't buy Foxconn motherboards if you're going to use ESXi, because it's not likely they'll be able to boot from a USB key. Note that the amount of system memory available to ESX will likely be more important to you than CPU horsepower, so don't spend money on a powerful, impressive CPU -- a dual-core AMD Athlon 7750 is plenty powerful and has the instructions to do VMware FT (assuming your motherboard can properly handle an Athlon 7750 processor used for ESX). Try to get a motherboard that can provide at least 8GB of system memory, and don't buy any motherboard that maxes out at less than 4GB -- you'd be wasting your time.

6) Consider buying most things from eBay and become knowledgeable about eBay search parameters (use parentheses and minus signs to include items that might not be listed under the obvious name and to eliminate junk that will show up in a search). Be patient in order to find a good deal on the hardware -- it may take several weeks to find lowball deals on eBay. Don't buy anything that is labeled "as-is", "for parts" or "no returns", or that has an excessively high shipping cost (which would indicate there's a likely problem with the part). Don't buy from anyone on eBay who doesn't have at least 50 transactions to their name with an overall score of 98% or higher from those transactions.

7) If you have to buy new parts (such as a cheap gigabit network switch), buy those parts from Newegg or, if you're near a Fry's store, buy the cheap parts there.

8) Don't buy a fancy case -- just get a $19.95 case (on-sale, delivered cost) and permanently take both sides off the case to maximize cooling. Learn to live with the noise from the ESX servers and the OpenFiler computer with their sides off.

9) Don't buy a high-priced, obnoxiously powerful power supply -- you just need a 300-watt power supply for each of your three boxes. If you want to buy a good brand of power supply, stay with Antec.

10) Plan on buying a UPS (Uninterruptible Power Supply) at the very beginning -- this will keep all your hard work at learning virtualization from being trashed if the power goes out or has a momentary blip. Just get a cheap CyberPower 1000 UPS -- you don't need an APC brand or anything fancy -- you just need something that will keep your boxes up for two minutes. If the power goes out so often and for so long that a CyberPower 1000 UPS won't keep them up, then you have a power problem at your location that might not be suitable for a cheapskate administrator.

BONUS

11) If you're relegated to using only a laptop for your home ESX lab, get an Intel VT-capable laptop with at least 4GB of system memory and run your ESX farm under VMware Workstation. The forums here have instructions on how to do that with VMware Workstation 6.5.x and VMware Workstation 7.x. Note this setup will likely be exceedingly slow.
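
Following up on Item 3 above, here's a hedged sketch of setting the vCenter VM's shares to High from PowerCLI rather than through the client -- the VM name "vcenter01" is just a placeholder for whatever you call yours:

    Connect-VIServer -Server esx01.lab.local
    # Raise both CPU and memory shares for the vCenter VM so it stays responsive
    Get-VM -Name "vcenter01" | Get-VMResourceConfiguration |
        Set-VMResourceConfiguration -CpuSharesLevel High -MemSharesLevel High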

Anyone reading this with even more useful cheapskate ideas, just add your comments below.

Datto
