Datto
Expert

Datto's Inexpensive VMware FT Capable Lab Hardware

For those of you interested in creating a low-cost

(relatively speaking) lab for vSphere 4.0 that will also do VMware FT, here's

what I've used for my three white box VMware FT setup.

White Box 1:

- CPU: AMD AM2 Opteron 1354 (quad-core, single-socket to save money; $85.00 each delivered from Newegg)

- Motherboard: Asus M2N-LR (about $100 delivered Open Box from Newegg, much less on eBay when available). You'll need the latest M2N-LR BIOS installed, but you may not be able to flash the board using the Opteron 1354 since the board may require a lesser processor to flash the BIOS -- I had a low-end AM2 3600 CPU sitting around, used that for flashing, then put in the Opteron 1354.

- NICs: two PCI-X dual-port Intel gigabit cards ($16.50 delivered per dual-port card on eBay). These add-in NICs plus the two onboard gigabit NICs, which work properly with ESX 4.0, give me six physical gigabit NICs in the white box.

- Memory: 8GB of non-ECC memory (4x 2GB GSkill PC2-6400 DDR2 800). The board will take ECC memory if that's your preference, but I had this GSkill memory already.

- Boot drive: 20GB Maxtor IDE drive ($15.00 each) for ESX 4.0.

- CPU fan/heatsink: Zerotherm NV120 ($50.00 from Newegg, or about 60% of that used on eBay when available).

I don't put a permanent CD/DVD drive into any ESX host in my lab; I just connect a CD/DVD drive temporarily when I need it. I also never use a permanent floppy drive and just plug in a USB floppy drive if I need it for, say, BIOS flashing.
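If you want to double-check which of those six NICs ESX 4.0 actually claimed, the service console will list them. These are standard ESX 4.0 commands; the vmnic names and drivers in the output will vary by box:

    # List the physical NICs the VMkernel has claimed (driver, link state, speed)
    esxcfg-nics -l

    # List the vSwitches and which vmnic uplinks are attached to each
    esxcfg-vswitch -l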

White Box 2 -- identical to White Box 1 above

White Box 3:

- CPU: AMD AM2 Opteron 1356 (also a quad-core, single-socket CPU to save money, and it's within the 400MHz CPU speed difference that VMware FT allows relative to White Boxes 1 and 2 above)

- Motherboard: Asus M2N-L

- NICs: a combination of PCI-e dual-port Intel gigabit NICs in the two PCI-e 16X slots and straight PCI Intel 1000MT gigabit NICs, to get to a total of six physical gigabit NICs in the white box

- Memory: 8GB of ECC memory (this CPU, motherboard, and memory came as a bundle I'd bought on eBay)

- CPU heatsink: also a Zerotherm NV120

- Boot drive: also a Maxtor 20GB IDE drive

Note I couldn't get the two onboard Marvell gigabit NICs on the M2N-L motherboard to work with ESX 4.0, and I haven't had time to chase down a driver shoehorn process for the Marvell NICs, so I just disabled them in the system BIOS for now and use the other PCI-e and PCI gigabit NICs for my FT purposes.

Regular VMs (non-COS VMs) sit on an Openfiler 2.3 box providing shared iSCSI storage for the cluster of white boxes. One gigabit NIC on each box is dedicated to FT logging; the max number of FT Primary and Secondary VMs on any single box is likely three to six before FT logging might get swamped. So far I'm running a total of five FT-protected light-duty VMs in the cluster, they seem to have no problems, and FT does correctly fail over to the Secondary if the Primary FT-protected VM fails.

I'm also running non-FT VMs in the cluster, and there are two other ESX 4.0 boxes in the same cluster that are not FT capable. They have AM2 Kuma processors, which are VMotionable with the Opteron 1354/1356 CPUs, but ESX 4.0 needs a keystroke at bootup on those Kuma boxes to keep the boot from stopping -- something to do with the Kuma / M2N-E motherboard combination used in those non-FT-capable white boxes and ESX 4.0. If I put a standard AM2 Brisbane CPU in those M2N-E boxes they boot ESX 4.0 normally (but won't VMotion straight away with the Opteron 1354/1356 CPUs), so the problem is with using Kuma processors and ESX 4.0. I don't down ESX boxes much, so this isn't a problem for me at the moment.
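For anyone wiring up the dedicated FT logging NIC, here's roughly what it looks like from the ESX 4.0 service console. The vSwitch name, vmnic number, port group name, and IP below are made-up examples for illustration, and the "Fault Tolerance Logging" checkbox itself still gets ticked on that VMkernel port in the vSphere Client afterwards:

    # Create a vSwitch and uplink the NIC being dedicated to FT logging
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic4 vSwitch2

    # Add a port group and a VMkernel interface for the FT logging traffic
    esxcfg-vswitch -A FTLogging vSwitch2
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 FTLogging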

Note that the M2N-LR boxes will not go into Standby Mode for unknown reasons (regardless of which NIC is the Standby-capable VMkernel NIC), but the M2N-L box will sometimes go into Standby Mode, so I suspect it's a BIOS issue with these Asus motherboards. The Asus M2N-E / Kuma motherboard/CPU combination has no problem going into Standby Mode, but with the Kuma processors in the M2N-E motherboards the boot-back-up process needs a keystroke, so that's a problem.

Also note the huge Zerotherm NV120 heatsinks cover the closest (first) slot on every motherboard I use. Some people have just trimmed the fins off the NV120 to make room as necessary.

vCenter runs as a VM on another cluster.

This is not a supported configuration, so don't go using it in your company's production setup. Hope this helps folks wanting to build a relatively economical VMware FT lab setup.

Datto

Datto
Expert

On a lark I decided to rig up the AM2 Kuma 7750 / Asus M2N-E white boxes in my personal lab to see if they'd also do VMware FT since those Kuma CPUs do correctly VMotion with the Opteron 1354/1356 CPUs that are VMware FT capable.

The AM2 Kuma 7750 CPUs will do VMware FT. Ha. A surprise -- the Kumas must have the necessary VMware FT CPU instruction set built in, like the Opteron 1354/1356 CPUs do.

Those two Kuma 7750 CPUs came free with a special hard drive deal from Newegg. As I remember, they're $60 at Newegg when Newegg has them on sale.

So that gives me two more VMware FT capable white boxes in the personal lab to complement the existing three VMware FT capable Opteron 1354/1356 boxes (five boxes total in the ESX personal lab cluster). The lab boxes at the office are all IBM x3650 and x3650 M2, and FT capable per the specs, but I have them all tightly scheduled right now, so there's no time available to fool around with them (production ESX boxes are also all IBM x3650 and x3650 M2).

What tipped me off to the possibility that the Kuma 7750 CPUs might work with VMware FT was when VC wanted to move the Secondary VM of a previously set up FT VM to the Kuma 7750 boxes when I wanted to down one of the Opteron 1354 boxes. At the time I thought that was strange, thinking the Kuma 7750 CPUs shouldn't be capable of that. After thinking about it for a while, I decided to just set up VMware FT fully on the Kuma 7750 boxes and voilà, it worked.

Just thought I'd pass it along in case anyone else is looking to build some white box VMware FT capable boxes.

Datto

Datto
Expert

If you want to use your lab to create virtual ESX 4.0 hosts and run VMs inside those virtual ESX 4.0 hosts, here are articles explaining how to do this by running ESX 4.0 under VMware Workstation or running ESX 4.0 under ESX 4.0. My experience with running a virtualized ESX host under VMware Workstation is that yes, it will work, but it's dog slow and I'm not that patient. For those with little to no money, though, it might be worth looking into if you can meet the minimum hardware requirements for the box that will run your virtualized ESX 4.0 hosts (see the .vmx sketch after these links):

http://xtravirt.com/xd10089

http://www.hypervizor.com/2009/07/vsphere-in-a-box-a-virtual-private-cloud-blueprint/

http://www.vcritical.com/2009/05/vmware-esx-4-can-even-virtualize-itself/

http://technodrone.blogspot.com/2009/06/esx-40-running-vsphere-lab-part-1.html

http://technodrone.blogspot.com/2009/06/esx-40-running-vsphere-lab-part-2.html

http://communities.vmware.com/docs/DOC-8970/

http://www.vladan.fr/vsphere-4-in-vmware-workstation/
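For what it's worth, the .vmx tweaks those articles describe for running ESX 4.0 under VMware Workstation boil down to a few lines like the ones below. Treat this as a sketch from memory, not gospel -- check the articles for the exact settings for your Workstation version. The guest OS type is a generic 64-bit one (Workstation 6.5.x has no ESX guest type), the restrict_backdoor setting is what lets the nested hypervisor run its own VMs, and the e1000 virtual NIC is there because ESX 4.0 has no driver for Workstation's default virtual NIC:

    guestOS = "other-64"
    monitor_control.restrict_backdoor = "TRUE"
    ethernet0.virtualDev = "e1000"

You'll also want to give the virtual ESX host at least 2GB of RAM to satisfy the ESX 4.0 installer's minimum.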

Datto

V1RTU4L
Contributor

Hi Datto,

thanks for that guide.

Is there a reason you use OpenFiler instead of, for example, FreeNAS? I don't really know OpenFiler. Do you know if it has better performance than FreeNAS?

thx in advance

Datto
Expert

OpenFiler is just what I happened to have taken the time to learn. OpenFiler has worked very well for me over the past couple of years, and since I've set up so many OpenFiler servers, the setup is now quick for me.
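If it helps anyone pointing ESX 4.0 at an Openfiler target, the software iSCSI side is only a few service console commands (the same steps can be done in the vSphere Client under Storage Adapters). The target IP and vmhba number below are made-up examples, not a recipe:

    # Enable the software iSCSI initiator
    esxcfg-swiscsi -e

    # Add the Openfiler box as a dynamic discovery (SendTargets) address
    vmkiscsi-tool -D -a 192.168.20.5 vmhba33

    # Rescan the software iSCSI adapter to pick up the new LUNs
    esxcfg-rescan vmhba33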

Datto

Symbion_Tech
Contributor

I set up two Openfiler boxes recently with DRBD and Heartbeat to create a highly available SAN. I've done it many times now and there's an excellent guide out there -- I think it's the same article I used, from a site which appears to be down just now.
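In case that guide stays down, the heart of the DRBD side is a single resource definition replicating the data volume between the two filers, with Heartbeat then failing the Openfiler services and a virtual IP over between the nodes. This is a minimal sketch with made-up hostnames, disks, and addresses:

    # /etc/drbd.conf (DRBD 8.x style, trimmed to the essentials)
    resource r0 {
        protocol C;                  # synchronous replication
        on filer1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;     # backing partition for the iSCSI data
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on filer2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }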

I built my whitebox last year with the following spec:

ESX 3.5 U2 running on:

ASUS P5WDG2-WS Pro (2 x PCI-X)

Q6600

8GB OCZ ReaperX DDR2 1066 (running at 800 as overclocking to 1066 was not stable)

Intel MT 1000 Dual Port (in PCI-X slot)

Adaptec 2130SLP PCI-X 128MB RAID Controller

2 x 15k 36GB in RAID 1

2 x 15k 146GB in RAID 1

HP rm435 DVD-ROM

Now I'm looking to put together two boxes for 4.0 and one for Openfiler (two would be a waste just for my home lab), the same as yourself.

I was going to go with a P45-based board and get 16GB per box (OCZ PC6400 4x4GB for each one), with Q9550s for the processors. I looked up vm-help and there are PCIe dual- and quad-port NICs that work with 4.0. At least six ports per box (preferably eight). The onboard NICs are no use on all the 4.0-compatible P45 boards I've looked at. Your setup is excellent and far less expensive, but I think it may be worth the extra cost to get the 16GB.

For my Openfiler box I'll use my existing whitebox and will replace the SCSI controller and drives with 8 x 1TB SATA II drives (not decided yet on which ones) and a PCI-E RAID controller.

What do you think?

Datto
Expert

I was able to get an AMD Phenom X4 CPU in an Asus M2N-LR (BIOS 515) motherboard to correctly VMotion and utilize VMware FT with the other CPUs/servers in the same cluster in my home lab (the cluster has AMD Athlon X2 Kuma 7750 CPUs and AMD Opteron 1354/1356 CPUs -- see above for details).

Datto

32bit
Contributor

I've got the M2N-LR running with the latest BIOS 0515 and a Phenom 9650 (the 4x 2.3GHz one), but ESXi doesn't like my computer (have a look at the attachment).

Does someone have a solution for my problem?

Gabrie1
Commander

Just wanted to add my whitebox that supports FT:

http://www.gabesvirtualworld.com/?p=531

This is what I bought:

- Asus Barebone V3-P5G45 iG45, SATA2 RAID, HDMI (zilver)

- Intel Core 2 Quad Q9400 2.66GHz FSB1333 Box

- Intel PCI-e Adapter Pro1000PT Dual Port LAN 1000Mbit Bulk

- 2x OCZ 2×2GB DDR2 SDRAM PC6400 CL5.0 Platinum (Total 8GB)

- Western Digital 640GB SATA300 16MB, WD6400AACS

(Warning: the onboard NIC will not work in ESX, so I added the dual-port NIC as an extra.)

http://www.GabesVirtualWorld.com
msalmon1
Contributor
Contributor

From where did you buy the "Asus Barebone V3-P5G45 iG45, SATA2 RAID, HDMI (zilver)"? I am not able to find it online in the States.

Gabrie1
Commander

From http://www.4launch.nl

http://www.GabesVirtualWorld.com
Datto
Expert

32bit -- I haven't installed ESXi 4.0 on my M2N-LR systems, but -- wild guess -- if the version you've installed is later than the GA version of ESXi 4.0, you might try installing from the original GA version of ESXi 4.0 -- the one released on May 21st -- and see if that allows you to boot. Also, you might try installing regular Classic ESX 4.0 with the Service Console and see if that goes in okay.

Datto

Datto
Expert

32bit -- one other thing -- on a different motherboard this morning (not the M2N-LR) I had to turn off all power-saving features in the BIOS of that other board to get the install to go in correctly. It seems the power saving on that other board was modifying the speed of the CPU, and that caused the install of Classic ESX 4.0 to bomb. Once the install was in place I re-enabled the power saving in the BIOS and all is working okay.

Datto
