VMware Cloud Community
bandrews
Contributor

ESX4 inexpensive hardware, CPU and MOBO suggestions

I'm putting together an estimate of the hardware costs for a vSphere ESX4 project. Imagine a Google-like rack with just motherboards and power supplies in trays, probably booting off USB sticks or PXE into ESX4, all connected to a NAS for shared storage over dual gigabit NICs with jumbo frames.

Q1) What is the least expensive brand-name motherboard and Intel processor that would give us all the HA and FT features vSphere offers? So far I'm looking at an ASUS P5BV-C motherboard and an Intel Xeon E3110 Wolfdale 3.0GHz processor with 4GB of RAM, which could run under $400 per node, but I'd prefer to find a dual-processor motherboard. Any better suggestions? I'm using this chart to help pick the processor: http://www.gabesvirtualworld.com/?p=456.

Q2) If I'm not mistaken, EVC only works with Merom, Penryn, and newer Nehalem Xeon processors. Even without EVC, I should be able to move VMs between nodes without downtime, right? And as long as the processor is FT-compatible and HA is enabled and working, I should be able to do live VM migrations, correct? I believe EVC simply enables you to migrate VMs between hosts with different CPU models, right?

So if two hosts have different Xeon processors, say a Nehalem-based Xeon (Core i7 generation) and a Merom-based Core 2 Xeon, would I be able to run a VM in FT mode across both of those hosts, but only with EVC, correct?

TomHowarth
Leadership

Moving this post to the community supported hardware forum

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

Tom Howarth VCP / VCAP / vExpert

VMware Communities User Moderator

Blog: http://www.planetvm.net

Contributing author on "VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment" (http://my.safaribooksonline.com/9780136083214), currently available on Rough Cuts

Contributing author on "VCP VMware Certified Professional on vSphere 4 Study Guide: Exam VCP-410"
bandrews
Contributor

So far I have two builds...

1) Intel Xeon dual-core E3110 Wolfdale 3.0GHz on an ASUS P5BV-C motherboard with 4GB of RAM for about $392.96 each. That's $65.49 per GHz.

The 3110 is FT compatible and can run in lockstep with Xeon 3100s, 3300s, 5200s, 5400s, and 7400s, but not 5500s, according to .

2) AMD Opteron 1354 quad-core 2.2GHz (AMD Opteron Generation 3) on an ASUS M2N-L with 8GB of RAM for about $383.95 each, or about $43.63 per GHz and twice as much RAM.
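In case anyone wants to double-check the $/GHz math, here's a quick sketch in Python using just the numbers from the two builds above; it multiplies cores by clock speed and divides the per-node price by the result.

# Rough $/GHz comparison of the two candidate builds above.
builds = {
    "Intel Xeon E3110 (2 cores @ 3.0GHz, 4GB RAM)": (392.96, 2, 3.0),
    "AMD Opteron 1354 (4 cores @ 2.2GHz, 8GB RAM)": (383.95, 4, 2.2),
}

for name, (price, cores, ghz) in builds.items():
    total_ghz = cores * ghz
    print(f"{name}: {total_ghz:.1f} GHz total, ${price / total_ghz:.2f} per GHz")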

The 1354 is FT compatible and can run in lockstep with AMD 1300s, 2300s, and 8300s.

Neither is EVC compatible, but since we're building the cluster for vSphere from the ground up, we have the advantage of buying the same processor make and model for everything, so vMotion, FT, and HA shouldn't be a problem unless I'm missing something, correct?
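And just to keep myself honest about the lockstep groupings above, here's a hypothetical little checker; the group lists only restate what I've quoted in this post, not an official VMware compatibility matrix.

# Hypothetical FT lockstep sanity check. The groups below only restate the
# CPU series quoted in this post; they are not an official VMware matrix.
FT_LOCKSTEP_GROUPS = [
    {"Xeon 3100", "Xeon 3300", "Xeon 5200", "Xeon 5400", "Xeon 7400"},  # Intel group (the E3110 is a 3100-series part)
    {"Opteron 1300", "Opteron 2300", "Opteron 8300"},                    # AMD Generation 3 group
]

def ft_compatible(series_a, series_b):
    """True if both CPU series fall inside the same quoted lockstep group."""
    return any(series_a in group and series_b in group for group in FT_LOCKSTEP_GROUPS)

print(ft_compatible("Xeon 3100", "Xeon 5400"))      # True per the list above
print(ft_compatible("Xeon 3100", "Opteron 1300"))   # False: different groups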

colindunn
Contributor

I'm interested in this too, for a home lab running ESXi 4.

If you want vCenter with HA, DRS, and vMotion, you'll need shared storage (NFS or iSCSI) and at least 3 NIC ports per system (one management/storage, one VM network, one vMotion). For a test environment you may be able to combine management and VM network on the same vSwitch and get away with two ports; for a Google-like rack full of trays, I'd want at least 3-4 NIC ports per system.
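To make the port-count reasoning concrete, here's a rough sketch of one way I'd lay out the NIC roles per host; the vmnic names and role labels are just placeholders, not anything ESX dictates.

# Illustrative per-host NIC layout; names and roles are placeholders.
nic_plan = {
    "vmnic0": "Management + IP storage (NFS/iSCSI)",
    "vmnic1": "VM network",
    "vmnic2": "vMotion",
    "vmnic3": "Spare / second VM network uplink (optional fourth port)",
}

for nic, role in nic_plan.items():
    print(f"{nic}: {role}")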

I'd like to build up a 2-node cluster, each with two quad-core CPUs and 8-16GB of RAM. Put the hosts in cases, and provide shared storage (SATA RAID on another box). Connect it all through a managed gigabit switch.

Alternatively, I might build up a single host and run the storage locally. I want the VM environment so I can run a home test lab with various x86/x64 OSs and test environments for various MS products (Exchange 2007, SQL 2008, etc.).

Haven't determined all the hardware I want yet.

bandrews
Contributor

N82E16819105212 - AMD Opteron 1354 Budapest 2.2GHz, 4 x 512KB L2 cache (2MB total) - $74.99

N82E16813131256 - ASUS M2N-L AM2, NVIDIA nForce 570 SLI, ATX - $164.99

N82E16820227291 - OCZ Platinum Edition 8GB (4 x 2GB) 240-pin DDR2 800 SDRAM - $99.99

N82E16817165023 - Linkworld LPJ2-23-P4 430W ATX12V power supply - $12.99

N82E16835233022 - XIGMATEK EP-CD901 92mm sleeve CPU cooler - $13.99

Total: $366.95 (+S&H)

That's 8.8GHz across 4 cores and 8GB of RAM. I didn't go with ECC memory or a name-brand power supply because the idea is to combine a large number of inexpensive nodes and use HA/FT to protect VMs from hardware failures. In fact, I can probably use one power supply to power two nodes since there is no CD, HDD, or even case fans, but I'd have to run a burn-in test on a single motherboard to see how many watts it pulls. The code at the start of each line is a newegg.com part number, by the way.
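For anyone checking the math, here's a quick sketch of the totals for that parts list, plus the PSU-sharing question; the per-node wattage is a made-up placeholder until I actually do the burn-in test.

# Totals for the Newegg parts list above, plus a rough PSU-sharing check.
parts = {
    "AMD Opteron 1354 Budapest 2.2GHz": 74.99,
    "ASUS M2N-L AM2 motherboard":       164.99,
    "OCZ 8GB (4 x 2GB) DDR2 800":       99.99,
    "Linkworld 430W ATX12V PSU":        12.99,
    "XIGMATEK 92mm CPU cooler":         13.99,
}

total = sum(parts.values())
cores, ghz_per_core = 4, 2.2
print(f"Node cost: ${total:.2f} ({cores * ghz_per_core:.1f} GHz, "
      f"${total / (cores * ghz_per_core):.2f} per GHz)")

# Placeholder draw; replace with a measured number from the burn-in test.
assumed_draw_w = 200
psu_rating_w = 430
print(f"Two nodes at ~{assumed_draw_w}W each = {2 * assumed_draw_w}W "
      f"against a {psu_rating_w}W PSU rating")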

I think this will give me an inexpensive ESX4 host that is cheap enough to just quickly replace when hardware fails. I'm looking at ESX4 host licensing costs now (with vMotion and HA/FT support): does it really cost $2,000 for vSphere Advanced for a single CPU? That's over 4x the cost of the hardware for a single host in the cluster! Am I reading that correctly?
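To put my own "over 4x" claim in numbers, a quick sketch using the $2,000 list price I'm seeing and the parts total above:

# License cost vs node hardware cost, using the figures from this post.
license_per_cpu = 2000.00   # vSphere Advanced list price per CPU (as I'm reading it)
node_hardware   = 366.95    # Newegg parts total from the list above

print(f"License is {license_per_cpu / node_hardware:.1f}x the hardware cost "
      f"of a single-CPU node")   # roughly 5.4x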

glynnd1
Expert

I'm looking at ESX4 host licensing costs now (with vMotion and HA/FT support): does it really cost $2,000 for vSphere Advanced for a single CPU? That's over 4x the cost of the hardware for a single host in the cluster! Am I reading that correctly?

Yup, vSphere Advanced with 1 year of Platinum support is $2,806 per CPU.

My old company has twelve dual-CPU servers plus vCenter, which runs close to $50k total at list prices. And while it did take a lot of convincing that this was a good way to go, it has been of great value. We did start with only four servers, but grew. You also haven't made any mention of the shared storage you plan on using.

bandrews
Contributor

We already have a working NAS with both NFS and iSCSI, about 5TB of redundant storage.

RayJK
Contributor

Be aware that, as far as I can tell from AMD and Wikipedia sources, all quad-core Opterons are Revision 'B', and if you want to run 64-bit guests you will need Rev 'D' or later. See VMware KB article 1901.

bandrews
Contributor

I can't confirm that ALL AMD quad cores are Rev B. I can confirm that the AMD Opteron I have listed above, as well as the 82xx, 22xx, and 12xx series, are all Rev F. And as long as the AMD processor supports AMD-V, it should support live migrations and full FT, but I'd like to confirm that. I'd like to find an inexpensive processor to use with vSphere, but it has to support 64-bit, FT, and everything else. I haven't used anything but Intel processors for many years, so I'm worried about picking the wrong one, but from what I've seen so far I'm still OK with the 13xx series.

Source: (Page16)

colindunn
Contributor

VMware's KB article (1901) says that when the Opteron processor line moved to the 90nm process, the 64-bit guest compatibility issue (really one of memory protection in 64-bit mode) was resolved.

Anything created later on even smaller processes (65nm or 45nm) would also be a new enough revision to support 64-bit guests.

I'm strongly considering getting this Opteron/mobo combo, but I have concerns about the ASUS motherboard. There are reports of these boards shipping with a BIOS that is incompatible with the Opteron 1354. Those boards have to be upgraded by putting in a different processor, doing a BIOS flash, and then installing the Opteron 1354. I'd rather get something that doesn't require keeping an extra CPU around.

It also appears that Phenom II X4 processor/motherboard combos have compatibility issues, particularly with SATA and networking. Does anyone have suggestions for a Socket AM3 solution that works with ESXi 4.0 and its built-in SATA and NIC drivers (if such a beast exists)?

RayJK
Contributor

Thanks to colindunn for pointing out the bit about 90nm processes (and later) in the note at the end of VMware's KB article (1901). I'd missed that.

This means that any currently available Opterons should be OK for 64-bit guests.

There are some good offers out there at the moment on quad Opteron systems from HP, Dell and others.

Maybe the article needs revising to move the reference to a more prominent position. VMware are doing AMD (and us) no favours by 'hiding' this important information where it is at present.

We appear to have got to the stage with VMware that Windows was at a while back. By this I mean that there is a lack of drivers which would enable the use of a wider variety of hardware. With the advent of widely available, less expensive hardware there is now a much bigger potential market for virtualisation when the drivers become available. The current drivers cater for the hardware that satisfies the needs of those who have big setups with high throughput. More modest needs can be met with more modest and less expensive hardware.

colindunn
Contributor

Another link of interest to everyone following this thread:

With ESX / ESXi 4, the really cheap Dell servers (T100/105, T410) become supported for VMware! These are OK for workstation / test lab applications. I wouldn't run anything in production that didn't have hardware RAID, hot-swap disks, and redundant power supplies.

Still playing with configurations on their site to see what gives the most bang for the buck. The T105 is a single-socket server that can take a quad-core Opteron and up to 8GB RAM.

bandrews
Contributor

I looked it up, and the T105 with a quad-core AMD Opteron 1354 2.2GHz and 8GB of RAM adds up to $726 (+S&H), and that's with only 1 NIC.

colindunn
Contributor

I ran a T105 configuration this morning and it came to $558. I went through Dell's Small Business site, used the built-in SATA for the storage controller, the default 250GB SATA disks, and the default 1-year warranty.

Larger disks, a SATA RAID controller, or a 3-year warranty will add about $200 each. The cost of the server could double just by adding these things. (I wonder if I could order a 1-drive configuration and just add more storage later, since Dell's SATA disk prices are high.)

Dell charges a lot for the NIC ($200 for a dual-port Intel); I think there are likely less expensive sources of Intel NICs. For a home lab, you probably won't be deploying more than 3 NIC ports. For a production cluster, I'd put in 1-2 Intel quad-port NICs.

Still trying to find a way to get a 16GB RAM configuration without breaking the bank. The problem with Dell is that you then have to jump up to the Opteron 23xx series or go to the Intel Nehalem platform (which commands a hefty price right now) to get a server that accepts that much RAM. Nehalem is also strange in that memory has to be added in 3-DIMM sets (it uses a triple-channel memory controller), so the common configurations are 6, 12, and 24GB. Getting 12 or 24GB requires two processors (8 cores), another big price jump.

I think 4 cores should have 12-16GB of RAM, and 8 cores should have 24-32GB. For my home lab I'm trying to get 4 or 8 cores, and 16GB of RAM (with room to expand to 24 or 32). I want a lot of VMs.
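Put as a tiny helper, my rule of thumb above looks like this; the 3-4GB-per-core range is just my own guideline, not a VMware requirement.

# Rule of thumb from this post: roughly 3-4GB of RAM per core.
def ram_target_gb(cores):
    """Return a (low, high) RAM range in GB for a given core count."""
    return 3 * cores, 4 * cores

for cores in (4, 8):
    low, high = ram_target_gb(cores)
    print(f"{cores} cores -> {low}-{high}GB RAM")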

glynnd1
Expert

But do you really need the keyboard, mouse, and 17" flat panel monitor? You can also save another $40 by dropping the CPU 100MHz. This drops it to $499 plus S&H.

It is possible that by ordering over the phone you could get the second HDD dropped and the first disk shrunk to 160GB, which should shave off a few more dollars.

This is starting with the t105-bqdwv1k on the SMB site - the one that is currently labeled a Dell Deal. Trying the same config but starting with the Basic Config results in a higher price, go figure.

And as Colin pointed out, where you start from may also result in a different price; so much for simplicity. :)

All in all, $499 for a quad core with 8GB of RAM ain't bad. I was also going to mention the HCL listing, but that Dell doc appears to disagree with VMware's published HCL.

colindunn
Contributor

Given that Dell's server boards don't have overclocking features, I think it's worth the $40 extra to get a 2.3GHz instead of a 2.2GHz processor. It's a big jump ($270?) to get up to 2.4GHz.

My build was the same as yours ("Dell Deal") except I did include a keyboard and mouse (but no monitor), which added $19 to the price. In practice, I could probably omit all three and run it as a headless server once it's built. So then it's $499 for a 2.2GHz system, $539 for a 2.3GHz system.

Later I'll see if I can come up with another model / config that would allow for 12-16GB RAM. Wish the T105 board wasn't limited to 8GB of RAM...

glynnd1
Expert

I would suggest keeping an eye on the Dell Outlet site as well; you might find something suitable there. Granted, it's not going to be in the $500 price range, but you might find a 1900 for a decent price.

colindunn
Contributor

Found a T300 build that gives 4 cores and 16GB of RAM for $1,175. The next step up, from 2.5GHz to 2.83GHz, adds $150 to the price. This one has a 3-year warranty.

PowerEdge T300: Quad Core Intel Xeon X3323, 2.5GHz, 2 x 3MB cache, 1333MHz FSB (T3Q25)

Memory: 16GB DDR2, 667MHz, 4 x 4GB dual-ranked DIMMs (16G4D6D)

Operating system: none (NOOS)

Chassis configuration: chassis with cabled hard drives and non-redundant power supply (NHPNR)

Hard drive configuration: onboard SATA, 1-4 drives on the onboard SATA controller, no RAID (OBSATA)

Hard drives: 2 x 250GB 7.2k RPM SATA 3Gbps 3.5" cabled (250A7K)

CD/DVD drive: 16x DVD-ROM, internal, SATA (16DVD); no floppy drive (NFD)

Power cord: NEMA 5-15P to C13 wall plug, 10 feet (WP10FS)

Network adapter: onboard dual gigabit (OBNIC)

Keyboard, mouse, display: none (NONE)

System documentation: electronic documentation and OpenManage CD kit (EDOC)

Hardware support: 3-year basic hardware warranty repair, 5x10 HW-only, 5x10 NBD onsite (U3OS)

Installation: none (NOINSTL)

bandrews
Contributor

Another great place for vSphere-certified pre-built servers is . They can't get close to the price of just putting a mobo/CPU/RAM on a shelf in a rack, but they're pretty inexpensive compared to some equally geared Dell PE servers, and they can also come with ESXi 4 pre-installed. Supermicro makes excellent motherboards and cases.

I guess I'm getting a little off track from the whole inexpensive part of the thread, though. So far we can't beat about $45 per GHz with 8GB of RAM for a fully compatible 64-bit vSphere cluster host.

VMNEWBEE2009
Contributor

Does that mean only the Opteron series of AMD processors is supported? I got a good deal on an HP with an AMD Phenom quad core and I'm not sure if it's supported by VMware.
