VMware Cloud Community
briguynyc
Contributor

Motherboard for ESXi 4.1 home server?

I'm working on building a home server based on ESXi 4.1, both for consolidation and as a playground for testing applications. For this I'm looking to have SATA RAID, GigE, and video support for at least the initial build and configuration, until the server is online and I can manage it remotely.

With that, is there a preferred motherboard out there that includes an ESXi 4.1-supported LAN chip and RAID chipset as well as onboard video (the i5-700 series CPUs don't include integrated graphics, so I would need an onboard video chip)? If not, is there a mobo that people have used that includes two of these three items? I can obviously pop in a PCI/PCI-X card for any other functionality.

I am looking at an i5-700 series CPU and at least 8GB of RAM. For storage, I will most likely deploy three 1.5TB or 2.0TB drives in a RAID 5 configuration. Any other comments are appreciated.
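(For rough sizing, here is a quick hedged sketch of the usable capacity those layouts would give, assuming identical drives and ignoring VMFS and formatting overhead; the drive counts and sizes are just the ones mentioned above.)

# RAID 5 keeps (N - 1) drives' worth of data; one drive's worth goes to parity.
def raid5_usable_tb(num_drives, drive_tb):
    return (num_drives - 1) * drive_tb

for size_tb in (1.5, 2.0):
    print(f"3 x {size_tb}TB in RAID 5 -> {raid5_usable_tb(3, size_tb):.1f}TB usable")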

Thanks.

-Brian

4 Replies
khughes
Virtuoso

A good source of information on whitebox hardware would be here - http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

It lists quite a bit of hardware that community members have tested and found to work.

-- Kyle

"RParker wrote: I guess I was wrong, everything CAN be virtualized "

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
golddiggie
Champion

You do understand that you're setting yourself up for a world of issues by going this route, right? For one thing, it's NOT going to be cheap... You'll need a server-class motherboard, dual socket being a good minimum. You'll also really want a server-grade processor (such as the Intel Xeon 5500/5600 or 7500 series if buying new; the older 5400 series are good if you can get them). You'll want the NIC to be on the VMware HCL as well, not just the vm-help site. On top of that, you'll really need a RAID controller with BBWC (battery-backed write cache) so that it's a true hardware RAID controller. The vast majority of onboard RAID controllers are software RAID controllers, unless they are shown on the VMware HCL as fully supported (hardware) controllers. Your best bet is one using the LSI MegaRAID chipset (often found inside servers from HP and Dell)...

You could also look on either eBay or the manufacturers' outlet stores (such as the Dell Outlet) and pick up a decent base system in a workstation-class tower (the Dell T7400/T7500 are good choices; get dual processors). You most likely will still need to get a hardware RAID controller (from around $300 on up, depending on how many drives it will support) such as the PERC 6/i...

SAS drives are a far, far better choice than SATA drives too. I would avoid RAID 5 unless you have plenty of drives to feed it and can overcome the write penalty attached to RAID 5... You'll see better performance from a properly configured RAID 10 array than from the same drives in a RAID 5 array (especially with fewer than six drives); see the rough numbers sketched below. I would even go so far as to use a single drive in the host server for ESXi to reside on, and then use a SAN (not a NAS) for the VMs to reside upon. I favor iSCSI over NFS for several reasons, but test out both if you want and decide for yourself.
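(To make the write penalty concrete, a hedged back-of-the-envelope using the textbook figures of four back-end I/Os per random write for RAID 5 and two for RAID 10; the per-drive IOPS number is an illustrative assumption, not a measurement.)

# Textbook random-write penalty: RAID 5 does read-data, read-parity,
# write-data, write-parity (4 back-end I/Os per host write); RAID 10
# just writes both halves of a mirror (2 I/Os).
def effective_write_iops(num_drives, iops_per_drive, write_penalty):
    return num_drives * iops_per_drive / write_penalty

IOPS_7200RPM = 80  # assumed figure for a 7200rpm SATA drive
for n in (4, 6):
    r5 = effective_write_iops(n, IOPS_7200RPM, 4)
    r10 = effective_write_iops(n, IOPS_7200RPM, 2)
    print(f"{n} drives: RAID 5 ~{r5:.0f} random-write IOPS vs RAID 10 ~{r10:.0f}")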

My current ESXi 4.1 host is a Dell Precision Workstation T7400 with a PERC 6/i RAID controller, two pairs of SAS drives (a 146GB 15k rpm mirrored pair for ESXi, a 1TB 7200rpm mirrored pair for the datastore), 16GB of RAM (half the slots are populated, so I can easily add another 16GB), two Intel Gb server NICs (one dual port, the other quad port; I'm not using the onboard Broadcom since it won't support jumbo frames), and a DVD-ROM drive... The video card is the lowest I could get in the T7400 (a PCIe card) since it doesn't need to do a hell of a lot... It's an NVIDIA card, which is also my preference for all systems (I actually won't purchase a system with an ATI GPU in it), followed by the Intel GPUs... This system has been running 24x7 for almost three years now... I've successfully gone from ESXi 3.5, through its updates, to 4.0 U1 and U2, and now up to ESXi 4.1 without any trouble at all.

My next host might boot from a USB flash drive (a SanDisk 2-8GB Cruzer model) or just a single hard drive inside. That's due to the plans to have an iSCSI SAN in place before I get the second host server. Since I'll be moving all my VMs to the SAN, I won't need any real storage on the host... Plus, the additional RAID level and multiple spindles (I'll be going with a chassis that supports at least six drives to start) will give better performance and redundancy on the SAN... Having a pSwitch that supports jumbo frames, flow control, and VLANs means I'll be able to set that up right.
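(On why jumbo frames matter for iSCSI: a small hedged sketch of the wire-efficiency arithmetic, using standard Ethernet/IP/TCP header sizes and ignoring preamble, inter-frame gap, and iSCSI's own framing.)

# Wire efficiency = TCP payload / total Ethernet frame size.
# 18 bytes Ethernet header + FCS, 20 bytes IP, 20 bytes TCP (no options).
IP_TCP_HEADERS = 20 + 20
ETHERNET_FRAMING = 18

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS   # the MTU covers IP + TCP headers + payload
    frame = mtu + ETHERNET_FRAMING   # add Ethernet header and FCS
    print(f"MTU {mtu}: {payload / frame:.1%} of each frame is payload")

Bigger frames also mean fewer packets, and fewer interrupts, per megabyte moved, which is usually the larger part of the win.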

IF you do go with a whitebox build for the host, just be aware of (and prepared for) the fact that you could spend a lot of time trying to resolve issues with hardware not working at all, properly, or fully with ESX/ESXi... I had the T7400 up and running within an hour of booting it from the ESXi install disc... I know people who have tried to go the whitebox route and spent months trying to resolve issues (all traced back to hardware selections). If your time isn't worth anything, then go ahead. Personally, I'd rather not fight a ghetto configuration (or whitebox) to get ESX/ESXi running... I have plenty of other things I'd rather spend my time doing...

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

asatoran
Immortal

You'll likely find that the built-in RAID and NIC won't work with ESX on most non-server-grade motherboards. And even on many server-grade motherboards, the RAID is not compatible. (ESX won't work with software RAID, which is what most built-in RAID is.) So I would focus on the built-in NIC. Desktop motherboards usually use Realtek chips, which won't work with ESX. So again, server-grade. In the past, I've had success with ESX 4.0 on Intel-brand motherboards, since they use Intel NICs onboard. But ESX 4.1 fails to install on some of these without workarounds. So in the end, you may just have to resign yourself to ignoring the onboard stuff and getting separate cards for RAID and NIC.

What I did for my home rig was to ignore RAID for local storage and instead use a NAS. ESX boots from a USB stick and the datastores are connected over iSCSI and NFS to other boxes. I was using an older box with Openfiler, but am currently using a Windows 2003 whitebox. The Windows machine allows me to reuse some software RAID cards I had. Performance is sufficient for my needs, but YMMV, of course. The ESX 4.1 whitebox is an Intel DP35DP motherboard. The onboard NIC does work, along with a PCIe Intel Pro/1000 NIC. (The ESX 4.1 installer fails on this system, so I had to use a different system to create the USB stick.) But otherwise, I "saved" on the RAID card by using an external instead of an internal datastore.

Adrenaline999
Contributor

I'm a total newb when it comes to ESXi, but I was able to get it working and recover all of my Symantec Backup Exec System Recovery images to the server.

I bought an EVGA P55 FTW motherboard, an i7-875K CPU, 16GB of RAM, and a few TB of hard drives. After my initial attempt to install ESXi failed, I bought a cheap Intel NIC for $40 and stuck it in. The install went flawlessly after that and I was up and running.

Now all I need is a RAID card so I can create a massive datastore. At the moment I'm using the local SATA drive and a NAS to store my machines.

It's too bad ESXi doesn't allow software RAID (ICH8R etc.) to be used.

Hyper-V 2008 R2 works well on this rig and supports the onboard RAID controller (RAID 0/1/5/10). But for restoring backups to a VM, ESXi just seems easier.
