bigideaguys
Contributor

Training Lab - a different take - lots of individual questions


I know this question has been asked ad nauseam in various ways, I'm sure, but what I need is more of a general answer to it. I can dig up white box specs on various web sites; that's not what I'm asking. My actual question will follow, but more of my background first to help this all make more sense. Please feel free to rip apart any of my assumptions below, I don't mind criticism if it's constructive!

What I want to set up is a real physical (as opposed to virtual) lab. Please no "You can run ESXi on Workstation 8" type responses, I've moved well beyond that... I know enough now to be dangerous lol!

What I've never seen is any kind of answer relating to the setup of a bare-minimum HIGH AVAILABILITY / FAULT TOLERANT hardware-only lab (which can scale out) that is specifically for learning all the features vSphere/ESXi 5 offers... vMotion being a big one. I want something where I can pull the plug on one box to test whether I've built something that can handle hardware failures. I also want to break out the storage and learn iSCSI at the (affordable) hardware level.

Right now, so you know where I'm coming from, I'm running the following for a lab:

1. 1 physical box that is a dedicated Windows 2008 R2 AD server. A typical generic 4GB desktop box. This is always static, so I can tear down the other elements of my lab and rebuild immediately. IP: 192.168.1.5

2. 1 generic whitebox that is an all-in-one server, with all parts on the VMware HCL:

  •      32GB of RAM
  •      2 8-core AMD processors
  •      Supermicro motherboard
  •      2TB of internal drives
  •      4 NICs (though I have issues if I use more than 2, not sure why? They would constantly start/stop under ESXi 4.1)
  •      One NIC set for the VMs, another for management (though with this setup I guess that's pointless)
  •      I set this box to IP 192.168.1.2 and joined it to the domain

This whitebox runs the following Virtual Servers:

  •      A second Win 2008 R2 AD Server IP: 192.168.1.6
  •      A Windows 2008 R2 vCenter Server IP: 192.168.1.7
  •      At any given time, 2-10 various Windows 2008 R2 or Linux servers for learning other software

What I'd like to do is move away from the single monolithic ESXi box and get rid of my external primary AD server for the lab. I've got money I can spend (to a point), so this is what I was thinking I need:

  • 3 identical physical boxes that can each run the ESXi 5 hypervisor. (Number of VMs I want to run aside...) how many NICs should each box have? I'm assuming 1 is the minimum, but there will be literally no actual traffic, so I don't need to team NICs for traffic purposes alone, or deal with "cable failure" issues in regards to HA or FT. But having dedicated NICs for learning purposes, to separate out the management network vs. the VM networks vs. the vMotion network, is something else, as it relates to the configuration learning aspects.

  • I'd also like to set up the entire domain in VMs, so I'll need resources to handle the following (evenly spread out, 3-4 VMs per box):
    • 2 Win2008 R2 AD Servers that will replicate DNS and DHCP info
    • 1 Win2008 R2 Server dedicated to vCenter
    • 1 Win2008 R2 Web/SQL server
    • 2-6 Additional Win/Linux servers
    • Assume each VM server is set up expecting 2 cores and 4GB of RAM

I know that with minimal traffic and how vSphere can share resources I don't need full specs for the RAM/CPU, but I'd like the system to stay 100% live (albeit, I know, slow) if one box fails and the VMs on that box have to migrate to the other 2 and strain resources. I hope this makes sense? So would each whitebox having 16GB of RAM and an Intel or AMD 4-core CPU be adequate? Or would a 4-core CPU not be enough to handle 4 concurrent VMs, let alone more, without bogging down to unusable? What if 2 boxes failed? Could one of these boxes still manage all this? (Yes, I know it would be dog slow, but could it?)
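The failure scenarios above can be sanity-checked with a rough back-of-envelope sketch in Python. The 4:1 vCPU overcommit ratio and the 7-VM count are assumptions for illustration, not VMware sizing guidance:

```python
# Rough HA capacity check for a 3-host lab: RAM is treated as the hard
# limit, while vCPUs are allowed to overcommit (assumed 4:1 ratio).

def can_host(vms, hosts_up, host_ram_gb=16, host_cores=4,
             vm_ram_gb=4, vm_vcpus=2, cpu_overcommit=4.0):
    """Return True if the surviving hosts can hold all VMs without swapping."""
    ram_ok = vms * vm_ram_gb <= hosts_up * host_ram_gb
    cpu_ok = vms * vm_vcpus <= hosts_up * host_cores * cpu_overcommit
    return ram_ok and cpu_ok

print(can_host(7, hosts_up=3))  # all three hosts up -> True
print(can_host(7, hosts_up=2))  # one box fails: 28GB demand vs 32GB -> True
print(can_host(7, hosts_up=1))  # two boxes fail: 28GB vs 16GB -> False
```

By this crude measure, two 16GB hosts can just barely absorb a single failure, while a lone survivor would be deep into swapping, which matches the "dog slow" expectation.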

I'm also looking into something like a Buffalo Technology iSCSI NAS or a self-built whitebox OpenFiler iSCSI NAS. I'd like it to be small but fast, as in 6 x 128GB SSD drives in a RAID 5 setup, maybe? How many NICs? Just 2? 4?

I currently have a 16-port Gigabit unmanaged switch everything is running through. Should I replace this with a managed Cisco switch? More than 1? How many ports?

I know this is a ton to ask in one post, so I hope the general overarching question is apparent!

Thank you for any help and advice!

~Michael


Accepted Solutions
Datto
Expert

One other note about using rack servers for a home lab vs using desktop boxes -- the rack servers are noisy. It doesn't bother me (I guess I've spent too much time in datacenters) but for some the noise level would be too much.

Datto


16 Replies
Datto
Expert

Just my opinion...

You wouldn't have any problem running those VMs in a lab environment on the three ESXi hosts that you describe.

Two ESXi boxes could run those VMs (at a slower speed) if the third one failed (primarily because you're using SSDs as the storage).

One box might be able to run all of that very, very slowly (again because of the SSDs used as storage that you'd be swapping to) but I wouldn't plan on doing that and still being able to do much work on the VMs.

I'd put six gigabit NICs (Intel, preferably (not the CT line), or Broadcom) in each of the three ESXi servers: 2x NICs for storage multipathing, 2x NICs for vMotion/Management as Active/Standby of each other, and 2x NICs for VM traffic. If you're going to use VMware FT, dedicate one of the VM-traffic NICs to VMware FT only and the other to VM traffic only. You could also team the VMware FT NIC with the Management/vMotion team to provide more redundancy. Note that FT will make your VMs eager-zeroed thick, so the VMDKs will be sizeable rather than thin provisioned on your SSDs. Also note that your FT-capable VMs would need to be scaled down to a single vCPU, since VMware FT, at the present time, only works with single vCPUs. And with FT you're going to use two times the amount of memory for FT-engaged VMs (one memory amount for the Primary and one for the Secondary VM), but at least you could get hands-on lab experience running VMware FT.
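As a sketch of how a layout like that might be carved up on an ESXi 5.0 host with esxcli (the vSwitch names, portgroup names, and IP addresses here are invented for illustration, and vmnic numbering will vary per box):

```shell
# Dedicated vSwitch for iSCSI storage with two uplinks for multipathing
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One portgroup plus VMkernel interface per storage path (hypothetical addressing)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=192.168.1.21 --netmask=255.255.255.0

# Separate vSwitch for Management/vMotion with its own uplink pair
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=vMotion
```

The per-portgroup Active/Standby failover ordering is easier to set from the vSphere Client.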

I'd get a 24 port switch that has VLAN capability and Jumbo Frame capability.

Also, I'd make sure your CPUs have SLAT capability (EPT for Intel, RVI/NPT for AMD). This would allow you much flexibility in the future as the hypervisor technology changes and utilizes SLAT. i7 Intel CPUs have SLAT and AMD 23xx/83xx series Opterons with C2 stepping have SLAT that works with ESXi.

If you haven't already realized, memory will be much more important than almost anything else hardware-wise in a VMware vSphere environment, so buy the biggest DIMMs possible and get ESXi hosts with as many DIMM slots as possible, so you can later add more memory to each ESXi host as your lab grows.

Datto

PS: If it was me, I'd just buy affordable rack servers on eBay rather than trying to outfit white box servers.


bigideaguys
Contributor

Thank you so much for the advice! My only concern re: buying off eBay is the cost of a pre-configured server that can take more than 4GB of RAM. This is what I spec'd out at my local computer superstore (Fry's). Do you think it's adequate?

--------------------------

ASUS F1A75-M Pro motherboard              $  89.99
AMD A6-3650 2.6GHz FM1 quad-core          $ 109.99
32GB 1866 memory                          $ 399.99
Broadway case 1243MA-Black w/500W         $  34.99
8GB USB 3.0 Patriot thumb drive           $  13.99

About $650 per "server" new x 3 =         $1950.00
128GB SSD drives: $130 x 6                $ 780.00
Do you think I can find something comparable on eBay? I also like the fact that I can repurpose the boxes into other kinds of servers, workstations, desktops, or whatever. Any thoughts on the hardware I chose? Obviously, that doesn't include the NICs either, I just realized.

Last question... if I'm running most of my VMs in a 2-core/4GB RAM configuration when setting them up, what would you allocate as far as physical RAM? Is there some type of calculator out there for this purpose that takes into account your actual physical RAM/CPU specs?

Any suggestions on an inexpensive make/model of switch?
Thank you again Datto!
~Michael
Datto
Expert

As far as the physical RAM utilized by each VM running on the ESXi hosts, that will depend on what is happening inside the VMs: what you can expect out of the ESXi host for Transparent Page Sharing memory savings, and what the VMware Tools can do as far as ballooning memory savings. For instance, your vCenter VM will likely come close to utilizing the entire 4GB of physical ESXi memory assigned to it, whereas the domain controller may not be utilizing even half that amount.

Suggestion -- from the outset, put your MSSQL database server for vCenter on a separate VM so you can better utilize VMware View Composer during your training in the future. Then use that MSSQL database server for all your database needs (not just vCenter).

Putting 32GB of system memory into the ESXi hosts you show in your list from Fry's is a good idea if you can afford to do so from the outset. Unfortunately, I don't know anything about the motherboard you've chosen or whether it will even work with ESXi 5.0, but here are some websites that may be able to help you with motherboard and ESXi compatibility:

http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php

http://ultimatewhitebox.com/

Also, you'd need a 4th server for the SSD drive shared storage if you don't already have one available.

With Openfiler, I'd read up on whether Openfiler can now do multipathing correctly with ESXi 5.0 -- some of the previous versions of Openfiler that I'd used in years past wouldn't work with multipathing coming from an ESXi server (I don't use Openfiler much in my home labs anymore -- switched to Fibre SANs at home a few years back).

As far as buying comparable rack servers on eBay: for the same price as your white box servers, you'd likely be able to outfit a rack server bought on eBay with 32GB of system memory (also bought on eBay) and dual quad-core CPUs (also bought on eBay), if you stayed with AMD-based rack servers and didn't put any hard drives in them. I'm using 1U Dell SC1435, 2U Dell 2970, 1U HP DL365 G1, 2U HP DL385 G2 and 4U HP DL585 G2 servers -- note that the "G" numbers are very important when buying used HP servers.

You'd have to be patient, though, since you wouldn't be able to predict when your eBay bid would be successful, and you'd likely need to upgrade the BIOSes in those servers to the latest to make them work properly. I have quad-core Opteron 23xx/83xx CPUs in each of my home servers (stay with C2 stepping or higher to avoid the AMD TLB bug) and each has 16, 32 or 64GB of system memory. (Note that I never put hard drives into home lab ESXi servers, so I don't know much about the local RAID controllers in them.) I buy the rack servers on eBay separately from the memory and CPUs that I also buy on eBay -- it's been more economical for me to do so and allows me to upgrade as my budget allows.

Datto

scottyyyc
Enthusiast

Just a thought, but it might be easier overall to just get some used Dell (or HP) servers on eBay. There are tons of 1950s and 2950s to be had, and although they're 3-5 years old, they wouldn't be slow by any means. Plus, the slightly newer models (like a 1950 or 2950) have SAS controllers, meaning you can easily stick SATA drives in them. The other advantage is they're right smack-dab on VMware's HCL, so you wouldn't be farting around with Supermicro stuff -- you know they'll work 100% right out of the gate. Honestly, I would personally avoid whitebox/Supermicro stuff, but if it's just a home lab or test lab, it's not the end of the world. If you want to spend a bit of $$ on newer servers, all the power to you; it's less fuss and cheaper to just pick up some used Dells or HPs. Plus, even server RAM is pretty cheap (although not quite as cheap as desktop RAM), so even within your budget you could easily stuff a lot of RAM into them...

As far as NIC count goes, you generally want to split up the 3 main types of traffic -- VM and management traffic, vMotion, and iSCSI -- so 3 NICs is a good place to start. You would double this if you want physical redundancy, but you say you don't, so 3 is fine.

Don't even worry about making your 08 R2 DC physical. Why not take advantage of the powers of virtualization and make it virtual as well? If your goal is to be able to clobber and re-set-up the lab and storage, just take your DC in a gold state and export it as an OVF (to external storage). Then, if you need to start again, you can just re-import your OVF. Simple, simple. This is what I do at home (I have a used 1950 from work, and I just export my important VMs as OVFs, so if I ever want or need to screw around with things, I just re-import the OVF).
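That OVF round trip can be scripted with VMware's ovftool. The inventory path, hostnames, and datastore name below are made up for illustration:

```shell
# Export the gold-state DC from vCenter to external storage
ovftool "vi://administrator@vcenter.lab.local/Lab/vm/DC01" /mnt/external/DC01.ovf

# After rebuilding the lab, re-import it straight onto a host
ovftool -ds=datastore1 /mnt/external/DC01.ovf vi://root@esxi01.lab.local
```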

P.S. - For the record, I don't even think you can run ESXi within Workstation ;-)

Datto
Expert

For others reading this thread, another option would be to buy a Lenovo W510 or W520 laptop with an i7 quad-core CPU and put 16GB or 32GB of system memory into the laptop as well as an SSD drive. Then load VMware Workstation 8.x on it, install some nested ESXi 5.0 hosts as VMs, then on those nested ESXi 5.0 VMs run 64 bit VMs. You'd need a SLAT capable CPU, at least 16GB of system memory and an SSD drive to make that work well enough to be usable all the time as your main lab. That option also saves much money for electricity and provides a portable option if you're someone who doesn't spend much time at home.

Datto

scottyyyc
Enthusiast

I'm under the impression you can't even run ESXi within Workstation... If it's possible, this is another option as well, although the OP specifically said he didn't want to do this. And for testing HA capabilities, I can't see this being an ideal setup...

Datto
Expert

No problem running ESXi 5.0 as a VM under VMware Workstation 8.x and then running 64-bit VMs on those nested ESXi 5.0 VMs (it requires a SLAT-capable CPU for 64-bit nested VMs running on nested ESXi 5.0 hosts). Performance, of course, is less than 50% of what you'd get running ESXi 5.0 on a physical box, but if you run the nested VMs on SSDs it makes up for some of the performance drop. You can also run most of the released versions of other hypervisors as VMs on SLAT-capable CPUs under Workstation 8.x and have them run 64-bit VMs too (but not RHEL, for some reason).

Datto

Datto
Expert

Here's a link to how to run ESXi 5.0 under Workstation 8.x and other hypervisors also:

http://www.vcritical.com/2011/07/vmware-vsphere-can-virtualize-itself/

Here's VMware's notes on running nested ESXi hosts:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=200991...
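Going from memory of what those articles describe (double-check against the links above before relying on this), the key settings are roughly:

```ini
# On a physical ESXi 5.0 host, append to /etc/vmware/config to allow nesting:
vhv.allow = "TRUE"

# In the nested ESXi VM's .vmx file, so it can run 64-bit guests:
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"

# Under VMware Workstation 8.x, the per-VM equivalent in the .vmx is:
vhv.enable = "TRUE"
```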

Datto

Datto
Expert

Here's virtuallyGhetto's instructions for nesting ESXi 5.0:

http://www.virtuallyghetto.com/2011/07/how-to-enable-support-for-nested-64bit.html

Here's virtuallyGhetto's instructions on how to enable virtual FT:

http://www.virtuallyghetto.com/2011/07/how-to-enable-nested-vft-virtual-fault.html

Datto

bigideaguys
Contributor

lol! You can say that again! Another reason I want really small, quiet whiteboxes I can keep in my office in the house... the Supermicro server I built sounds like an airplane and is so loud my wife made me keep it in the garage! It's been pretty rugged though: not a single minute of downtime in well over a year, running through extreme cold of 12 degrees F (which shouldn't be a problem anyway) to over 100 degrees F daily in summer. I was concerned at first, but it never missed a beat!

Datto
Expert

By the way, even with all those nesting options, I think the original poster's idea of running real physical servers is a much better solution for a home lab -- provided the person is more of a homebody than someone who travels a lot for work, and makes the kind of money that can support the initial and ongoing costs of housing physical ESXi 5.0 hosts.

Datto

bigideaguys
Contributor

scottyyyc: thank you for the additional suggestions! I haven't played around with OVFs much but, like 100+ other things, it's just more learning to do!

Datto: I wasn't aware any laptops existed yet that would take 32GB of RAM. It's a compelling idea if I had cash to burn, but I don't think you can buy the laptops sans hard drives (though I could just yank and repurpose them in a fileserver lol!), and while they'd be much quieter and more power efficient, I still think it'd be double or more the money to go the laptop route, which is still whitebox. Plus you're paying for monitors and other components (like video/audio/etc.) which are a waste. An interesting idea, though, would be to find a manufacturer who could "modify" (and I use that term VERY loosely!) laptop systems so they DIDN'T have a monitor, but only a video output, and were built for "headless" server purposes, be it a home entertainment system, fileserver, or ESXi-type box... something along the lines of a server in a laptop case for SOHO use.

I was also completely unaware of SLAT technology. Do you have other links to more info? I shot down the nested ESXi option because even on my 8-month-old workstation (running an Intel i7, 16GB of RAM and dual RAID SSDs) I tried this briefly when Workstation 8 came out, and it denied the 64-bit VMs nested in it. I was hoping it would work. Maybe I can now just upgrade the processor?

The interesting concept, now that you bring up SLAT, is that I could have dual-purpose desktop systems in my house! I have 4 kids and a wife who all use computers. I could spec out high-end workstations for them all to use, and run Workstation 8 on them all with nested VMs. As long as they had enough resources allocated to run a very basic desktop, that could create a hell of a test lab! Some very interesting thoughts! For that matter, I could also incorporate that concept into the media servers I use on 2 TVs... wow, I can just see it now, an 8-server ESXi farm that no one would even be aware is running! My family would be stoked too on how fast their systems would be!

EDIT: Actually, an even BETTER idea, when (not if) they finally come out with laptops that CAN take 32/64/128GB of RAM, would be to give the family laptops instead of desktops! The only caveat would probably be the networking connections if they wanted to go wireless lol! Maybe they'll have to just stay plugged in, like it or not... hmmm, that means they'd have to make a laptop that could incorporate or add on multiple NICs... that could be an issue as well.


Datto
Expert

By the way, just as an example of eBay pricing for use as a home lab (not production) -- I'm currently seeing an HP DL365 G1 for sale on eBay for a buy-it-now price of $99.00 + $40 shipping (10x are available). You'd update the BIOS, throw away the 2x dual-core AMD CPUs the server has in it, and get a "matched pair" of AMD Opteron 23xx/83xx quad-core SLAT-capable CPUs with C2 stepping (minimum) at $30 per CPU delivered from eBay. Then you'd buy 8x 2GB PC2-5300P sticks of ECC memory for a $90 delivered price or less from eBay, plus $10 for a USB stick to boot ESXi 5.0 from, and for about $300 you'd have a rack server with dual quad-core CPUs and 16GB of memory that would likely run ESXi 5.0 (as a physical server) and would likely run nested ESXi 5.0 VMs running nested 64-bit VMs (as well as other flavors of hypervisor running nested, but not RHEL).
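Tallying those figures (prices exactly as quoted above):

```python
# Adding up the example eBay build, using the prices quoted in the post.
parts = {
    "HP DL365 G1 server (buy-it-now)": 99.00,
    "shipping": 40.00,
    "2x quad-core Opteron @ $30 each": 2 * 30.00,
    "8x 2GB PC2-5300P ECC memory": 90.00,
    "USB stick to boot ESXi 5.0": 10.00,
}
total = sum(parts.values())
print(f"${total:.2f}")  # -> $299.00
```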

Of course, the buyer would need shared storage also since the ESXi 5.0 host wouldn't have any hard drives in it (but the example above does come with a RAID card if you ever decided to buy some hard drives).

Just an example -- a person would need to do their own investigation for compatibility and comfort level of performing upgrades to rack servers.

Datto

Datto
Expert

Yeah, the Lenovo W510 laptop with an i7 in it can take 16GB supported (32GB if you upgrade the BIOS, don't need support, and buy certain makes/models of DIMMs -- there are notes from others on the Internet on which DIMM models to buy), and the Lenovo W520 can take 32GB of system memory supported by Lenovo. A good option for someone who travels a lot and is interested in buying a fancy laptop to do the home VMware lab job as well as function as a regular laptop simultaneously. Note that both of those laptops run very hot in normal operation.

I've got some laptops around my place that occasionally run nested ESXi VMs, which in turn run VMs on those nested ESXi hosts. The consoles of those laptops run financial trading charts for me, and in the background the nested ESXi hosts run VMs that are in their own laptop clusters within vCenter. It's fine for what I do, but you'd have to accept a performance penalty for the nested VMs running on nested ESXi hosts. Each of my laptops has a small SSD in it where the VSWP file resides for each VM (the main VMs reside on a separate mechanical hard drive inside the laptops) -- I could put much larger SSDs in those laptops, have the entire VM reside on an SSD, and get much better performance from the nested VMs if I needed it. For me, I was just looking for additional economical screens to run the financial charts on, and the multiple-laptop idea has worked fine for me in that vein.

Datto

Datto
Expert

Here's a Wikipedia page on SLAT:

http://en.wikipedia.org/wiki/Second_Level_Address_Translation

Here's a page with some summary info on SLAT-capable CPUs and other hypervisors:

https://social.technet.microsoft.com/wiki/contents/articles/1401.hyper-v-list-of-slat-capable-cpus-f...

Datto
