Has anyone tried installing ESXi 5 on the AMD FX 8120/8150 processors?
I mean, are these processors supported?
I checked the HCL, but those processors aren't listed there.
I am planning a test lab, so I wanted to know whether we can install ESXi 5 and create virtual machines on a computer that has an AMD FX 8120/8150 as its processor.
Any suggestions are welcome.
Please check the whitebox list below for unofficially supported hardware.
For VMware officially supported hardware, check the compatibility matrix.
Well, I am aware of the VMware and whitebox HCLs!
As mentioned in my post, I checked those but couldn't find the AMD FX 8120/8150 listed there.
I am curious to know if someone has run ESXi 5 on those processors and, if they did so successfully, what their hardware specs were.
FX works well for me in a LAB environment! But... not recommended for a production environment, of course.
I currently have ESXi 5 running on an AMD FX 6100 and it has performed flawlessly (with a *FX motherboard that supports IOMMU for DirectPath I/O). It hosts a lab for a legacy network of 8 VMs running various OSes and services (Novell, WinXP, Win7 Pro, Win2k8R2 with MSSQL, Exchange, AD).
I don't expect you will see the AMD FX series on the HCL, as it is not a server processor.
Others have built with the AMD FX 8150, judging by the reviews on Newegg for that product. I'm not sure of the specific microcode differences between the FX 8150 and the 8120, as I could not locate a build with the FX-8120.
Suggest reviewing the work of others over at www.vm-help.com.
http://www.vm-help.com/forum/viewtopic.php?f=27&t=3573&p=13719&hilit=AMD#p13719 - FX-6100 Good MB w/ PassThrough!
http://www.vm-help.com/forum/viewtopic.php?f=27&t=3459&p=13248&hilit=AMD#p13248 - FX-8150 but this Build's MB does not support PassThrough!
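Since the FX chips won't be on the HCL, a quick way to rule out a non-starter before buying is to boot any Linux live CD on the candidate box and confirm the CPU actually advertises AMD-V (the `svm` flag in /proc/cpuinfo). A minimal sketch of that check, assuming a standard Linux /proc/cpuinfo layout:

```python
def has_amd_v(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo advertises 'svm' (AMD-V)."""
    for line in cpuinfo_text.splitlines():
        # Match 'svm' as a whole word in the flags list, not as a substring.
        if line.startswith("flags") and "svm" in line.split(":")[-1].split():
            return True
    return False

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print("AMD-V present" if has_amd_v(f.read()) else "AMD-V missing")
```

IOMMU support (for DirectPath I/O) is a separate motherboard/BIOS question, so this only rules the CPU in or out.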
Glad to know the FX series works for a LAB, as I would be using the build for a lab environment.
But may I request you to elaborate: did you install ESXi directly on the physical hardware, or inside a VM on a physical machine that had an FX CPU?
If ESXi doesn't work well when installed directly on physical hardware with an AMD FX 8150/8120 CPU, then
would it work if we install ESXi as a virtual machine using VMware Workstation on a physical machine that has an AMD FX 8150/8120 CPU?
I guess that shouldn't be an issue, not sure though!! :smileyconfused:
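For what it's worth, running ESXi 5 as a guest inside VMware Workstation is a common lab workaround. As I understand it (verify the option names against your Workstation version), the Workstation VM needs the ESX guest OS type and hardware virtualization passed through to the guest; the relevant .vmx lines look roughly like:

```
guestOS = "vmkernel5"   # the "VMware ESXi 5" guest OS type
vhv.enable = "TRUE"     # pass AMD-V/RVI through to the nested hypervisor
```

Whether the FX CPU itself cooperates with nesting is a separate question from whether Workstation offers the option.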
My recent ESXi 5.0 whitebox build.
I recently had a need for a virtual lab. My old one did not have enough juice to run enough 2008R2 servers for my purposes. I chose the following components and added them to an old case with a DVD drive. All required functionality works. Total cost was about $550.
GIGABYTE GA-970A-D3 AM3+ AMD 970 SATA 6Gb/s USB 3.0 ATX AMD Motherboard - $89.99. This is a fairly current board and will support 32GB of memory. The NIC works, as does the SATA controller. I have not tested the RAID functionality (it should support RAID 0, 5, 10). Not sure if DirectPath I/O is supported, but at 1 Gb/s the NIC is good enough for a small lab. A slow hard drive is likely faster than a gigabit network and SAN in most use cases.
Seagate Barracuda ST1000DM003 1TB 7200 RPM SATA 6.0Gb/s 3.5" Internal Hard Drive (bare drive) - $99.99. I actually have not put this in, as it arrived DOA. I used an old ~200GB drive I had around; any sufficiently large drive will do. Tested the old drive with thin provisioning and got six 2008R2 boxes up (Windows Roles and Features only; 3 domains; DNS, DHCP: no waiting).
AMD FX-6100 Zambezi 3.3GHz Socket AM3+ 95W Six-Core Desktop Processor FD6100WMGUSBX - $149.99. This was a cost and heat choice; an eight-core processor could be used instead. This one supports AMD-V + IOMMU.
SAPPHIRE Radeon HD 5450 1GB 64-bit DDR3 PCI Express 2.1 x16 HDCP Ready Low Profile Ready Video Card (100292DDR3L) - $39.99. This was just a relatively cheap video card. I figured if the ESXi thing did not work, I could make a media center out of the box. It was listed by someone else as compatible. A bit hot for me, but whatever.
CORSAIR Vengeance 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMZ16GX3M2A1600C10 - $119.99. Just the cheapest compatible memory from a manufacturer I've heard of. Using 8GB sticks gives the option to upgrade to 32GB later by adding another two 8GB sticks. (4GB is recommended for each 2k8 instance, although under 2GB is generally used. Not sure how far shared memory pages will allow for overcommitment.)
Additional items: PSU: purchased a generic ~600W unit at the yellow box store (~$60). Old case: $0. Old IDE DVD drive: $0 (the MB supports USB boot). A small GbE switch: not really part of the build, but you will want a switch of some kind...
Notes: 1) Everything tested works. I had concerns about the NIC and SATA controller, but they are both fine. 2) The hard disk specified is probably overkill. Running the box with an ancient 200GB drive works fine for me, so speed is not that much of an issue; I just went with 7200RPM as it's as fast as you get cheaply. I have run 15 production Citrix servers during peak hours over a 1 Gb/s link, so you can use a SAN if you want. Or you may wish to just use multiple junk drives (possibly as RAID 1, 10, or JBOD). 3) The hottest thing seems to be the video card.
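On the memory question in the parts list above (how far page sharing lets you overcommit): the starting point is simple arithmetic. A back-of-envelope sketch with illustrative numbers, since the actual savings from page sharing and ballooning vary by workload and are not measured here:

```python
def overcommit_ratio(host_gb: float, vms: int, gb_per_vm: float) -> float:
    """Configured guest memory divided by physical host memory."""
    return (vms * gb_per_vm) / host_gb

# Six 2k8R2 VMs at the recommended 4 GB each on the 16 GB build above:
print(f"{overcommit_ratio(host_gb=16, vms=6, gb_per_vm=4):.2f}x")  # 1.50x
```

Anything over 1.0x means the hypervisor is relying on sharing/ballooning/swap to cover the gap, so how far you can push it depends on how similar the guests are.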
I am looking to buy this processor and an FX chipset for a home lab. My key concern is being able to run nested VMs. In other words, ESXi running ESXi running 64-bit VMs. Has anyone tried this yet with this setup?
I tried installing ESXi directly on hardware with an FX processor; unfortunately, it failed to load for me. I guess it requires the hardware listed in the HCL.
Creating an ESXi VM inside VMware Workstation on FX hardware works fine for me, though.
Thanks Chintan, I appreciate you trying this. From my research it appears there is a bug on the FX/Bulldozer CPUs that prevents nested virtualization anyway; I found a post from a few weeks ago confirming this. Looks like I will have to go with an Intel chipset/processor.
I am running an ESXi 5 server on an AMD FX-8120 with a Biostar TA990FXE motherboard and 32 GB of memory. I had no problems installing ESXi 5 on this rig. The only thing that doesn't work is the integrated NIC. I currently have 7 VMs that run 24 hours a day and 5 that are turned on as needed. This machine far outperforms other ESX servers I have built in the past.
I read this thread before trying it out myself and it turns out that a lot of the information on here is wrong.
My Test Lab Specs:
- 1 x Gigabyte GA-78LMT-USB3, Socket AM3+, AMD 760G + SB710 chipset, ATI Radeon HD3000 graphics, dual-channel DDR3 1333/1066/800 MHz, 7.1-channel HD audio, Gigabit LAN, 6x SATA 3Gb/s, 8x USB 2.0, 4x USB 3.0, DVI/VGA/HDMI, Micro ATX - $64.99
- 1 x AMD Bulldozer X8 FX-8120 (125W), eight-core, Socket AM3+, 3.1GHz, 8MB cache, 32nm (FD8120FRGUBOX) - $154.99
- 2 x G.SKILL Ripjaws X Series 16GB (2x8GB) DDR3 1333MHz CL9 dual-channel kit (F3-10666CL9D-16GBXL) - $69.99 each, $139.98
- 1 x Antec VP450 450W continuous power supply - $32.99
I then threw in 2 Vertex 4s and 2 WD Black 1TB HDs I had lying around.
I'm currently running:
- 4 x Server 2008 R2
- 3 x Win 7 Pro
- 2 x Ubuntu 12.04 Server
I also tried a nested VMware install in Server 2008 R2 and it worked just fine.
I thought I would share this with you guys since we all want a powerful test lab environment while spending the least amount of money.
Yes, full nested virtualization works fine at this time. Under 5.1 it's even better, as long as your board supports the full virtualization feature set. It's been working well in my lab also.
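For anyone repeating this: as I understand it (worth double-checking for your build), enabling the nested hypervisor differs between versions. On ESXi 5.0 it is a host-wide setting; on 5.1 it moved to a per-VM option:

```
# ESXi 5.0 - append to /etc/vmware/config on the host:
vhv.allow = "TRUE"

# ESXi 5.1 - instead, add to the nested ESXi VM's .vmx file:
vhv.enable = "TRUE"
```

The per-VM approach on 5.1 means only the VMs that actually need nesting pay for it.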
I'm running ESXi on an FX-8120 and an ASRock 970 Extreme 3 motherboard with 12 GB of RAM. It's been running 24x7 for the past month or so without any issues to speak of. The motherboard is supposed to support DirectPath I/O with the change of a BIOS setting, but I haven't tested that yet, as I don't have any need for it in my lab environment. I'm also running a second ESXi server on an AMD X2 250 with 16 GB of RAM. It's been running even longer without any issues (I believe it's an AMD 785G motherboard made by Gigabyte, but I can't remember the specific model). Both support the motherboard's gigabit LAN out of the box on 5.0 and 5.1, and they've been working with my NFS server without any hiccups.
I'm impressed by ESXi's stability on non-supported hardware. So many other enterprise software suites I've worked with are so narrowly designed for the HCL that they wind up being at best unstable on unofficial hardware and at worst completely non-functional. Makes my job easier, that's for sure.