vintagedon's Posts

Good Evening: I'm trying to determine the bare minimum firewall ports for ESXi 6/vCenter 6 that I must leave open incoming from the Net for management. Both my ESXi servers and my vCenter are behind a firewall, with intra-zone communication open between them and open communication OUT to the web, so there are no issues communicating between vCenter and the ESXi servers.  On a management IP, is there any reason to leave any ports open to the world incoming?  DNS and NTP will be outgoing, so I can't really think of any.  But before I lock 'em down for good, I'm asking just in case someone knows something I don't, or in case I'm geeking out and missing something obvious. For management, I'll be using a VPN connection to reach the web client and do any administration. As Always, Don
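For what it's worth, a quick way to verify the lockdown from outside the firewall is a simple port check. This is only a sketch: the IP below is a placeholder, and 22/80/443/902/903 are just the usual ESXi/vCenter management ports, not an authoritative list.

# Check from outside the firewall that nothing answers on the management IP.
import socket

MGMT_IP = "203.0.113.10"          # placeholder -- your public-facing management IP
PORTS = [22, 80, 443, 902, 903]   # common ESXi/vCenter management ports

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        reachable = sock.connect_ex((MGMT_IP, port)) == 0
    finally:
        sock.close()
    print(f"{MGMT_IP}:{port} -> {'OPEN' if reachable else 'closed/filtered'}")

If everything comes back closed/filtered and the VPN still gets you to the web client, the lockdown is doing its job.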
SJNobles, I've done both: passing through the individual devices and passing through an entire USB card to the machine; the latter I've found more reliable.  With the XBMC VMs, I've had some issues with wireless keyboards going offline, most likely from them sleeping and ESXi then disconnecting them (this is only conjecture; I haven't investigated much further). My go-to motherboard has been the ASRock 970 Extreme3 or the Gigabyte GA-970A-UD3.  You can see my lab and full information on my builds at http://thehomeserverblog.com
What I'm doing is using USB over CAT5 adapters (keyboard/mouse don't need to be USB 2.0, so you can get away with the cheaper adapters).  So, for example, I pass my video over CAT6 using an HDMI-->CAT6 adapter, and then I've run another line of CAT6 that I pass USB over and terminate in a powered USB hub.  Off that hub, I can run the mouse, keyboard, USB drives, and whatever else I need.  At the moment, I have 4 VMs running like this: my son's computer (full-on gaming; I also run a second monitor by dropping DisplayPort to VGA and then passing the VGA over CAT6), my wife's computer (general use and light gaming), and two XBMC VMs.  It takes a minimum of 2 nodes to do this, since each passed-through video card obviously needs its own PCI-e slot.

Currently, I'm running 5.0, as there are some proven issues with passthrough on 5.1.  As a test, I made a clone of my USB stick for a node and then upgraded that node to 5.1, and the VMs using passthrough on that node no longer worked.  I just plugged the clone of the USB stick back in and went back to normal.

This month has seen some fairly large changes to the lab (I just added a homemade standalone 32TB SAN, for example), and I'll be updating the site soon (been SO busy), but this basic setup has remained the same.  I'm sure there are other ways to do it, but I've found this works wonderfully for my needs.
I'm successfully passing through a 7970, a low-profile 6670, a 7950, and a 7850.  I've also used a couple of 5xxx-series cards.  My son's VM uses the 7970, and he games on two 24" monitors, passing HDMI over CAT6 and DisplayPort-to-VGA over CAT6.  Works perfectly.
Actually, I can do better than that.  My blog at http://thehomeserverblog.com is dedicated to this and has my complete builds and full information on my setup.  I'm always open to questions, and the comments sections on the builds have some great input from others, too.
My apologies, that's a hasty mistype (quick interjection into the conversation from work).  That's my son's old desktop card; I'm running an HD7850 in the VM, and low profile HD6670s in the XBMC VMs.
Agree completely with rmathis about 5.0 vs. 5.1 for this particular purpose.  USB passthrough on 5.1 is currently broken, or at least extremely unreliable, and I can attest to that after testing 5.1 on one of my nodes.  Out of the specs you listed, that's the only potential issue I see, hardware or otherwise.

My lab is pretty extensive (for a home lab): I run three XBMC VMs plus two VMs that I use for desktops (my wife's and son's), pass through a total of 5 video cards to these VMs, and run HDMI and USB over CAT6 for all 5 VMs.  My son's VM has a GTX560 passed through to it and plays COD: Black Ops and other demanding games without issue.  I haven't had any specific issues with the HDMI sound other than driver-related items.
@derickso ... appreciate the compliments.  The blog is just me trying to be helpful.  This is a very long thread, and finding solid information for building ESXi nodes with consumer-grade hardware that is (a) cost-effective in RAM/cores-to-money ratios and (b) of verified compatibility is just hard.  So I thought I would share the vetted builds I had done.  For an 8-core, 32GB RAM box at $400 (not including the cost of your case), you just can't beat that.

Anyway, I'm almost done with my post on the video card work, and I've had quite a few tips from readers on what they're doing, so I'll incorporate those.  I've been sick this week, so I didn't have time to post it.  Expect it this coming weekend, plus some additional hardware I've found that works, including two new AMD motherboards.
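Quick math on that $400 figure, for anyone weighing it. This is just a sketch using the numbers quoted in this post, nothing new:

# Cost-per-resource math for the ~$400, 8-core, 32GB build (case not included).
build_cost = 400.00   # USD, as quoted above
cores = 8
ram_gb = 32

print(f"${build_cost / cores:.2f} per core")     # $50.00 per core
print(f"${build_cost / ram_gb:.2f} per GB RAM")  # $12.50 per GB of RAM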
I've been successfully doing this for a while now, and I did try a 5450, but found it to have choppy performance, similar to what you're having.  So, IMO, it's your card, not specifically your build.  You could always keep that as the default video for your console (I'm using simple 8MB ATI Rage cards for my console graphics).

The two cards I've successfully built virtual-machine HTPCs with are the HD6450 and the HIS Radeon HD6670 PCI-e x16 Low-Profile.  The latter is the card I've passed through to a VM for my son to use as his daily desktop (in fact, I've virtualized all but a single physical PC in the house), and he games on it at very respectable frame rates.

You can see more of my builds and experiences at thehomeserverblog.com
So I've been following this thread as a lurker for a couple of weeks, and just wanted to share my specific success story.  My eventual goal was to build an ESXi cluster that was not only a lab for work (I'm a Windows/Linux systems admin for an enterprise hosting company), but also allowed me to virtualize some of my home servers and HTPCs.  To do this cost-effectively, I went with all AMD hardware (a mix of server and desktop boards), and ended up with a 3-node, 32-core, 128GB RAM cluster.  You can also see a more detailed version of this, with pictures of the build and screenshots of the ESXi screens, at: http://thehomeserverblog.com/esxi/esxi-5-0-amd-whitebox-server-for-500-with-passthrough-iommu-build-2/  I'm also compiling a list of vetted builds for ESXi whiteboxes, and I'd love to have anyone add any specific builds there that they have running.

The most success I have had is with a whitebox I built from consumer parts.  Considering that it took a bunch of research and some failures along the way, I thought I would post my list and specific configuration in case someone wants to duplicate or learn from this.  Although I did have some headaches here and there, all in all this has been a painless build.  Note that I'm running 4 gigabit NICs in every node simply because this is a lab, too, and I've got all my traffic properly segregated; they are not at all necessary in the long run.

With one of the nodes, I experimented with passthrough and got not only a domain controller running with 8 2TB drives passed through to it (4 from the mobo, 4 from a RAID card), but also a working HTPC with a passed-through video card that functions as my primary XBMC HTPC and gaming center for the living room (Steam, MAME, Dolphin, etc.).  I passed through a video card and USB ports, and ran USB over CAT6 to a powered USB hub in the living room.  All of this is in a 2U case in a custom-built home server rack.  It's all running stably, of course with a few limitations.

My hardware list ended up like this:
Motherboard: ASRock 970 Extreme3
CPU: AMD FX-8120 Zambezi 3.1GHz Socket AM3+ 125W Eight-Core
RAM: 32GB (4x8GB) DDR3-1333

The slot configuration on the mobo looks like this:
PCI-e x16: Radeon HD6670 (passthrough to VM)
PCI-e x4 : LSI SAS3041E 4-Port SAS/SATA PCI-e x4 (passthrough to VM)
PCI-e x1 : GB NIC (Realtek 8168, used by ESXi host)
PCI-e x1 : GB NIC (Realtek 8168, used by ESXi host)
PCI      : GB NIC (Realtek 8169, used by ESXi host)
PCI      : ATI Rage XL Pro 8MB PCI video card (console video)

Drives: Interestingly enough, if you pass through the on-board SATA controller on this board (there are 5 SATA ports), the 5th port actually stays available for use by the ESXi host.  This is nice because, as you know, VMs with passthrough devices are not eligible for vMotion anyway.  This allowed me to install ESXi to a hard drive and have a local datastore for the HTPC, which wasn't going anywhere anyway.  It also freed up the USB ports for passthrough if I wanted them.
"Local" drive as datastore: 1TB Hitachi Ultrastar
"NAS" drives passed to VM: 8 x 2TB WD Green drives

HOMESERVER
The first of two "passthrough" VMs in this setup is my domain controller/game server/NAS.  It's running SBS 2011 Essentials and has the motherboard SATA controller passed through as well as the LSI card, for a total of 8 x 2TB Green drives.  This worked flawlessly and required not a bit of configuration.  The SATA controller and LSI controller "just worked": assigned them, booted up, Windows installed the hardware, and it was off and running.
I used FlexRAID to software-RAID these drives into a single ~12.75TB volume that holds my media (movies, TV, music) and the profiles for the house accounts, and it serves Windows shares for various folders.  In addition, it runs an in-house WoW server and a Minecraft server.

HTPC
The second and final "passthrough" VM on this node is the primary HTPC for the house.  It has Windows 7 Ultimate 32-bit installed and runs XBMC, Steam (w/~200 games), MAME, Dolphin Emulator, and a small host of other games and emulators.  The HD6670 showed up as two cards (one dependent on the other), so both are passed through; the second is the HDMI sound card.  I had some initial flakiness with the HDMI sound, but after two reboots once I installed the drivers, it seemed to disappear.  Video/sound runs over a 50' shielded HDMI cable to my TV in the living room.  Once I had video on the TV, I selected it as my primary device and completely disabled the "other" display, which is the console.

USB also works; I'm running USB over CAT6 (with an adapter) to a hub where I've hooked up a wireless HTPC keyboard, an Xbox 360 Wireless Controller PC adapter, a Bluetooth adapter for my WiiMotes (Dolphin emulator), my HTPC remote, and so on.  No issues here that I've noticed either.  Hardware acceleration, according to XBMC, is working, and I can watch 1080p YouTube videos without issue.

Thoughts About the Build

RAM: I have not been able to get above 2GB of RAM on the HTPC VM and remain stable, but I haven't had much incentive; the 2GB works for my particular application.  That said, my next project is to virtualize a work computer running 3 monitors using this same scheme, and I *will* need more RAM for it, so I'll be pushing the limits there to see what I can do.

USB: I realize my application is pretty specific, but I'm running USB over CAT6 (my whole house is wired with shielded CAT6A) to a powered USB hub, and this works wonderfully.  I run about 50' and get 10 USB ports at the end.  Nothing I've plugged in has failed or given me any issue.

Sound: HDMI sound was pretty flaky for a while, and I had almost given up when it started working out of the blue.  In the meantime, I was using a USB 7.1 surround sound card that worked perfectly over USB, pumping sound to my home theatre sound system.

Cost of the Whitebox: Total with deals off eBay was $530.  The LSI card was $15, the video card was $8, the GB NICs were $6, the RAM was $120, and so on.  I consider this a great deal for an 8-core, 32GB ESXi node.  I have two of these, plus an ASUS KGPE-D16 running dual 6128s (16 cores total) with 64GB of RAM.  Other equipment for the lab includes two 2-bay NAS boxes delivering iSCSI targets for high availability, two 24-port gigabit smart switches, a Juniper SSG5, and a 3000VA rack-mount UPS.  Total lab cost was just under $2,000.

Next project: With the success of the passthroughs, I'm moving on to virtualizing the PCs in the house and the other HTPC.  My eventual goal is to have all but a single PC in the house virtualized, all running on passthrough video and USB.  I'll continue to share anything I learn as I move forward.
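As an aside, if you want to sanity-check a node like this before assigning devices to a VM, the vSphere API exposes the passthrough state of every PCI device. This isn't from the build notes above; it's just a minimal pyVmomi sketch, with the host name and credentials as placeholders.

# List which PCI devices the host reports as passthrough-capable and whether
# passthrough is currently enabled. Host/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi-node-1.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Map PCI IDs to readable names so the output is usable
        names = {d.id: f"{d.vendorName} {d.deviceName}" for d in host.hardware.pciDevice}
        print(f"== {host.name} ==")
        for pt in host.config.pciPassthruInfo:
            if pt.passthruCapable:
                state = "enabled" if pt.passthruEnabled else "capable, not enabled"
                print(f"  {pt.id:14s} {names.get(pt.id, 'unknown device'):45s} {state}")
finally:
    Disconnect(si)

Devices showing "enabled" are the ones you can hand to a VM (after the reboot ESXi asks for when you toggle passthrough).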