TopHatProductio
Hot Shot

New Server Project

Hello! It's been a while since I last posted here with my own topic. I now have a dedicated ESXi server in the works. The server project is meant to replace (and exceed) my previous workstation - a Dell Precision T7500. Here are the specs for the hardware:

 

HPE ProLiant DL580 G7

 

 

    OS   :: VMware ESXi 6.5u3 Enterprise Plus
    CPU  :: 4x Intel Xeon E7-8870's (10c/20t each; 40c/80t total)
    RAM  :: 256GB (64x4GB) PC3-10600R DDR3-1333 ECC
    PCIe :: 1x HP 512843-001/591196-001 System I/O board + 
                1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board
    GPU  :: 1x NVIDIA GeForce GTX Titan Xp +
                1x AMD FirePro S9300 x2 (2x "AMD Radeon Fury X's")
    SFX  :: 1x Creative Sound Blaster Audigy Rx
    NIC  :: 1x HPE NC524SFP (489892-B21) +
                2x Silicom PE310G4SPI9L-XR-CX3's
    STR  :: 1x HP Smart Array P410i Controller (integrated) +
                1x HGST HUSMM8040ASS200 MLC 400GB SSD (ESXi, vCenter Appliance, ISOs) + 
                4x HP 507127-B21 300GB HDDs (ESXi guest datastores) +
                1x Western Digital WD Blue 3D NAND 500GB SSD + 
                1x Intel 320 Series SSDSA2CW600G3 600GB SSD +
                1x Seagate Video ST500VT003 500GB HDD
    STR  :: 1x LSI SAS 9201-16e HBA SAS card +
                1x Mini-SAS SFF-8088 cable + 
                        1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) + 
                                4x Hitachi Ultrastar HUH728080AL4205 8TB HDDs +
                                4x IBM Storwize XIV v7000 98Y3241 4TB HDDs
    I/O  :: 1x Inateck KU8212 (USB 3.2) +
                1x Logitech K845 (Cherry MX Blue) +
                1x Dell MS819 Wired Mouse
            1x Sonnet Allegro USB3-PRO-4P10-E (USB 3.X) +
                1x LG WH16NS40 BD-RE ODD
    PRP  :: 1x Samsung ViewFinity S70A UHD 32" (S32A700)
            1x Sony Optiarc Blu-ray drive
    PSU  :: 4x HP 1200W PSUs (441830-001/438203-001)

 

 


The details for the ProLiant DL580 Gen9 will appear here once data migration is complete. VMware Horizon (VDI) will have to wait for a future phase (if implemented at all). The current state of self-hosted VDI is Windows-centric, with second-class support for Linux and no proper support for macOS.

The planned software/VM configurations have been moved back to the LTT post, and will be changing often for the foreseeable future.

Product links and details can be found here.

 

ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it. No harm done. A small amount of thin provisioning/overcommitment (RAM only) won't hurt. macOS and Linux would have gotten a Radeon/FirePro (e.g., an Rx Vega 64) for best compatibility and stability, but market forces originally prevented this. Windows 10 gets the Audigy Rx and a Titan Xp. The macOS and Linux VMs get whatever audio the FirePro S9300 x2 can provide. The whole purpose of Nextcloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though).
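For anyone wanting to keep an eye on how far the RAM actually gets overbooked, here's a minimal pyVmomi sketch that compares the vRAM granted to all VMs against the host's physical memory. The hostname and credentials are placeholders; assumes the pyVmomi package is installed and the host is reachable on 443:

```python
# Minimal RAM-overcommit check - esxi.lan and the credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi.lan", user="root", pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # Sum configured vRAM across all VMs, and physical RAM across hosts.
    granted_mb = sum(vm.summary.config.memorySizeMB or 0 for vm in vms.view)
    physical_mb = sum(h.summary.hardware.memorySize for h in hosts.view) // 2**20

    print(f"vRAM granted : {granted_mb} MB")
    print(f"physical RAM : {physical_mb} MB")
    print(f"overcommit   : {granted_mb / physical_mb:.2f}x")
finally:
    Disconnect(si)
```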

 

There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).

 

P.S. Out of all the sites that I've used, this forum has one of the best WYSIWYG editors I've seen in a while :)

Kudos to the devs!

TopHatProductio
Hot Shot

Finally managed to get incoming calls working (albeit with meh audio quality) on FreePBX. Tested using MicroSIP as the softphone. Still need to get outbound calls working. Used this tutorial to get everything configured properly. Once FreePBX is working as intended, it'll be time for YaCy Grid...
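For anyone following along, a quick way to sanity-check that the PBX is answering SIP at all (before blaming the softphone) is an OPTIONS ping. A minimal sketch, assuming a FreePBX box reachable at pbx.lan on UDP 5060 (the address is a placeholder):

```python
# Fire a SIP OPTIONS request at the PBX and print the first response line.
import socket, uuid

PBX = ("pbx.lan", 5060)  # placeholder FreePBX host/port
cid = uuid.uuid4().hex
msg = (
    f"OPTIONS sip:{PBX[0]} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK{cid[:8]}\r\n"
    f"From: <sip:probe@{PBX[0]}>;tag={cid[:6]}\r\n"
    f"To: <sip:{PBX[0]}>\r\n"
    f"Call-ID: {cid}@probe\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n\r\n"
)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(3)  # raises socket.timeout if the PBX never answers
s.sendto(msg.encode(), PBX)
print(s.recv(4096).decode(errors="replace").splitlines()[0])  # expect "SIP/2.0 200 OK"
```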

TopHatProductio
Hot Shot

Summary of recent changes:

  • Finally found an easy DDNS solution (see the sketch below this list).
  • Still troubleshooting that issue with FreePBX.
  • Converted the Windows Server, Win10, and Artix VMs to UEFI.
  • Troubleshooting a potential permissions issue in Elasticsearch (YaCy Grid).
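For anyone who'd rather roll their own updater, here's a minimal sketch of the idea against the Cloudflare DNS API - not necessarily the solution I landed on. The token, zone/record IDs, and hostname are all placeholders; assumes the requests package:

```python
# Hypothetical Cloudflare DDNS updater - all identifiers below are placeholders.
import requests

TOKEN  = "cloudflare-api-token"  # scoped to DNS:Edit on the zone
ZONE   = "zone-id"
RECORD = "dns-record-id"
NAME   = "home.example.net"      # the A record to keep pointed at this LAN

ip = requests.get("https://api.ipify.org", timeout=10).text  # current WAN IP
r = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE}/dns_records/{RECORD}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": "A", "name": NAME, "content": ip, "ttl": 120, "proxied": False},
    timeout=10,
)
r.raise_for_status()
print("updated", NAME, "->", ip)
```

Drop that in cron every few minutes and the record follows the WAN IP around.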

Still have to convert/reinstall FreePBX to GPT/UEFI this weekend. After that, I can start working on the Bliss OS VM...

TopHatProductio
Hot Shot

I was supposed to troubleshoot the outbound calling issue on FreePBX this weekend, but ended up going out of town to a place where WiFi and cell reception were meh. I enjoyed myself and got to see a movie. When I got back (last night), I stayed up way past midnight to back up > reinstall > restore FreePBX on UEFI. Did not feel too hot at work today, but that's one less task left. Once outbound calling works, I need to port my Google Voice number over and work on configuring SMS. Then I'll be working on the YaCy Grid container. I'm considering putting Sunshine onto all GPU-equipped VMs in the near future. It'd be a nice alternative to RustDesk, until they finally introduce GPU acceleration. Still need to plan out the Bliss OS (Android) VM, and that could use a GPU (Rx 6700?). Should I move the G7 to ESXi 6.7, and have the Gen9 running vSphere 7?

I may have forgotten something(s) at this point, but gotta keep moving...

TopHatProductio
Hot Shot

Finally resolved the outbound calling issue. Now I'm focusing on an issue with background noise during calls. Once I get SMS working, I'll make the decision on whether to port my Google Voice number over to VoIP.ms. After that, I'll be working on the YaCy Grid container. While I would like to have Sunshine on all GPU-equipped VMs, I'm not sure how practical it'd be to implement (esp. seeing that I already have RustDesk). The Bliss OS (Android x86) VM will be coming later this year, and will be using a GPU (Rx 6700 XT). Once all VMs are ready, I'll move from ESXi 6.5u3 to 6.7u3. Still need to purchase EaseUS Backup Server licenses for my remaining devices (that have no current backups). Still haven't figured out how VDI will happen on the Gen9...

On a side note, I now wonder if the Linux version of Sunshine can be built to run on Android...

TopHatProductio
Hot Shot

Another day, another FreePBX issue to troubleshoot. This time, trying to configure SMS/MMS.
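One way to split the problem: send a message straight through the VoIP.ms REST API first - if that works, the DID can do SMS and the trouble is on the FreePBX side. A rough sketch with placeholder credentials and numbers, assuming API access is enabled in the VoIP.ms portal:

```python
# Send a test SMS via the VoIP.ms REST API - credentials/numbers are placeholders.
import requests

params = {
    "api_username": "user@example.net",  # VoIP.ms account email
    "api_password": "api-password",      # API password set in the portal
    "method": "sendSMS",
    "did": "5551234567",                 # your SMS-enabled DID
    "dst": "5557654321",                 # destination number
    "message": "FreePBX SMS test",
}
r = requests.get("https://voip.ms/api/v1/rest.php", params=params, timeout=10)
print(r.json())  # expect {"status": "success", ...} if the DID can send SMS
```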

TopHatProductio
Hot Shot

Should I kick out the Radeon Pro v320, in favour of the Pro Duo (Fiji) instead? Keep in mind, the v320 is supposed to replace the GTX Titan Z (a dual-GPU card). Dual-GPU cards can potentially be used in 2 separate VMs simultaneously, without the need for SR-IOV or GRID, since each GPU on the board shows up as its own PCIe device. The only issue would be video output(s).
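The reason this works is that a dual-GPU board hangs both GPUs off an onboard PCIe bridge, so the host sees two separate devices that can be passed through independently. A quick way to confirm from any Linux box with the card installed:

```python
# List AMD PCI devices to confirm a dual-GPU card enumerates as two
# separate controllers (plus the PLX/PEX bridge between them).
import subprocess

out = subprocess.check_output(["lspci", "-nn"], text=True)
for line in out.splitlines():
    if "[1002:" in line:  # 1002 = AMD/ATI PCI vendor ID
        print(line)
# Expect two "VGA compatible"/"Display controller" entries at different
# bus addresses - each one is independently assignable to a VM.
```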

TopHatProductio
Hot Shot

Time for another huge decision/change for the next phase of this project...

 

TopHatProductio
Hot Shot

The Radeon Pro Duo arrived in the mail today. Still have to install and test it. That will happen either tonight or tomorrow...

TopHatProductio
Hot Shot

I've been on a mission today, ever since the flashed FirePro S9300 x2 arrived in the mail:

The Radeon Pro Duo is ded. Long live the FirePro S9300 x2!

 

On a side note, I've preemptively removed the HPE PCIe ioDuo MLC 1.28TB I/O Accelerator (641255-001) and the SanDisk Fusion ioScale MLC 3.2TB Accelerator (F11-002-3T20-CS-0001). I may bring them back if the Gen9 has room for them...

TopHatProductio
Hot Shot

BlissOS didn't go over too well last night. Time for some troubleshooting...

TopHatProductio
Hot Shot

From what I've done on my end, it looks as though the FirePro S9300 x2 behaves well in a macOS guest (at least Mojave) on vSphere*. From what I've watched online, the FirePro S9300 x2 should also behave when split up between multiple Linux KVM guests running Windows 11 (which may also apply to Windows 10). Pretty sure this card runs just fine in a Linux guest as well. In all of the tests/scenarios that I've mentioned, the FirePro was flashed to act as either a Radeon Rx Fury or a Nano (the consumer variants) - though the Radeon Pro Duo also existed. I'm thinking that BlissOS could just be an outlier in this case, and a rabbit hole too deep for me to go down for this project.

As such, unless a software update for BlissOS fixes this oddity before 2023, I'm kicking it from the project for the next year or two. I'll be focusing on LibreNMS as the last major task for this phase of the server project, until I move to the Gen9. When that move happens, I may oddly enough want more FirePro S9300 x2's. While it's an old card, it fills a real gap - the need for multiple GPUs in a single PCIe slot at a (relatively) affordable price. Its space efficiency and cost benefits are tough to ignore when SR-IOV and GRID are currently either too expensive for me to implement or locked behind secret handshakes and the need to be a cloud provider.


* Please be sure to follow Apple's and VMware's requirements when it comes to virtualising macOS; this is preferably done on Apple hardware. Furthermore, you should strive to use officially supported hardware configurations and components. Going outside of these will make it next-to-impossible to receive official support.

TopHatProductio
Hot Shot

I've installed LibreNMS, but haven't learned how to get device auto-detection working yet. Installed Cronicle and used it to resolve a scheduled-task issue with Nextcloud. Now working on enabling Nextcloud notify_push and learning more about LibreNMS. BlissOS is gone from the project, and I'm closing in on the last major tasks of this phase of the server project. The next phase requires the Gen9, and I can't hop onto that just yet. Also wanting to get a 2nd FirePro S9300 X2 and a Titan RTX...
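Until auto-detection behaves, devices can also be added through the LibreNMS REST API instead of the web UI - handy for scripting a whole batch of known hosts. A minimal sketch; the URL, token, and SNMP details are placeholders:

```python
# Add a device to LibreNMS via its REST API - URL/token/host are placeholders.
import requests

LIBRENMS = "http://librenms.lan"
TOKEN = "api-token-from-the-web-ui"

r = requests.post(
    f"{LIBRENMS}/api/v0/devices",
    headers={"X-Auth-Token": TOKEN},
    json={"hostname": "10.0.0.42", "version": "v2c", "community": "public"},
    timeout=10,
)
print(r.status_code, r.json())
```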

TopHatProductio
Hot Shot

In the wake of still not having figured out LibreNMS's device/host auto-detection, I've gone ahead and added many of my commonly-accessed app/service IPv4 addresses by hand. Those include:

  • OOB management appliances
  • multi-node/cluster management instances
  • individual virtual machines
  • hypervisor hosts
  • default gateway for network bridge

I'm also running a simple/quick nmap scan to look for any obvious hosts that I missed (sketch below). I've avoided adding:

  • Docker containers
  • switches that comprise the network bridge

for the time being. All of my Docker containers are on one VM. If I ever want to analyse traffic for an individual container, I can still add their individual hostnames later. As for the network switches, all traffic going through them either originates from the default gateway or the DL580 itself (either hypervisor host or one of the individual VMs). If the time ever comes, I can add the switches later as well.
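For the curious, the sweep itself is just a ping scan with greppable output, filtered down to hosts that answered - a rough sketch, with the subnet as a placeholder:

```python
# Ping-sweep a subnet with nmap and list the hosts that answered.
# Assumes nmap is installed; the subnet is a placeholder.
import subprocess

out = subprocess.check_output(["nmap", "-sn", "-oG", "-", "10.0.0.0/24"], text=True)
up = [line.split()[1] for line in out.splitlines() if "Status: Up" in line]
print(f"{len(up)} hosts up:", ", ".join(up))
```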

I also took some time to review the DNS records on Cloudflare, and should be a little closer to having proper DMARC/DKIM/SPF. Not perfect by any means, before anyone gets ideas. It's tough to get this crap done right.
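For anyone wanting to spot-check their own mail records, the SPF/DMARC/DKIM TXT entries are easy to pull with dnspython. The domain and DKIM selector below are placeholders (the selector varies per mail provider):

```python
# Spot-check SPF/DMARC/DKIM TXT records - domain and selector are placeholders.
import dns.resolver

domain = "example.net"
selector = "default"
names = [domain, f"_dmarc.{domain}", f"{selector}._domainkey.{domain}"]

for name in names:
    try:
        for rr in dns.resolver.resolve(name, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith(("v=spf1", "v=DMARC1", "v=DKIM1")):
                print(f"{name}: {txt}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{name}: no record found")
```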

Getting Nextcloud's notify_push to work is proving to be very tough. I was hoping to have that and Spreed/Talk HPB running by the end of the year, but I've come to the conclusion that it probably won't happen.

Started looking into ARM servers, just to see what's available on the used market. The answer is, nothing affordable - at least in my area. Was wondering if I could maybe play around with ESXi on ARM64, maybe have an AOSP VM or four? Yeah, that's out the window.

Still waiting to move to the Gen9 in the future...

TopHatProductio
Hot Shot

Converted the CentOS Stream VM to Rocky Linux.

TopHatProductio
Hot Shot

I purchased a 2nd FirePro S9300 X2. Can't wait to see if I can fit it in the DL580 Gen9...

TopHatProductio
Hot Shot

I've come to the conclusion that VDI (for other users) will have to wait until I can get a second DL580 Gen9 after moving out. I'd end up replacing the current FirePro S9300 x2 with a Radeon Pro W6800, and moving all of the FirePro S9300 x2's to the dedicated VDI host. A single DL580 Gen9 can power up to three of the FirePros, so six available GPUs total for the VDI host if I ever go for it. Assuming that I threw the same 10x HGST HUSMM8040ASS200/HUSMM8040ASS201's at this host, there'd be a little under 4TB of SAS storage available as well (math below). A mix of GPU-equipped and CPU-only VDI instances (Windows/Linux only) would be possible.
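The back-of-the-envelope math for that host, for anyone checking (raw capacity, before any RAID or filesystem overhead):

```python
# GPU and raw-storage math for the hypothetical VDI host.
cards, gpus_per_card = 3, 2
print(cards * gpus_per_card, "GPUs")  # 6 GPUs total

drives, size_gb = 10, 400
raw_gb = drives * size_gb             # 4000 GB (decimal)
print(f"{raw_gb} GB raw = {raw_gb * 10**9 / 2**40:.2f} TiB")  # ~3.64 TiB, "a little under 4TB"
```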

TopHatProductio
Hot Shot

After fixing a pesky issue with static routes on the Rocky Linux VM, I've managed to get Wazuh working - for the most part. Can't seem to get a successful makepkg run on Artix OpenRC today, so that's a major impediment. That VM hosts all of my Docker containers. If that had succeeded, I'd then have to figure out the init script situation. Someone on Discord suggested pulling the Gentoo script. While I wouldn't usually try it, I don't think I have many other options - aside from writing one by hand. That's always fun...

Once everything with Wazuh has been resolved, I may kick out Malwarebytes in its entirety...

TopHatProductio
Hot Shot

I've got 2x Silicom PE310G4SPI9L-XR-CX3's on the way. May install them in March if I have a chance (waiting for full-height brackets to arrive)...