
New Server Project

Hello! It's been a while since I last posted a topic of my own here. I now have a dedicated ESXi server in the works, and I plan to start running it 24/7 by the end of this year or early next year (2021). Here are the specs for the hardware:

 

CSE  :: HPE ProLiant DL580 G7
CPU  :: 4x Intel Xeon E7-8870 (10c/20t each; 40c/80t total)
RAM  :: 128GB (32x4GB) DDR3-1333 PC3-10600R ECC
STR  :: 1x HP 518216-002 146GB HDD (ESXi, VMware Linux Appliance, system ISOs) +
        1x Seagate Video ST500VT003 500GB HDD (Remote Development VM) +
        4x HP 507127-B21 300GB HDDs +
        1x Western Digital WD Blue 3D NAND 500GB SSD (Virtual Flash) +
        1x Intel 320 Series SSDSA2CW600G3 600GB SSD (VFF)
        1x LSI SAS 9201-16e SAS HBA (4-HDD DAS) +
        1x Mini-SAS SFF-8088 to SATA forward breakout x4 cable +
        1x Kingwin MKS-435TL (4x 3.5in HDD cage) +
        4x IBM Storwize V7000 98Y3241 4TB HDDs
PCIe :: 1x HP 512843-001/591196-001 System I/O board +
        1x HP 588137-B21; 591205-001/591204-001 PCIe riser board
GPU  :: 1x NVIDIA GeForce GTX 1060 6GB +
        1x NVIDIA GRID K520
SFX  :: 1x Creative Sound Blaster Audigy Rx
NIC  :: 1x HPE NC524SFP (489892-B21)
I/O  :: 1x HPE PCIe ioDuo MLC I/O Accelerator (641255-001)
FAN  :: 4x Arctic F9 PWM 92mm fans *
PSU  :: 4x 1200W server PSUs (HP 441830-001/438203-001)
PRP  :: 1x Dell MS819 wired mouse
ODD  :: 1x Sony Optiarc Blu-ray drive

 

Parts marked with * are already in-house, but require further planning/modification before they can be added to the server.

 

As of now, the fans aren't really required for functionality. They were meant to help quiet the server down a bit, but they require some modification to work. This part can wait.

 

 

Here is the current software configuration plan for the server:

 

*  Temporary task that will be replaced by a permanent, self-hosted solution

** Can benefit from port forwarding, but will be primarily tunnel-bound

^  Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet

+ Active Directory enabled - Single Sign On (SSO)

 

 

Here is the current resource allocation plan for the server:

  • VMware NIX Appliance     :: 24/7 - true,  dedicatedHDD - false, dedicatedGPU - false,  2c/4t  + 12GB
  • Temporary/Testing VM     :: 24/7 - false, dedicatedHDD - false, dedicatedGPU - true,  12c/24t + 32GB *
  • Windows Server 2016      :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - false,  8c/16t + 16GB
  • macOS Server 10.14.x     :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - true,   8c/16t + 16GB  (not to be discussed here)
  • Artix Linux - Xfce ISO   :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - false,  8c/16t + 16GB
  • Windows 10 Enterprise    :: 24/7 - false, dedicatedHDD - true,  dedicatedGPU - true,  12c/24t + 32GB *
  • Remote Development VM    :: 24/7 - false, dedicatedHDD - true,  dedicatedGPU - true,  12c/24t + 32GB *

 

VMs marked with an * cannot be run at the same time; only one of them may run at any given moment. The macOS and Linux VMs would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows 10 gets the Creative Audigy Rx, while the macOS and Linux VMs get whatever audio the GRID K520 provides (either that or a software solution). Windows 10, Remote Development, and the Temporary/Testing VM will be put to sleep (or powered off) until they are needed (Wake-on-LAN), since they don't host any essential services.
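Since Wake-on-LAN came up: a magic packet is just 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, sent as a UDP broadcast (commonly to port 7 or 9). Below is a minimal Python sketch; the MAC address is a placeholder, and for the VMs themselves a vSphere power-on call may end up being the more practical route.

```python
# Minimal Wake-on-LAN sender (sketch). The MAC below is a placeholder.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a WoL magic packet: 6x 0xFF + the MAC repeated 16 times."""
    mac_hex = mac.replace(":", "").replace("-", "")
    payload = bytes.fromhex("FF" * 6 + mac_hex * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

wake("00:50:56:aa:bb:cc")  # hypothetical MAC for the Windows 10 VM's NIC
```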

 

There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).

 

P.S. Out of all the sites I've used, this forum has one of the best WYSIWYG editors I've come across in a while :) Kudos to the devs!

 

 

 

Message was edited by: TopHatProductions115

This mirror is no longer frozen. Please refer to the first reply from 01/03/2021 for more information...

48 Replies

Now to address the SSD issue in the background, while I look at security patches and initial VM setup.


Currently trying to make a new datastore via SSH. More info here:
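For anyone following along, the usual manual route over SSH is to lay down a GPT/VMFS partition with partedUtil and then format it with vmkfstools. Here's a rough Python/paramiko sketch of that flow; the host, credentials, device path, and datastore name are all placeholders, and it may not match the exact steps I end up using:

```python
# Rough sketch: create a new VMFS datastore over SSH (all names/paths are placeholders).
import paramiko

HOST, USER, PASSWORD = "esxi.local", "root", "********"
DISK = "/vmfs/devices/disks/naa.600508b1001c0000000000000000abcd"  # hypothetical device
NAME = "datastore-500gb"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# 1) Inspect the current partition table. A VMFS partition still has to be created
#    first, with something like:
#      partedUtil setptbl <disk> gpt "1 2048 <end-sector> AA31E02A400F11DB9590000C2911D1B8 0"
#    where the end sector comes from 'partedUtil getUsableSectors <disk>'.
_, out, err = client.exec_command(f"partedUtil getptbl {DISK}")
print(out.read().decode(), err.read().decode())

# 2) Format partition 1 as VMFS6 and label it as the new datastore.
_, out, err = client.exec_command(f"vmkfstools -C vmfs6 -S {NAME} {DISK}:1")
print(out.read().decode(), err.read().decode())

client.close()
```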


While I'm waiting on comments for the previous issue, and for a few ISOs to upload to my server, I can start investigating this:

Gotta search for the patches through this page, entering the details mentioned in the last three KB (knowledge base) pages:

On a side note, I also ran into this when setting up my first VM:

Reserve Memory beforehand, I guess
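With a passthrough GPU, ESXi requires the VM's full memory to be reserved, which is presumably what that warning was about. If you'd rather script the reservation than click through the UI, here's a rough pyVmomi sketch; the vCenter hostname, credentials, and VM name are placeholders, not my actual setup:

```python
# Sketch: lock a VM's memory reservation to its full size via pyVmomi (placeholder names).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "Windows 10 Enterprise")  # placeholder VM name

    spec = vim.vm.ConfigSpec()
    spec.memoryReservationLockedToMax = True  # "Reserve all guest memory (All locked)"
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```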


Just finished installing the vCenter Appliance, and will be using the FLEX (Flash web) client to set up the SSD for Virtual Flash in a bit. Stay tuned :) One step closer...

TXP-Network Does :: ESXi Test Stream! - YouTube


Solved the SSD/Virtual Flash issue! Now onto the next one...


Removed the HP 491838-001 (NC375i) (https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01951393) due to space constraints, increased RAM to 128GB, purchased 4TB HDDs to replace the 2TB HUA722020ALA330s, and delayed the addition of the SolarFlare NIC.


Currently working on DNS, after which I'll focus on setting up the first VPN solution - SoftEther.


Just got a new GPU in the mail, which may end up replacing the GTX 1060 6GB. Still troubleshooting this issue...


The K80s are coming...


I've got a GRID K520 coming in the mail in about 2 weeks, to replace the Tesla K10. Perhaps I can buy a GRID K2 in the near future, so that I can have all three of the major variants for this card. GRID K520 looks like a GeForce card from inside a VM, if my memory isn't failing me. GRID K2 would be the Quadro variant. Tesla K10 is a pure compute version. I wonder if anything like that exists for Tesla K80...


Just solved another looming issue for the server project. Now to get that SSD working and added to the Virtual Flash resource pool...

TXP-Network Does :: ESXi Server - HBA Storage Array Update! - YouTube


Time for a long-overdue project update. I'm omitting a lot of steps/details here, for relative brevity. A friend of mine from Discord (the same one who was kind enough to help me troubleshoot many of the issues I've encountered) had me run a Linux LiveCD on the server to troubleshoot the LSI HBA. For those of you who did not know, the LSI HBA wasn't working as expected until a few hours ago (late last night). I tested it in my current workstation (Precision T7500 - Windows 10), the server (DL580 G7 - ESXi 6.5u3), and even my laptop (EliteBook 8770w - Windows 10). On the T7500, the HBA showed up, but none of the 4TB hard drives did. The same was true on the laptop and the server.

After a bit of Googling (as the cool kids say), I decided it might behoove me to flash the card with the IT firmware, to see if that would fix it. I did so from my laptop, using a powered PCIe dock (to prevent further downtime on the T7500, which is running a Minecraft server) and a GUI application called MegaRAID Storage Manager. The HBA was on v17.x and is now on v20.x, and the drives finally appeared in Windows Device Manager - though they didn't stay there for long, popping in and out sporadically. I was instructed to reboot for the firmware update to take hold, but after the reboot MegaRAID Storage Manager could no longer connect to the local server. That meant that, if the firmware I flashed was the wrong one, I'd have to resort to sas2flash.

After no luck checking on the HBA from my laptop, I put it in the server with the Linux LiveCD (as mentioned earlier). The LiveCD was running an older build of Manjaro and managed to see all of the drives in GParted. However, we were unable to get SMART data for most of the HDDs; if you look closely at the HDD models, you may or may not be able to tell why. While I was in the LiveCD, I also tried putting a GPT partition table on the Intel SSD, since messing with it in Windows simply did not work for some reason. A short while later, we tried the latest Manjaro LiveCD available (Manjaro being my preferred systemd distro). That one didn't see the drives at all, but it did still see the HBA.

At this point, I saw no other way to validate the HDDs further, so I decided to test them in ESXi and try to pull SMART data from esxcli. The drives showed up in ESXi and even let us pull SMART data - but it was limited, and in a different format than most common drives on the market. I was able to add the Intel SSD to the Virtual Flash pool for once, though. As such, this is strictly a partial victory: the drives are presumably ready for use, but we don't know how they are doing - which is very different from all of my previous experiences, where I could pull up SMART data immediately after installing the drives. The game is afoot.
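For reference, the esxcli side of this boils down to "esxcli storage core device list" to enumerate devices and "esxcli storage core device smart get -d <device>" for each one. Here's a rough Python/paramiko sketch that loops over the devices from a workstation; the host and credentials are placeholders:

```python
# Sketch: dump SMART data for every disk ESXi can see, over SSH (placeholder host/credentials).
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi.local", username="root", password="********")

def run(cmd: str) -> str:
    _, stdout, _ = client.exec_command(cmd)
    return stdout.read().decode()

# Device identifiers (naa.* / t10.* / mpx.*) appear at the start of a line in the list output.
devices = run("esxcli storage core device list | grep -E '^(naa|t10|mpx)'").split()
for dev in devices:
    print(f"=== {dev} ===")
    print(run(f"esxcli storage core device smart get -d {dev}"))

client.close()
```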


On a side note, the results of last night's livestreaming attempt are tempting me to make YouPHPTube part of the project again. If this keeps up, I might actually go for it...


ToDo List for the next few days:

  • Figure out Split Horizon DNS records (Technitium) - see the sketch after this list
  • Set up ejabberd and hMailServer
    • FQDNs and subdomains
    • AD/LDAP integrations
  • Set up Artix Linux VM
    • secondary Technitium instance (AD DNS forwarding)
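
For the split-horizon DNS item, one quick sanity check once the records exist is to resolve the same name against the internal Technitium instance and a public resolver and compare the answers. A small dnspython sketch, with placeholder names and IPs:

```python
# Sketch: confirm split-horizon behaviour by resolving the same name against the
# internal (Technitium) resolver and a public one. All names/IPs are placeholders.
import dns.resolver  # pip install dnspython

NAME = "mail.example.net"
RESOLVERS = {"internal (Technitium)": "192.168.1.10", "public": "1.1.1.1"}

for label, server in RESOLVERS.items():
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    try:
        answers = [a.to_text() for a in r.resolve(NAME, "A")]
    except Exception as exc:
        answers = [f"lookup failed: {exc}"]
    print(f"{label:>22}: {NAME} -> {', '.join(answers)}")
```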