Enthusiast

New Server Project

Hello! It's been a while since I last posted here with my own topic. I now have a dedicated ESXi server in the works, and am planning to start using it 24/7 by the end of this year or early next year (2021). Here are the specs for the hardware:

 

CSE  :: HPE ProLiant DL580 G7
CPU  :: 4x Intel Xeon E7-8870 (10c/20t each; 40c/80t total)
RAM  :: 128GB (32x 4GB) DDR3-1333 PC3-10600R ECC
STR  :: 1x HP 518216-002 146GB HDD (ESXi, VMware Linux Appliance, system ISOs) +
        1x Seagate Video ST500VT003 500GB HDD (Remote Development VM) +
        4x HP 507127-B21 300GB HDDs +
        1x Western Digital WD Blue 3D NAND 500GB SSD (Virtual Flash) +
        1x Intel 320 Series SSDSA2CW600G3 600GB SSD (VFF)
DAS  :: 1x LSI SAS 9201-16e HBA (4-HDD DAS) +
        1x Mini-SAS SFF-8088 to SATA forward breakout x4 cable +
        1x Kingwin MKS-435TL (4x 3.5in HDD cage) +
        4x IBM Storwize V7000 98Y3241 4TB HDDs
PCIe :: 1x HP 512843-001/591196-001 System I/O board +
        1x HP 588137-B21 (591205-001/591204-001) PCIe riser board
GPU  :: 1x NVIDIA GeForce GTX 1060 6GB +
        1x NVIDIA GRID K520
SFX  :: 1x Creative Sound Blaster Audigy Rx
NIC  :: 1x SolarFlare SFN5322F
FAN  :: 4x Arctic F9 PWM 92mm fans *
PSU  :: 4x 1200W HP server PSUs (441830-001/438203-001)
PRP  :: 1x Dell MS819 wired mouse
ODD  :: 1x Sony Optiarc Blu-ray drive

 

Parts marked with * are already in-house, but require further planning/modification before they can be added to the server.

 

As of now, the fans aren't really required for functionality. They were meant to help quiet the server down a bit, but they require some modification to work. This part can wait.

 

 

Here is the current software configuration plan for the server:

 

*   Temporary task that will be replaced by a permanent, self-hosted solution
**  Can benefit from port forwarding, but will be primarily tunnel-bound
^   Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet
+   Active Directory enabled - Single Sign-On (SSO)

 

 

Here is the current resource allocation plan for the server:

VM                      | 24/7  | Ded. HDD | Ded. GPU | vCPU    | RAM
VMware NIX Appliance    | true  | false    | false    | 2c/4t   | 10GB
Temporary/Testing VM    | false | false    | true     | 12c/24t | 32GB *
Windows Server 2016     | true  | true     | false    | 8c/16t  | 12GB
macOS Server 10.14.x    | true  | true     | true     | 8c/16t  | 12-16GB (not to be discussed here)
Artix Linux (Xfce ISO)  | true  | true     | false    | 8c/16t  | 12GB
Windows 10 Enterprise   | false | true     | true     | 12c/24t | 32GB *
Remote Development VM   | false | true     | true     | 12c/24t | 32GB *
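As a quick sanity check on the plan, here's a small Python sketch. The numbers are copied straight from the table above (macOS row omitted), and the host totals are the 40c/80t and 128GB from the spec list; the mutual-exclusion rule for the starred VMs, spelled out just below, falls straight out of the thread count:

    # Numbers copied from the allocation table above; the macOS row is
    # left out ("not to be discussed here"). Host: 80 threads, 128GB RAM.
    HOST_THREADS, HOST_RAM_GB = 80, 128

    ALWAYS_ON = {                 # name: (vCPU threads, RAM in GB)
        "VMware NIX Appliance": (4, 10),
        "Windows Server 2016":  (16, 12),
        "Artix Linux (Xfce)":   (16, 12),
    }
    STARRED = {                   # the * VMs: at most one up at a time
        "Temporary/Testing VM":  (24, 32),
        "Windows 10 Enterprise": (24, 32),
        "Remote Development VM": (24, 32),
    }

    base_t = sum(t for t, _ in ALWAYS_ON.values())   # 36 threads
    base_r = sum(r for _, r in ALWAYS_ON.values())   # 34 GB

    for name, (t, r) in STARRED.items():
        fits = base_t + t <= HOST_THREADS and base_r + r <= HOST_RAM_GB
        print(f"{name}: {base_t + t}t / {base_r + r}GB -> {'fits' if fits else 'over'}")

    # Any two starred VMs together: 36 + 48 = 84 threads > 80,
    # hence only one of them can run at any given moment.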

 

VMs marked with an * cannot run at the same time; only one of them may be up at any given moment. macOS and Linux would have gotten a Radeon/FirePro (e.g., RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows 10 gets the Creative Audigy Rx, while the macOS and Linux VMs get whatever audio the GRID K520 provides (either that or a software solution). Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they are needed (Wake on LAN), since they don't host any essential services.
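Since those three VMs will be woken on demand, here's the sort of minimal Wake-on-LAN sender I have in mind, in Python. The MAC address is a placeholder, and WoL also has to be enabled on the vNIC/NIC in question:

    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a WoL magic packet: 6x 0xFF followed by the MAC repeated 16 times."""
        payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(payload) != 6:
            raise ValueError("MAC must be 6 bytes")
        packet = b"\xff" * 6 + payload * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    # Placeholder MAC for (say) the Windows 10 VM's NIC:
    send_wol("00:50:56:aa:bb:cc")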

 

There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).

 

P.S. Of all the sites I've used, this forum has one of the best WYSIWYG editors I've come across in a while :) Kudos to the devs!

 

 

 

Message was edited by: TopHatProductions115, This project mirror is open again

41 Replies

Enthusiast

Just a quick note that flash read cache is now deprecated - vFlash Read Cache Deprecation Announced - VMware vSphere Blog

Edited to add that you would be violating the macOS EULA, and I suspect VMware's EULA as well, by running macOS on non-Apple hardware.

It also looks like the latest version of ESXi that a DL580 G7 supports is 6.0 U3.
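If it helps once the box is up, here's a minimal pyVmomi sketch (pip install pyvmomi; the hostname and credentials below are placeholders) to confirm which version and build the host actually reports:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Lab host with a self-signed cert, so skip verification here.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.lab.local", user="root", pwd="********",
                      sslContext=ctx)
    about = si.content.about
    print(about.fullName)                 # e.g. "VMware ESXi 6.0.0 build-XXXXXXX"
    print(about.version, about.build, about.apiVersion)
    Disconnect(si)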

VMware Employee

Moderator: Running macOS on anything other than Apple hardware would be a violation of the Apple EULA and cannot be discussed here.

This post can remain in relation to all the other things you are doing, but please keep macOS out of any discussion.


Forum Usage Guidelines: https://communities.vmware.com/docs/DOC-12286
VMware Training & Certification blog: http://vmwaretraining.blogspot.com
Enthusiast

Understood. I'll keep macOS out of the discussion here. Thanks for the heads-up.

Enthusiast

The version(s) of ESXi that apply to my use case were released before the deprecation, weren't they? Did VMware go back and retroactively remove the feature from older ESXi images? Or is it just no longer officially supported? Just trying to make sure I know what I'll be facing.

Enthusiast

Deprecated means the feature will be removed in a future release; it remains available in earlier versions.

Out of interest, why are you choosing a DL580? Are you able to get one super cheap? I would have thought it would cost an absolute fortune to run.

Enthusiast

  1. Thanks for the clarification on deprecation.
  2. I've been using a Dell Precision T7500 with 48GB of DDR3 ECC and a mid-range graphics card. The DL580 G7 came to me for under 300 USD, and it can take a lot of the parts I already have on hand. Parts for it have also been super cheap lately, which lets me keep a spare-parts inventory in case something goes wrong. I'm also working toward using HPE-branded drives, to keep the fans from ramping up too easily. The only concern at the moment is power consumption, but my household is already looking into that.
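For the curious, here's the back-of-the-envelope math I'm working from; both the average draw and the electricity rate below are assumptions, not measurements of this particular DL580 G7:

    # Rough 24/7 running-cost estimate; both figures are assumptions.
    avg_draw_w = 500                  # assumed average wall draw
    price_per_kwh = 0.13              # assumed electricity price, USD/kWh
    kwh_per_month = avg_draw_w / 1000 * 24 * 30
    print(f"~{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * price_per_kwh:.2f}/month")
    # -> ~360 kWh/month, ~$46.80/month under these assumptions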
Enthusiast

Currently waiting on some HP-branded SAS drives for the server, since those should improve the acoustics (reduced sound output). Can't wait to test them when they arrive; then I'll be able to try a VMware ISO on the server. I'll be sure to document how that goes.

Enthusiast

Made some changes to my SAS HDD choices, for compatibility and acoustics reasons. While I could go and low-level format (LLF) the wacky NetApp drives I purchased, I'd still have to put up with a noisier server afterward. I'd rather move in a different direction and restrict that issue to my choices in PCIe cards instead. Also removed the old Hitachi HDD, since it didn't really belong in this project; it's SATA 1 or 2, iirc. Here are the items I kicked from the project:

  • (1x) 250GB HITACHI HTS542525K9SA00
  • (4x) 600GB HGST NetApp X422A-R5 SAS

Still looking to see if I can get the Dell mouse...

Enthusiast

Currently looking into making a custom ESXi 6.5 image for the DL580 G7, since official support was axed after 6.0. I already own the license, and I'd rather not waste it out of laziness. It wouldn't be the first time I've had to do something like this.

Enthusiast

Just removed the Tesla K10 from the project; it's been reduced to a spare component, for the sake of noise reduction and power concerns. Artix Linux is no longer in line to receive a GPU, and macOS will take over the F@H (Folding@home) role. If you have any questions, feel free to ask.

Enthusiast

Once I buy this cable (to power the HBA disk array), the server project will be ready to go. I should definitely list the E7-2870s for sale, since I can't use those with this server.

The interesting part is that it has Molex and other connectors on it too. Multipurpose...

Enthusiast

I've ordered the cable. Now I'm just waiting on the postal service to get it to me, so I can (possibly) begin the build this Wednesday; the last week will be spent in pre-installation delay. I should be able to test an ESXi 6.5 installation today, as long as nothing (and no one) interferes this afternoon...

Enthusiast

Delaying initial ESXi testing to this Thursday, since today got swamped with unexpected events.

Enthusiast

Currently looking into VMware Horizon 7, for app virtualisation. Time to see if I can beat Turbo/Spoon at their own game.

If I can move UnGoogledChromium and KeePass 2 to a remote instance, I'd call that the first step to success...

Enthusiast

Replaced the Rosewill RASA-11001 with a Kingwin MKS-435TL, since the Kingwin doesn't need Molex power and has a cleaner look to it.

Enthusiast

Grabbed the wired mouse. Now back to the waiting game, to see when everything will arrive in the mail...

Enthusiast

The new drive cage arrived via Amazon. There's only one item left that hasn't arrived in the mail yet - my new mouse...

Enthusiast

Onto the next issue:

Also have to look into this at some point (even though the server will be mostly hiding behind a VPN):
