VMware Cloud Community
TopHatProductio
Hot Shot

New Server Project

Hello! It's been a while since I last posted here with my own topic. I now have a dedicated ESXi server in the works. The server project is meant to replace (and exceed) my previous workstation - a Dell Precision T7500. Here are the specs for the hardware:

 

HPE ProLiant DL580 G7

 

 

    OS   :: VMware ESXi 6.5u3 Enterprise Plus
    CPU  :: 4x Intel Xeon E7-8870s (10c/20t each; 40c/80t total)
    RAM  :: 256GB (64x4GB) PC3-10600R DDR3-1333 ECC
    PCIe :: 1x HP 512843-001/591196-001 System I/O board + 
                1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board
    GPU  :: 1x NVIDIA GeForce GTX Titan Xp +
                1x AMD FirePro S9300 x2 (2x "AMD Radeon Fury X" GPUs on one card)
    SFX  :: 1x Creative Sound Blaster Audigy Rx
    NIC  :: 1x HPE NC524SFP (489892-B21) +
                2x Silicom PE310G4SPI9L-XR-CX3s
    STR  :: 1x HP Smart Array P410i Controller (integrated) +
                1x HGST HUSMM8040ASS200 MLC 400GB SSD (ESXi, vCenter Appliance, ISOs) + 
                4x HP 507127-B21 300GB HDDs (ESXi guest datastores) +
                1x Western Digital WD Blue 3D NAND 500GB SSD + 
                1x Intel 320 Series SSDSA2CW600G3 600GB SSD +
                1x Seagate Video ST500VT003 500GB HDD
    STR  :: 1x LSI SAS 9201-16e HBA SAS card +
                1x Mini-SAS SFF-8088 cable + 
                        1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) + 
                                4x HITACHI Ultrastar HUH728080AL4205 8TB HDDs +
                                4x IBM Storwize XIV v7000 98Y3241 4TB HDDs
    I/O  :: 1x Inateck KU8212 (USB 3.2) +
                1x Logitech K845 (Cherry MX Blue) +
                1x Dell MS819 Wired Mouse
            1x Sonnet Allegro USB3-PRO-4P10-E (USB 3.X) +
                1x LG WH16NS40 BD-RE ODD
    PRP  :: 1x Samsung ViewFinity S70A UHD 32" (S32A700)
            1x Sony Optiarc Blu-ray drive
    PSU  :: 4x HP 1200W PSUs (441830-001/438203-001)

 

 


The details for the ProLiant DL380 Gen9 will appear here once data migration is complete. VMware Horizon (VDI) will have to wait for a future phase (if implemented at all). The current state of self-hosted VDI is Windows-centric, with second-class support for Linux and no proper support for macOS.

The planned software/VM configurations have been moved back to the LTT post, and will be changing often for the foreseeable future.

Product links and details can be found here.

 

ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it. No harm done. A small amount of thin provisioning/overbooking (RAM only) won't hurt. macOS and Linux would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for best compatibility and stability, but market forces originally prevented this. Windows 10 gets the Audigy Rx and the Titan Xp. The macOS and Linux VMs get whatever audio the FirePro S9300 x2 can provide. The whole purpose of Nextcloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though).

 

There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).

 

P.S. Out of all the sites I've used, this forum has one of the best WYSIWYG editors I've come across in a while 🙂

Kudos to the devs!

259 Replies
TopHatProductio
Hot Shot

Just removed YaCy from the project, in favour of researching YaCy Grid. Here's hoping I can get it working in a shared environment...

TopHatProductio
Hot Shot

I currently have a PCIe WiFi NIC coming in the mail. I also have a pair of Ethernet NICs sitting in inventory, and the server already has a SolarFlare SFN5322F in it. What if I threw FRRouting onto a Linux VM and passed the mentioned NICs through to it? Sounds like a virtual managed switch in the making. I could have the Linux VM use the wireless NIC to connect to the house WiFi on one network (192.168.1.0), sitting at an arbitrary address (perhaps 192.168.1.2), and then use the wired NICs for an internally-managed network (10.0.0.0). I'd set up the Linux VM as the default gateway (maybe 10.12.7.1) and have it handle DHCP and internal DNS. The last step would be to route all outbound traffic from clients on 10.0.0.0 through 10.12.7.1 => 192.168.1.2. All outbound traffic from 10.0.0.0 clients would then appear to come from 192.168.1.2, which sounds a lot like NAT (many clients/private IPs behind one gateway/public IP).

From there, I'd set up forwarding rules and throw the Linux VM at 192.168.1.2 into the DMZ (since port forwarding on the new ISP router is utter garbage for some reason). That would kill off the need for a router/extender in my room. Also still need to work on this. The rack-mounting kit for my server is ~200 USD by itself - yikes...
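For the NAT piece of that plan, the core is just kernel forwarding plus a masquerade rule on the WiFi side; FRRouting would handle the routing protocols on top of that, and something like dnsmasq could cover DHCP/DNS (dnsmasq being my own suggestion here, not a settled part of the plan). A rough Python sketch, assuming the passed-through WiFi NIC shows up as wlan0 and the internal wired NIC as eth1 (the real names will differ):

# Sketch: forwarding + masquerade for the router VM (run as root).
# Interface names are assumptions; FRR, DHCP, and DNS are configured separately.
import subprocess

WAN_IF = "wlan0"  # passed-through WiFi NIC, 192.168.1.2 on the house network
LAN_IF = "eth1"   # wired NIC facing the internal 10.x network

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the kernel route packets between the two NICs.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# Outbound traffic leaving via WiFi appears to come from 192.168.1.2 (NAT).
run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN_IF, "-j", "MASQUERADE"])

# Allow LAN -> WAN, plus return traffic for established connections.
run(["iptables", "-A", "FORWARD", "-i", LAN_IF, "-o", WAN_IF, "-j", "ACCEPT"])
run(["iptables", "-A", "FORWARD", "-i", WAN_IF, "-o", LAN_IF,
     "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])

These iptables rules don't persist across reboots, so on the actual VM they'd need to live in a startup service or an iptables-persistent equivalent.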

TopHatProductio
Hot Shot

It's been a slow weekend playing with the server. On Thursday, I couldn't get anything done because of New Year's (which I am fine with). On Friday, I slept in due to how late I had stayed up, and then had surprise visitors. Didn't get any work done that day, since I was busy keeping the visitors' kids out of the room. On Saturday, I finally got to throw in the HP NC524SFP NIC (along with its memory module). Once they were attached to the SPI board, I fired up the server and checked to see whether the 16TB drive cage and ~1TB Virtual Flash Resource Pool showed up in ESXi - which they did.

FYI, just about every time I add new hardware to the DL580 G7, I check for those two things - strangely enough, they tend to act as immediate indicators that something is wrong, even when no other warning signs are present (and there rarely are). vCenter has thrown an occasional warning, but nothing of consequence from what I've seen thus far.

After that, I spent most of last night changing my AD and DNS settings to prepare for adding my first devices to AD. That went on until close to midnight, and it's still not quite done. Today, I replaced the SolarFlare SFN5322F with an HPE 641255-001 (PCIe ioDuo MLC I/O Accelerator) - a gutsy move, given how finicky the server can be about new hardware. At first, only 2 of the 4 SAS HDDs showed up in ESXi. After a reboot, and letting the server warm up for a bit, all storage devices and new components showed up. So far, so good!
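As an aside, the "did everything show up" check can also be scripted instead of eyeballed in the UI. Here's a minimal pyVmomi sketch that lists the SCSI disks an ESXi host currently sees (the hostname and credentials below are placeholders, not my actual details):

# Sketch: list the SCSI disks an ESXi host currently sees (pyVmomi).
# Hostname and credentials are placeholders for the real environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # host uses a self-signed cert
si = SmartConnect(host="esxi.example.local", user="root",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for lun in host.config.storageDevice.scsiLun:
            if isinstance(lun, vim.host.ScsiDisk):
                size_gib = lun.capacity.block * lun.capacity.blockSize / 1024**3
                print(f"  {lun.displayName}  {size_gib:.0f} GiB")
finally:
    Disconnect(si)

Using a container view over vim.HostSystem means the same script works whether it's pointed at the host directly or at vCenter.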

However, due to how slow testing has been, I had to put off testing the Tesla K80s and the DERAPID PCE-AX200T wireless NIC. If I can get the PCE-AX200T working, the Linux VM is definitely going to run an FRRouting instance. Still need to figure out the vCenter startup time issue. At least I can start the 10GbE transition soon...

 

https://www.youtube.com/watch?v=BsHh6jOhrxI

 

TopHatProductio
Hot Shot

On to the next task!

Initial hardware testing is coming close to an end...

TopHatProductio
Hot Shot

Just attached the rail kit to the server, in preparation for the rack that's coming in the mail this week. Can't wait to take photos of the finished result...

TopHatProductio
Hot Shot

Getting ready to kick ejabberd off Windows Server, due to reliability issues observed during initial testing. It will probably move to the Arch VM instead. Also need to upgrade the vCenter Appliance from 6.5 to 6.7u3, due to the FLEX UI getting EOL'd...

https://www.reddit.com/r/activedirectory/comments/kyxf73/setting_up_my_first_active_directory/

 

TopHatProductio
Hot Shot

From what I can tell, I may have to start from scratch with both vCenter and AD. But, if I manage to pull it off, I would have a few spare CPU cores and a datastore to use for something else.

Also, found this:

Time to see if I can find instructions for OSes outside of Windows...

TopHatProductio
Hot Shot

I would have held out on VCSA 6.5 indefinitely if its HTML5 UI had been able to manage Virtual Flash/Host Cache resource pools. As noted in past updates, the VCSA took anywhere from 20-45 minutes to initialise. And with the deprecation of the FLEX UI (reliant on Adobe Flash, which is unsupported in 2021), the now-neutered vCenter Server Appliance VM (6.5) had no practical place in this project. Without the option for an in-place upgrade, I couldn't simply move the existing install to VCSA 6.7 either. It has been replaced, and will soon be decommissioned. vCenter has been moved to the Windows Server 2016 VM for practicality reasons. The next step is to rebuild the failed MS AD instance and promote a new domain controller. That will happen later this week. Hopefully, things will go a bit better this time around...

TopHatProductio
Hot Shot

Alright - everything is almost ready for Active Directory setup, attempt #2. Not only did I kick ejabberd over to Linux (due to issues when installed on Windows), but I also had to re-install multiple other applications; demoting the AD DC appears to be what led to that. So, in some sense, I get to start from scratch. Unlike last time, I still need to make a new SQL db for hMailServer, but that should be relatively easy. I've already installed vCenter Server, and it starts up way faster than the VCSA did - Windows doesn't even take noticeably longer to boot, from what I've seen. I also had what appears to have been an unexpected part failure: the Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable. Got that replaced, and I can now see all of my SAS HDDs once again. The last step is to (re-)promote the DC and test client devices. This time, I'll set the intended domain from the start (instead of setting it to something else by accident and having to change it twice later).
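Speaking of the hMailServer db - creating the empty database is a single statement, and hMailServer's own setup handles the tables from there (if memory serves). A quick pyodbc sketch, with the server name, login, and database name all being placeholders:

# Sketch: create an empty SQL Server database for hMailServer to point at.
# Server, credentials, and database name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=winserver.example.local;UID=sa;PWD=********",
    autocommit=True,  # CREATE DATABASE cannot run inside a transaction
)
conn.execute("CREATE DATABASE hMailServer")
conn.close()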

continuum
Immortal

Due to your "not to be discussed here" flag - just for your information:

Using the Unlocker patch on recent VMFS 6 volumes can cause vmfs-locks of type: abcdef03.
I would not use it on production hosts  ...

Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

TopHatProductio
Hot Shot

Thank you for letting me know. I will keep this in mind for when the time comes 🙂 

TopHatProductio
Hot Shot

After 3 failed attempts, success!

 

Backup Complete!

TopHatProductio
Hot Shot

I had to disable the vCenter Server for Windows instance to get the Active Directory instance installed, but I didn't think to change any networking settings on vCenter Server (embedded - 6.7) before disabling it. With the help of a friend, I managed to fix my DNS and get the Active Directory instance working - it came down to some missing NS records. Once I cleaned up the DNS, I was able to get a client device joined to the AD. Now I have to figure out how to make Windows clients connect to the VPN before attempting LDAP sign-in, since the AD is VPN-locked. Once I figure that out, I'll be able to add any devices I want.

I also have to see if I can bind vCenter Server for Windows to a single IP address while it's disabled. Otherwise, I'll have to resort to using the VCSA again - and who knows how that will go in the long term. The last time I used it, it threw up more warnings and errors than ever before, which leads me to question its overall longevity and performance. ESXi itself was just fine when I monitored the performance metrics, and the server was nowhere near full utilisation - ever. I'm almost tempted to go without vCenter and Virtual Flash because of the trouble, but then I'd lose out on features, and the funds I used to acquire vCenter in the first place. At least I can start focusing on the rest of the project more in the near future...

Without vCenter, how will I be able to add the Precision T7500, as an ESXi host, to my datacentre? For vMotion?

Also have to figure out whether (and if so, how) to destroy the Virtual Flash resource pool...
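In case anyone hits the same DNS snag: the records clients rely on to find a DC are the NS entries for the zone plus the SRV entries under _msdcs, and both are easy to verify from a script. A rough dnspython sketch (the domain name and DNS server address are placeholders, not my real ones):

# Sketch: verify the DNS records AD clients depend on (dnspython).
# Domain name and DNS server address are placeholders.
import dns.resolver

DOMAIN = "corp.example.local"
resolver = dns.resolver.Resolver()
resolver.nameservers = ["10.0.0.10"]  # the DC / internal DNS server

# NS records for the AD zone (the piece that was missing in my case).
for rr in resolver.resolve(DOMAIN, "NS"):
    print("NS :", rr.target)

# SRV records clients use to locate a domain controller for LDAP/Kerberos.
for name in (f"_ldap._tcp.dc._msdcs.{DOMAIN}", f"_kerberos._tcp.{DOMAIN}"):
    for rr in resolver.resolve(name, "SRV"):
        print("SRV:", name, "->", rr.target, rr.port)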

TopHatProductio
Hot Shot

Just bought four of these:

HITACHI Ultrastar HUH728080AL4205 (HGST)

32TB upgrade, here I come...

TopHatProductio
Hot Shot

vCSA 6.5 is practically neutered without Adobe Flash, and the HTML5 UI was almost useless until at least 6.7u3. The settings I do have in the current install are mostly small ones, but they could only be reversed via the FLEX UI (Flash). vCenter also doesn't allow for in-place upgrades. So, it's time to kill the current vCSA and start from scratch. If I had known to look out for the death of Flash, I could have been ahead of this, but I got held up by other responsibilities. Today, I'm re-installing the vCSA - it's going to be a long day...

TopHatProductio
Hot Shot

Well, I have more news. I managed to kill the old vCSA (6.5) instance and replace it with a newer (6.7) version. The newer version has a dark theme - nice. It's also pretty well organised, and it connected to my ESXi server with no issues. However, Virtual Flash is pretty much dead, so I will have to assign those SSDs to something else now. Perhaps I can start setting up the next VM...

 

On a side note, the current Reddit project mirror is ded again - because those expire every 6 months, regardless of activity. I think it'll stay ded this time. Not in the mood to make yet another one...

TopHatProductio
Hot Shot

Currently installing MS SQL Server 2019 for a test drive. Then I'll be migrating over to the 8TB SAS HDDs completely.

Gonna have to redo the backups - I had no way of imaging the 4TB HDD before swapping in the 8TB HDD, and enough changes have been made that the old backup is no longer valid anyway.
