Hello! It's been a while since I last posted here with my own topic. I now have a dedicated ESXi server in the works, and am planning to start using it 24/7 by the end of this year or early next year (2021). Here are the specs for the hardware:
CSE :: HPE ProLiant DL580 G7
CPU :: 4x Intel Xeon E7-8870s (10c/20t each; 40c/80t total)
RAM :: 128GB (32x4GB) DDR3-1333 PC3-10600R ECC
STR :: 1x HP 518216-002 146GB HDD (ESXi, VMware Linux Appliance, System ISOs) +
1x 500GB Seagate Video ST500VT003 HDD (Remote Development VM) +
4x HP 507127-B21 300GB HDDs +
1x Western Digital WD Blue 3D NAND 500GB SSD (Virtual Flash) +
1x Intel 320 Series SSDSA2CW600G3 600GB SSD (VFF) +
1x LSI SAS 9201-16e HBA SAS card (4-HDD DAS) +
1x Kingwin MKS-435TL (4x 3.5in HDD cage) +
4x IBM Storwize V7000 98Y3241 4TB HDDs
GPU :: 1x NVIDIA GRID K520
SFX :: 1x Creative Sound Blaster Audigy Rx
NIC :: 1x HPE NC524SFP (489892-B21)
I/O :: 1x HPE PCIe ioDuo MLC I/O Accelerator (641255-001)
ODD :: 1x Sony Optiarc Blu-ray drive
Parts marked with * are already in-house, but require further planning/modification before they can be added to the server.
Here is the current software configuration plan for the server:
* Temporary task that will be replaced by a permanent, self-hosted solution
** Can benefit from port forwarding, but will be primarily tunnel-bound
^ Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet
+ Active Directory enabled - Single Sign On (SSO)
Here is the current resource allocation plan for the server:
VMs marked with an * cannot be run at the same time; only one of them can be running at any given moment. The macOS and Linux VMs would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows 10 gets the Creative Audigy Rx, while the macOS and Linux VMs get whatever audio the GRID K520 provides (either that or a software solution). Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they are needed (Wake-on-LAN), since they don't host any essential services.
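Since the non-essential VMs will be woken over the network, here's a minimal sketch of a Wake-on-LAN sender. Note that on ESXi, WoL generally only reaches guests that are in standby/suspend with a vmxnet-family vNIC, not fully powered-off VMs. The MAC address below is hypothetical -- substitute the real vNIC MAC:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet: 6 bytes of 0xFF, then the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# Hypothetical MAC for the Windows 10 VM's vNIC -- substitute the real one:
# wake("00:50:56:ab:cd:ef")
```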
There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).
P.S. Out of all the sites I've ever used, this forum has one of the best WYSIWYG editors I've seen in a while. Kudos to the devs!
Message was edited by: TopHatProductions115
This mirror is no longer frozen. Please refer to the first reply from 01/03/2021 for more information...
While I'm waiting on comments for the previous issue, and a few ISOs to upload to my server, I can start working on investigating this:
I'll have to search for the patches through this page, entering the details mentioned in the last three KB pages:
On a side note, I also ran into this when setting up my first VM:
Reserve memory beforehand, I guess.
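For reference, "reserve memory beforehand" corresponds to the VM's memory reservation. A sketch of the relevant .vmx lines, assuming a 16 GB guest (values in MB; the supported route is the "Reserve all guest memory" checkbox in the VM's settings, which is what sched.mem.pin reflects):

```
memSize = "16384"
sched.mem.min = "16384"
sched.mem.pin = "TRUE"
```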
Just finished installing the vCenter Appliance, and will be using the FLEX (Flash web) client to set up the SSD for Virtual Flash in a bit. Stay tuned. One step closer...
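Aside from the FLEX client, ESXi can at least show whether the SSD is eligible for Virtual Flash from the host shell. A sketch (run on the ESXi host itself; output will vary by host):

```
# Virtual flash modules available on the host (vfc is the default)
esxcli storage vflash module list
# SSDs that ESXi considers eligible for the Virtual Flash resource
esxcli storage vflash device list
```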
Removed the HP 491838-001 (NC375i) (https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01951393) due to space constraints, increased RAM to 128GB, purchased 4TB HDDs to replace the 2TB HUA722020ALA330's, and delayed the addition of the SolarFlare NIC.
Many things happened in the past 12 days:
In addition to events on the software side of things, that is. I'll have to do a livestream sometime either today or tomorrow...
I've got a GRID K520 coming in the mail in about 2 weeks, to replace the Tesla K10. Perhaps I can buy a GRID K2 in the near future, so that I can have all three of the major variants of this card. The GRID K520 looks like a GeForce card from inside a VM, if my memory isn't failing me; the GRID K2 would be the Quadro variant, and the Tesla K10 is the pure compute version. I wonder if anything like that exists for the Tesla K80...
Just solved another looming issue for the server project. Now to get that SSD working and added to the Virtual Flash resource pool...
Time for a long-overdue project update. I'm omitting a lot of steps/details here, for relative brevity. A friend of mine from Discord (the same one who was kind enough to help me troubleshoot many of the issues I encountered) had me run a Linux LiveCD on the server to troubleshoot the LSI HBA. For those of you who did not know, the LSI HBA wasn't working as expected until a few hours ago (late last night). I tested it in my current workstation (Precision T7500 – Windows 10), the server (DL580 G7 – ESXi 6.5u3), and even in my laptop (EliteBook 8770w – Windows 10). On the T7500, the HBA showed up – but none of the 4TB hard drives did. The same went for the laptop and the server.

After a bit of Googling (as the cool kids say), I decided it might behoove me to flash the card with the IT firmware, to see if that would fix it. I did so from my laptop, using a powered PCIe dock (to prevent further downtime on the T7500, which runs a Minecraft server) and a GUI application called MegaRAID Storage Manager. The HBA went from v17.X to v20.X, and the drives finally appeared in Windows Device Manager – but they didn't stay there for long, popping in and out sporadically. I was instructed to reboot for the firmware update to take hold; after that reboot, MegaRAID Storage Manager could no longer connect to the local server. That meant that, if the firmware I had flashed was the wrong one, I'd have to resort to sas2flash.

After no luck checking on the HBA from my laptop, I decided to put it in the server, with the Linux LiveCD (as mentioned earlier). The LiveCD was running an older build of Manjaro, and managed to see all of the drives in GParted. However, we were unable to get SMART data for most of the HDDs. If you look closely at the HDD models, you may or may not be able to tell why.
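For reference, the sas2flash fallback mentioned above would look roughly like this from a DOS/UEFI/Linux shell (the firmware/BIOS filenames are hypothetical -- use the IT-mode package for the 9201-16e):

```
# List detected SAS2 controllers with current firmware/BIOS versions
sas2flash -listall
# Flash IT firmware and (optionally) the boot BIOS onto controller 0
sas2flash -o -c 0 -f 9201-16e_it.bin -b mptsas2.rom
```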
However, while I was in the LiveCD, I decided to also try putting a GPT partition scheme on the Intel SSD, since messing with it in Windows simply did not work for some reason. A short while later, we tried the latest Manjaro LiveCD available (Manjaro being my preferred systemd distro). That one didn't see the drives at all, but did still see the HBA. At this point, I saw no other way to validate the HDDs further, so I made the decision to test them in ESXi and try to pull SMART data from esxcli. The drives showed up in ESXi, and even allowed us to pull SMART data – but it was limited, and in a different format than most common drives on the market. I was able to add the Intel SSD to the Virtual Flash pool for once, though. As such, this is strictly a partial victory. We have the drives ready for use, presumably – but we don't know how the drives are doing, which is very different from all of my previous experiences, where I could pull up SMART data immediately after installing the drives. The game is afoot.
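The esxcli SMART pull described above looks roughly like this from the ESXi shell (the naa.* device identifier is hypothetical -- take it from the device list):

```
# Find the device identifiers for the attached disks
esxcli storage core device list | grep -i "naa."
# Pull (limited) SMART data for one of the 4TB drives
esxcli storage core device smart get -d naa.5000c50012345678
```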
On a side note, the results of last night's livestreaming attempt are tempting me to make YouPHPTube part of the project again. If this keeps up, I might actually go for it...
ToDo List for the next few days: