VMware Cloud Community
virtualoverlord
Contributor

Dual-CPU dilemma

Hi all,

We have a bit of a dual-CPU dilemma, and I’m clutching at straws a bit, as I’m fairly sure I already know what the answer will be. Basically we have 4 systems that will be part of an Enterprise Plus cluster. We have the 4 x 1-CPU licenses, and all is well. However, we also bought some large (12.8TB each) NVMe U.2 drives (for HCI), and found that the systems weren’t seeing any of the drives. After some debugging, it turns out that the server physically attaches the U.2 ports to the *second* CPU socket, so that socket has to be populated! That means a second CPU will be needed in each system to use the drives.
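
In case it helps anyone else debugging the same thing, here is a rough sketch of the kind of check I mean, run from the ESXi shell (which ships a Python interpreter). It lists whatever NVMe controllers the host can actually see and which NUMA node / CPU they hang off; if nothing shows up at all, the U.2 lanes are probably wired to an unpopulated socket. The esxcli field names ("Device Class Name", "NUMA Node") are from my build and may differ on yours:

import subprocess

# List every PCI device the host can see, one multi-line block per device.
out = subprocess.check_output(["esxcli", "hardware", "pci", "list"]).decode()

for block in out.split("\n\n"):
    fields = {}
    for line in block.splitlines():
        key, sep, val = line.partition(":")
        if sep:
            fields[key.strip()] = val.strip()
    # Keep only NVMe controllers (the class-name wording varies by release).
    if "Non-Volatile" in fields.get("Device Class Name", ""):
        print(fields.get("Address"), fields.get("Device Name"),
              "-> NUMA node", fields.get("NUMA Node"))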

The hardware cost is annoying but not really an issue (about $8K for 4 more Xeon Scalable CPUs). The real dilemma is the ESXi licensing! Doubling the CPU count, just to get the drives to work, adds over $17K in license keys to the cost. Had we known this earlier we would have followed a different strategy, but the drives are non-returnable at this point. I’ve got some other ways around this (all involving extra physical systems, which keeps the license issue out of it), but they are all sub-optimal.

Annoyingly, neither the reseller nor the hardware vendor caught this issue during validation, and of course neither is willing to eat the $25K cost (hardware plus licenses).

So, any ideas anyone? I don’t believe we can virtually “turn off” one of the CPUs to reduce the licensing, so even though we don’t need the extra cores at all, VMware will still want their per-socket license (the BIOS has no option to disable or hide the second CPU’s cores from the OS, and I don’t really want to run ESXi on top of KVM just to present fewer virtual CPUs)!
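
For reference, as far as I can tell the per-socket licensing just follows whatever the host reports as populated CPU packages. A quick way to see what the host will report (again just a sketch; field names are from my build):

import subprocess

# Show the socket / core / thread counts the host itself reports.
info = subprocess.check_output(["esxcli", "hardware", "cpu", "global", "get"]).decode()
for line in info.splitlines():
    if line.strip().startswith(("CPU Packages", "CPU Cores", "CPU Threads")):
        print(line.strip())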

Thanks!

2 Replies
berndweyand
Expert

I had this years ago with Dell servers - the low-profile slots were only usable with a second CPU installed.

What server hardware do you have?

virtualoverlord
Contributor

Hi,

The U.2 ports in this case are the hot-swap 2.5” bays on the front of Supermicro SBI-4429-T2N blades...

The drive bays can accept SATA, SAS or NVMe U.2 drives. The drives alone were over $25K, and since they aren’t returnable, it would be a pretty big hit to just swap them for lower-performing SAS SSDs...

I’m thinking I’ll have to go with either extra dual-CPU blades (with lesser Bronze CPUs) running bare-metal storage software, or the same thing in a completely external 1U/2U box.

That would be quite annoying, as I really wanted to leverage hyperconvergence, and external dedicated storage nodes aren’t very hyperconverged 😕

Thanks!
