Hello there. I have just received a 'new to me' server, with 2 x Xeon E5-2650 v2 processors and 64 GB of RAM.
I have not got deep into things yet; I've just installed ESXi on an SSD in the system, nothing else so far.
It installed fine and there were no warnings about end of life on these processors.
It launches fine, and whilst I have not created a VM yet, I can go through the entire VM setup procedure, assign CPU and RAM etc.
However, in the hardware section there are tonnes of greyed-out lines saying not capable of SR-IOV and passthrough, including:
v2/Xeon E5 Crystal Beach DMA channel x
v2/Xeon E5 PCI Express root
Intel C600/X79 series chipset controller
I feel this is something I should be concerned about, but I'm not sure how to proceed.
In the BIOS it says hyperthreading and the Intel virtualisation options are enabled.
Any thoughts would be appreciated.
PCIe device passthrough requires VT-d to be enabled. In some systems, VT-d is a separate option from VT-x that needs to be enabled in the UEFI/BIOS. It might be called VT-d, VT for Direct I/O, or VT for IOMMU.
As already mentioned, you don't want to be passing those devices (like the chipset) to a VM.
The concept of device passthrough is that the host (in this case ESXi) will not control the device (and thus not load its device drivers), and instead lets a VM (the recipient of the passthrough) control it as a native device through the guest OS.
Warning: the user interface that lets you enable device passthrough does not check whether the device being passed through is critical for ESXi itself. There have been more than a handful of posts here over the years where someone enables passthrough of the storage controller that their datastore is connected to, and they lose access to their datastore.
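If you want to see from the command line what the host actually detects before touching passthrough, something like the following (run in an ESXi shell / SSH session) should work; note that the `pcipassthru` namespace is only present on ESXi 7.0 and later, and exact output columns vary by version:

```shell
# List every PCI device the host detects, with vendor/device names
# and addresses, so you can identify chipset devices vs. real candidates
esxcli hardware pci list | less

# On ESXi 7.0+: show per-device passthrough state
# (older releases manage this through the host client UI instead)
esxcli hardware pci pcipassthru list
```

The greyed-out chipset entries you listed will show up here too; greyed-out just means "not eligible", not that anything is broken.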
Hi there both. Thanks for this, that puts my mind at rest; I don't intend to use it in this way.
I successfully loaded a Windows VM. It's doing that thing where I select four cores and it only shows two, but I know how to fix that.
I think there are more posts asking for help with GPU passthrough to Windows VMs of unsupported cards (such as Nvidia GeForce cards) than of cards that are on the VMware HCL for GPU passthrough.
it's doing that thing where I select four cores and it only shows two, but I know how to fix that
It depends on the Windows version and the number of virtual sockets.
For example, Windows 10 Professional can recognise only two sockets (whether a physical PC or a virtual machine). So if the four vCPUs are configured across four virtual sockets (i.e. 1 core per socket), it will only recognise two vCPUs, as it can only recognise 2 virtual sockets. But if the four vCPUs are configured across 1 or 2 virtual sockets (4 cores in 1 socket, or 2 cores per socket), all four vCPUs are recognised.
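As a concrete illustration, the socket/core split ends up as two values in the VM's .vmx configuration (these are the standard ESXi config keys; the same thing is exposed as "Cores per Socket" in the VM settings UI):

```
numvcpus = "4"
cpuid.coresPerSocket = "4"
```

With these values the guest sees one virtual socket with four cores, which Windows 10 Pro accepts. The default of 1 core per socket would instead present four single-core sockets, and the desktop edition would only use two of them.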
Thanks for this, I will try and get my head round that later when I am home. I have not gone into the settings much when setting up, I just told it how many cores I wanted (4); presumably there is a way to tell it to take the cores from the same CPU or whatever?
I have two physical CPUs.
Options etc have changed on this new server compared to my old server, presumably because the cpus are newer.
There should be an arrow to expand the view, instead of just showing the number of CPUs, in the VM settings of the ESXi host client, and there should be a socket count shown. The options should be the same regardless of the host CPU configuration.
If the host has two physical CPUs, it is better to enable NUMA at the host UEFI/BIOS (assuming that it is an option). For any VM, I don't think it will see the NUMA architecture of the host but the ESXi hypervisor takes advantage of it.
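If you're curious whether ESXi is actually seeing the two sockets as separate NUMA nodes, this (run in an ESXi shell) should report it; the exact field wording may differ slightly between versions:

```shell
# Reports physical memory details, including a NUMA node count line;
# with NUMA enabled on a dual-socket host you'd expect 2 nodes
esxcli hardware memory get
```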
Windows desktop OSes are limited to 2 CPU sockets. So if you configure 4 vCPUs for the given VM, you should expand the CPU option and specify whether to present the 4 vCPUs as 1 socket x 4 cores or 2 sockets x 2 cores, but not 4 sockets x 1 core, because then the OS only sees half of them due to licence/product limitations (but that's MS and not VMware!).