VMware Cloud Community
spigotadmin
Contributor

Nested ESXi 6.7 running under KVM - VM performance unusable

I have been trying to set up a lab environment running a couple of ESXi 6.7 hosts under QEMU/KVM.  I've spent many hours going through this forum and other sources of info but am stuck on getting a stable/usable environment with a couple of Windows Server VMs.

KVM Host -

- Host CPUs - 2x Xeon E5620 (Verified the CPUs support VT-x with EPT)

- CentOS 7.5 - fully updated

- QEMU emulator version 2.12.0 (qemu-kvm-ev-2.12.0-18.el7_6.1.1)

- cat /sys/module/kvm_intel/parameters/nested - Y

- cat /sys/module/kvm_intel/parameters/ept - Y

- cat /sys/module/kvm/parameters/ignore_msrs - Y
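For anyone reproducing this setup: if those three flags do not already read Y, they can be made persistent across reboots with a modprobe config (a sketch; the filename is arbitrary, and the kvm/kvm-intel modules must be reloaded or the host rebooted afterwards):

```
# /etc/modprobe.d/kvm-nested.conf  (hypothetical filename)
options kvm-intel nested=1 ept=1
options kvm ignore_msrs=1
```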

ESXi VM created -

virt-install --name esxi67 --ram 8192 --disk path=/virtstore-raid10-hdd/esxi67.qcow2,size=80,bus=sata --cpu host-passthrough --vcpus=8 --os-type linux --os-variant=virtio26 --network bridge=br1,model=e1000 --graphics spice,listen=0.0.0.0,password=password --video qxl --cdrom /var/lib/libvirt/isos/esxi-67.iso --features kvm_hidden=on --machine q35
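With those virt-install flags, the resulting libvirt domain XML should contain roughly the following (a sketch of only the relevant elements; the actual XML can be checked with `virsh dumpxml esxi67`):

```xml
<!-- Relevant fragments of the esxi67 domain definition -->
<cpu mode='host-passthrough'/>   <!-- expose the host's VT-x/EPT to ESXi -->
<features>
  <kvm>
    <hidden state='on'/>         <!-- produced by --features kvm_hidden=on -->
  </kvm>
</features>
```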

The ESXi host boots, runs, and seems stable.

ESXi Nested Host -

- Version - 6.7.0 (Build 8169922)

- Added the following to /etc/vmware/config

       - hv.assumeEnabled = "TRUE"

       - vmx.allowNested = "TRUE"

- I have two VMs created - one with Server 2012R2 and one with Server 2008R2 - generic settings - 4 CPUs / 4GB RAM

- Added vmx.allowNested = "TRUE" to both VMs through the Web UI

- Both machines will boot, sort of. The Server 2012 install sometimes hangs at the boot splash for hours; once the VMs do boot, they are somewhat functional but freeze up constantly

- If I try to enable "Hardware virtualization" (expose hardware-assisted virtualization to the guest OS) on the VMs, it does not work

I have spent hours reading forums and trying different combinations of settings but cannot get a stable/usable environment.

Does anyone have any ideas or insights into what I am missing?  I think the main issue is that I cannot enable Hardware Virtualization on the nested VMs, even though everything should be set up correctly for the nested VMs to see VT-x/EPT as available on their CPUs.

Sincerely,

Dave Redmore

5 Replies
daphnissov
Immortal

I'm not sure you're going to have a whole lot of success doing this nested on KVM. Nested virtualization is rarely a performant way to host a lab, and on top of KVM it is a big question mark. You might meet with more success on a KVM-oriented forum.

luxturbo
Contributor

I am in the same boat. Plenty of articles and how-tos have been written on the subject with apparent success, but when I implement the steps, things just hang and take forever. Though this is not a supported configuration and there is no help to be had on this forum, I want to document what I have found for anyone else crazy enough to try this.

My environment

CPU : 2 x  Xeon E5-2680 v2

OS : CentOS 7.6.1810

Kernel : 3.10.0-957.1.3

libvirt : 4.5.0-10

qemu : 10:2.10.0-21

Findings so far

After installing ESXi 6.7.0 and starting a VM, things get really slow and the VMs hang. If I look at the KVM host, I can see one core being used at 100% even though 10 cores are provided to ESXi and 4 cores are allocated to the VM in ESXi. At this point I am not sure what is causing this, as none of the logs I have looked at have revealed anything meaningful. My next steps are to:

  • Modify the CPU topology KVM presents to ESXi. Though pass-through is enabled, ESXi may be expecting something it is not getting.
  • Enable IOMMU on the KVM host. I doubt this will do anything, but it's a thing to try.
  • Downgrade ESXi to 6.5, then 6.0, etc.
  • Abandon running ESXi nested and rebuild another server for this purpose. It would be wasteful, but it is what it is.
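On the first point, libvirt by default presents each vCPU as its own single-core socket, which is an unusual topology for ESXi to see. The topology can be set explicitly in the domain XML (a sketch, assuming the 10 vCPUs should appear as one socket with 10 cores; edit with `virsh edit`):

```xml
<!-- Present 10 vCPUs to ESXi as 1 socket x 10 cores x 1 thread -->
<vcpu placement='static'>10</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='10' threads='1'/>
</cpu>
```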

I plan on keeping this thread updated with my findings.

luxturbo
Contributor

So far, no dice; I even tried pinning the VM to a specific CPU without success. Further reading has turned up evidence that this is a "known" issue with 64-bit VMs running in a nested ESXi environment, and that newer Linux kernels supposedly handle it better. I am going to try running a 32-bit VM and see if anything changes; then, regardless of the outcome, I am going to install Fedora and try one more time.
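For anyone repeating the pinning experiment: rather than one-off virsh commands, the pinning can be recorded in the domain XML so it survives restarts (a sketch for a 4-vCPU guest pinned to host cores 0-3; the core numbers are arbitrary and should be chosen within one NUMA node):

```xml
<!-- virsh edit <domain>: pin each vCPU to a fixed host core -->
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
```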

luxturbo
Contributor

32-bit VMs have the same problem. I am going to export the VM, install Fedora Server 29, and then try again.

luxturbo
Contributor

I have not had a chance to install Fedora, but I did speak with an engineer who works with VMware. During a system upgrade, I asked the engineer to look at my server while we waited for things to reboot. He compared my setup with that of his lab (he runs ESXi nested in KVM as well). He could not find anything wrong with my system and agrees that the most likely culprit is the older kernel.

After my many many other tasks are done, I will give this a shot.
