VMware Cloud Community
xchose
Contributor

Virtualized ESXi 5.1 on ESXi 5.0 - longmode required for 64-bit guest OS support

Hello,

I have a physical Dell PowerEdge R710 server running ESXi 5.0. I would like to build a lab environment, so on this physical host I installed two virtual ESXi 5.1 hosts, vCenter, the vCenter Web Client, and so on. Everything was working fine until I tried to run a VM with a 64-bit OS on one of the virtualized hosts.

I received an error along the lines of: longmode is required for 64-bit guest OS support. See the attached picture.

I am not sure what is going on. Please send me some links or advice if you have already seen this error.

Accepted solution: see Datto's step-by-step reply further down in the thread.

12 Replies
Dave_Mishchenko
Immortal

Check out this document for the settings you need to make this happen - http://communities.vmware.com/docs/DOC-8970.

Ethan44
Enthusiast

Hi

Welcome to the communities.

Please make the small change below and let us know how it goes.

Go to Edit Settings -> Options -> Guest Operating System, select 'Other', and then choose VMware ESXi 5.x.

"a journey of a thousand miles starts with a single step."
xchose
Contributor

Thanks for the help. Anyway:

I had already installed these hosts with the guest OS type set to VMware ESXi 5.x; without that it was not possible at all.

As for the linked document, I read through it quickly and found this tip for ESXi 5.0:

/etc/vmware/config file on the physical host:

vhv.allow = TRUE

I am not able to try this right now because part of a production environment is running on this host and I cannot migrate it to another host at the moment. I am waiting for two more hosts for my lab environment, but that will take some time, so I want to test this in a virtualized environment in the meantime.

Thanks Martin

a_p_
Leadership

Discussion moved from VMware ESXi 5 to Nested Virtualization

Datto
Expert

In ESXi 5.1 (i.e. when the physical host runs ESXi 5.1), the correct addition is to put the following:

vhv.enable = TRUE

in the VMX file of the nested ESXi 5.1 VM.

Make sure you don't put it into the /etc/vmware/config file since under ESXi 5.1 that no longer works.
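For reference, a minimal sketch of what the relevant entries in the nested VM's .vmx file might look like in that case. VMX syntax quotes the values, and the guestOS identifier shown is the one commonly used for the "VMware ESXi 5.x" guest type; treat both details as assumptions on my part rather than something confirmed in this thread:

guestOS = "vmkernel5"
vhv.enable = "TRUE"

Edit the file only while the VM is powered off (or add the entry through the VM's advanced configuration parameters in the client), then power the VM back on.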

Datto

Datto
Expert

Also, make sure the nested ESXi 5.1 VMs are shut down and powered off before putting the above line into the VMX file.

Datto

Datto
Expert

Ahhh...just saw that you're running ESXi 5.0 on the physical host rather than ESXi 5.1.

Nevermind. My suggestions need ESXi 5.1 running on the physical ESX host.

Datto

Datto
Expert

Okay, let's try this again.

I just verified the procedure outlined below using ESXi 5.0U1 build 623860 on the physical ESXi host and ESXi 5.1a build 838463 as the nested ESXi VM, then loaded up a Windows 2008 R2 SP1 x64 VM on the nested ESXi 5.1a host. The x64 Windows 2008 R2 SP1 VM worked fine (although much slower than if everything were running on a physical server, of course).

---------

Assuming the physical CPUs in your ESXi 5.0 host have EPT capability (required for Intel CPUs) or RVI capability (required for AMD CPUs), make sure you have:

vhv.allow = TRUE

inserted into the /etc/vmware/config file of your physical ESXi 5.0 host. This is required to get nested ESXi hosts to work, but no reboot of the physical ESXi 5.0U1 host should be necessary for the setting to take effect.
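If you'd rather make that change from an SSH session than edit the file by hand, a minimal sketch, assuming the ESXi Shell/SSH is enabled on the physical host (note that most write-ups quote the value; keeping a backup of the file first is cheap insurance):

cp /etc/vmware/config /etc/vmware/config.bak           # back up the current config
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config        # append the nested-virtualization flag
grep vhv /etc/vmware/config                            # confirm the line is there

As noted above, no reboot of the physical host should be needed for the setting to take effect.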

Create a VM for the ESXi 5.1a host, initially choosing the "Other x64" guest OS type and E1000E as the NIC type, but don't power it on yet.

Once the VM is created and still powered off, change the guest OS type to VMware ESXi 5.x (which should now be exposed and available to choose).

Also, in the ESXi 5.1a VM, under Edit Settings, go to the Options tab and, under CPU/MMU Virtualization, choose the bottom option (hardware support for both CPU and MMU virtualization).
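If you prefer to set that through the VM's .vmx file instead of the client, the commonly cited equivalents of the "hardware CPU and MMU" choice are the two keys below; they come from general nested-ESXi write-ups rather than from this thread, so treat them as an assumption:

monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"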

At this point, you may or may not need a special CPU mask from J Mattson (posted here in this community forum) to insert into the VMX file of the nested ESXi 5.1a VM. Right now I only have the AMD settings rather than the Intel settings available to me, but it will likely work without any special CPU mask, so just keep going.

Make sure you have assigned at least 2 vCPUs and 2GB of memory to the nested ESXi 5.1a VM (required to get ESXi 5.1a to install).
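For what it's worth, the corresponding .vmx entries for that sizing would look roughly like the lines below, again edited only while the VM is powered off (the key names are standard VMX keys but are my addition, not something from this thread):

numvcpus = "2"
memsize = "2048"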

Boot your nested ESXi 5.1a VM and install ESXi 5.1a into the VM.

After the nested ESXi 5.1a VM has had ESXi installed, boot the ESXi 5.1a VM and configure it to your liking.

When you create a Standard vSwitch in the nested ESXi 5.1a VM, and you want the VMs running on that nested host to have outside network visibility, you'll need to enable Promiscuous Mode on the Standard vSwitch that holds the VM Port Group as well as on the Standard vSwitch that holds the Management Port of the nested ESXi 5.1a VM. (It's under Edit -> Security tab for the Standard vSwitch; change the Promiscuous Mode setting from the default of Reject to Accept.) If you have just one Standard vSwitch for everything (VM, Management, and vMotion/VMkernel), then you only need to change Promiscuous Mode on that one vSwitch. If you're using a dvSwitch, you'll need to enable Promiscuous Mode on the individual port groups rather than on the dvSwitch itself.
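If you'd rather flip Promiscuous Mode from the command line than in the vSphere Client, a sketch using esxcli on ESXi 5.x; vSwitch0 is only a placeholder, so substitute the Standard vSwitch(es) described above:

esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true    # set Promiscuous Mode to Accept
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0                             # verify the current policy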

Install your 64 bit Windows VM onto the nested ESXi 5.1a VM.

Remember, none of this is going to work for running x64 Windows VMs on your nested ESXi 5.1a VM unless your physical CPUs have EPT/RVI capability built in. Note, before anyone asks: no Intel CPU with "Core 2" in the model name has EPT capability. Server-class Intel CPUs generally need to be of the Nehalem family or newer to have EPT. Server-class AMD CPUs need to be 23xx-series Opterons or higher, with at least B3 stepping for the 23xx series, to have the RVI capability that allows this to work.
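A quick way to check what the physical host itself reports, assuming you have shell access to it; the interpretation of the value comes from the usual nested-ESXi write-ups rather than from this thread:

esxcfg-info | grep "HV Support"    # a value of 3 is commonly reported to mean VT-x/AMD-V with EPT/RVI is present and enabled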

Whew, back to the future with ESXi 5.0u1 on the physical server. Where's my Delorean?


Datto

xchose
Contributor

Thanks all

Adding this to the physical ESXi host really helped, and I can now run 64-bit VMs on the virtualized ESXi. Thanks again.

/etc/vmware/config file on the physical host:

vhv.allow = TRUE
willrodbard
VMware Employee

Hi Datto,

I would like to enter the fray so to speak.

I have the following:

Physical: ESXi 5.0 U1 on UCS blades

2 x nested ESXi 5.1, build 838462

The nested ESXi 5.1 VMs build and power on fine (they were configured exactly as you described here) and seem to be usable. I can build and deploy 32-bit VMs fine; however, as soon as I try to deploy a 64-bit VM, the nested ESXi host panics and PSODs (see attached).

There seems to be a missing flag somewhere, but I can't work out where. I have configured everything as per this post and various other posts, including those of William Lam, but it's still not working.

Any ideas would be gratefully received

Cheers

Will.

admin
Immortal

Don't use vmxnet3 for the outer VMs.  Try using e1000 instead.
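For example, with the outer (nested ESXi) VM powered off, the NIC type line in its .vmx file would look roughly like the following; ethernet0 is just the first virtual adapter, so adjust the index to match yours:

ethernet0.virtualDev = "e1000"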

willrodbard
VMware Employee

Hi JMATTSON,

Thanks for the advice. Bizarrely, I had just found this post: http://communities.vmware.com/message/2196095#2196095

in which you advise and explain (very well) the very same thing.

I can confirm that, after several hours/days of grief, this has now resolved my issues.

Thank you very much.

Points to you 🙂

Will.
