Hello,
I have a problem: I am not able to get a VM on our ESXi 6.0.0 host to PXE boot from our WDS server if I choose EFI as the firmware option. If I choose BIOS instead, everything works fine.
When I use EFI I get this screen when starting up:
After that I am presented with the Boot Manager and can select EFI Network manually; then I get the following screen. Nothing happens there. After a few seconds I am returned to the Boot Manager, where I can select another boot medium.
Unfortunately I don't have access to our WDS server to view any logs, but I will ask for this if I can't solve the problem otherwise. If you need any more information, feel free to ask.
KR,
Mango
I just ran into a very similar issue. In the end, it was a whole combination of different things that caused it. First, using vmxnet3 NICs on the VMs did permit EFI VMs to boot, it just took in excess of five minutes to complete the process. I could never (or never had enough patience to) let E1000-type NICs complete booting. It really looked like they would hang midway through the process.
Based on quite a bit of research, I started playing around with the different TFTP options, both in the WDS properties GUI and by creating registry keys to set default values (which I'm not even sure applies in Server 2016 any more). A lot of what I read indicated that TFTP clients are sensitive to fragmented packets, and that TFTP's requirement of an ACK for every frame limits throughput. I'm operating on a 10G network, and the DHCP server, WDS server, and client VM are all virtual and hosted on the same vSphere host, so I want to take full advantage of the maximum speed possible.
Thus, I enabled jumbo frame support on the vSwitch that all of these VMs were connected to (the physical switches in my environment were already set for it), then set WDS' Max Frame size to 8192, which keeps it under the 9000 MTU threshold and thereby prevents fragmentation. Additionally, I disabled the Variable Window negotiation setting and hard-coded a registry value to set it to 8 windows.
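For anyone who wants to script this instead of clicking through the GUI, here is a sketch of the registry side. The key path and value names (MaximumBlockSize, EnableVariableWindowExtension) are the ones documented for the WDS TFTP provider, but they may differ across Windows Server versions, so verify them on your server before applying:

```powershell
# Assumed WDS TFTP provider registry key; confirm it exists on your Server version.
$tftp = 'HKLM:\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSTFTP'

# Cap the TFTP block size below the 9000-byte jumbo-frame MTU to avoid fragmentation.
Set-ItemProperty -Path $tftp -Name 'MaximumBlockSize' -Type DWord -Value 8192

# Disable Variable Window Extension negotiation (0 = off).
Set-ItemProperty -Path $tftp -Name 'EnableVariableWindowExtension' -Type DWord -Value 0

# Restart WDS so the new TFTP settings take effect.
Restart-Service WDSServer
```

If you are staying on a standard 1500 MTU network, substitute a MaximumBlockSize of 1024 as discussed below.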
My EFI VMs now boot as fast as, if not faster than, BIOS ones. I tried almost every possible combination of different settings to get this working, and finally settled on this.
My setup definitely did not like the absolute max value of 16384, and the problem persisted until I set the WDS Max Frame size to be under the MTU threshold. For most Ethernet networks the default MTU (also VMware's default) is 1500, so if you want to avoid jumbo frames you should probably try setting the Max Frame size to 1024 to keep frames from fragmenting. In our setup, though, jumbo frames also really increased the speed of the TFTP transfer.
I was having the same problem and changing to VMXNET3 worked for me. Thanks.
Thank you! Switching from E1000 to VMXNET 3 fixed it for me. Thanks!!
I just fixed it by disabling NetBIOS over TCP/IP under the TCP/IPv4 options -> WINS.
I had this issue very often with Hyper-V.
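If you want to apply that same setting without the GUI: the "Disable NetBIOS over TCP/IP" radio button corresponds, as far as I know, to the per-interface NetbiosOptions value (2 = disabled) under the NetBT service key. This is a sketch that disables it on every interface, so check which interfaces you actually want to touch first:

```powershell
# Per-interface NetBT settings; NetbiosOptions: 0 = from DHCP, 1 = enabled, 2 = disabled.
$base = 'HKLM:\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces'

# Disable NetBIOS over TCP/IP on all interfaces (adjust the filter for your setup).
Get-ChildItem -Path $base | ForEach-Object {
    Set-ItemProperty -Path $_.PSPath -Name 'NetbiosOptions' -Value 2
}
```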
Thanks lsjordan!
For people searching for the code, this is how to do it.
Copy-Item -Path 'C:\Windows\System32\RemInst\boot\x64\wdsmgfw.efi' -Destination 'D:\RemoteInstall\Boot\x64'
Just replace the destination path with your path.
However, strange as it seems, the destination folder must be x64 and not x64uefi.
If you change the network adapter of your VM from E1000e to E1000, WDS + PXE will start normally.
I have the same problem with VMware 6.7.
Try it without Secure Boot. This worked for me.
"I was having the same problem and changing to VMXNET3 worked for me. Thanks."
I can confirm that this actually worked for me.
All the other options mentioned here didn't work in my case. We're on VMware ESXi 6.7.0, build 14320388.
I create the VM with an E1000e network adapter and it works fine for me.
After I have installed the operating system I change the network adapter to vmxnet3.
You must change the adapter before you configure the IP address. Otherwise you will lose the configuration you already entered.
I will echo that this issue still exists. On my new cluster (6.7), it defaults to EFI firmware (recommended), but it uses the E1000e network card. It appears that these are not compatible settings for WDS.
Switching to the vmxnet3 adapter completely solves the problem.
I am not using this network card anywhere else, so I don't know if it's as stable as the E1000e, but I am going to try it out on this new machine I am building.
I found a solution that worked for me instantly. Went into the settings of the VM and switched the Network Connection type from NAT to Bridged. Worked perfectly. I also tried this out using a VMNet connection with the auto bridge option and that worked as well!
Ever find a solution to this? We were working fine and then one day poof it stopped working. Cannot get any support from VMware.
After much troubleshooting, I got PXE over EFI working by deleting option 060 from the DHCP Scope Options and Server Options.
I set up a DHCP scope policy for the PXEClient vendor class with options 066 and 067.
Per Microsoft:
https://learn.microsoft.com/en-us/troubleshoot/mem/configmgr/os-deployment/advanced-troubleshooting-...
Also, afterwards the PXE boot stopped working again and option 060 had returned on the DHCP IPv4 policy, even though I had deleted it. I deleted it again and PXE started working again.
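In case it helps anyone script the same change, here is a sketch using the DhcpServer PowerShell module. The scope ID, policy name, WDS hostname, and boot file path are placeholders for my environment, so substitute your own values:

```powershell
# Remove vendor-class option 060 at both server and scope level.
Remove-DhcpServerv4OptionValue -OptionId 60
Remove-DhcpServerv4OptionValue -ScopeId 10.0.0.0 -OptionId 60

# Create a policy that matches PXE clients by vendor class.
Add-DhcpServerv4Policy -Name 'PXEClient' -Condition OR -VendorClass EQ, 'PXEClient*'

# Set options 066 (boot server) and 067 (boot file) only for that policy.
# 'wds.example.com' and the x64 EFI boot file path are example values.
Set-DhcpServerv4OptionValue -PolicyName 'PXEClient' -OptionId 66 -Value 'wds.example.com'
Set-DhcpServerv4OptionValue -PolicyName 'PXEClient' -OptionId 67 -Value 'boot\x64\wdsmgfw.efi'
```

Note that if something (WDS itself, or another admin tool) keeps re-adding option 060, you will need to re-run the removal as described above.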