You do not need a VMkernel port on "VM Network"; in fact, that is exactly what is blocking that port group from being used.
VM Network is supposed to be a VM Port Group, i.e. the thing you connect the vNICs of your VMs to.
By adding a VMkernel port to it you have stopped it from being a VM Port Group, so now you have no VM Port Groups.
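If you'd rather fix it from the CLI, something like this should remove the stray VMkernel port and turn "VM Network" back into a plain VM Port Group. A sketch only; check the list output first, as the vmk1 name below is an assumption:

  # List VMkernel interfaces and find the one sitting on "VM Network"
  esxcli network ip interface list
  # Remove the stray VMkernel port (assuming it shows up as vmk1)
  esxcli network ip interface remove --interface-name=vmk1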
What I was trying to work around is that the original NIC on the motherboard doesn't seem to support jumbo frames. So the plan was to put management on that NIC and everything else on the other two NICs with jumbo frames and teaming.
Once I put everything back to MTU 1500 with all three NICs on a single vSwitch, for some reason I'm seeing timeout errors on the iSCSI connection, even with just a crossover cable between the NAS and the server. Back to the drawing board then...
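I suppose the first thing to check is whether packets even make it across at a given size. vmkping from the ESXi shell can force the packet size and forbid fragmentation, which quickly exposes an MTU mismatch. A sketch, assuming the iSCSI VMkernel interface is vmk1 and the NAS answers on 192.168.10.20 (both are placeholders):

  # Basic reachability over the iSCSI VMkernel interface
  vmkping -I vmk1 192.168.10.20
  # At MTU 1500 the largest unfragmented payload is 1472 bytes (-d = don't fragment)
  vmkping -I vmk1 -d -s 1472 192.168.10.20
  # For jumbo frames the equivalent test is 8972 bytes (9000 minus 28 bytes of headers)
  vmkping -I vmk1 -d -s 8972 192.168.10.20

If the 1472-byte test already drops packets, the problem is below the MTU level (cabling, duplex, or the NIC/driver itself).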
If you can (and want to) use the same physical NICs, just add a VM Port Group to the vSwitch and call it whatever you want; you'll then have a VM Port Group to connect the VM NICs to.
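From the command line that's a one-liner (the port group and vSwitch names here are just examples):

  # Add a VM Port Group to the existing vSwitch
  esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch0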
You can create two virtual switches: one using the embedded NIC as uplink, with a VMkernel port for management, and the other for your iSCSI network using the two adapters that support jumbo frames. While creating the second switch, add a virtual machine port group to attach the virtual machines' adapters, then create a new VMkernel port on that same switch. This will give both your ESXi host and your VMs access to the iSCSI network.
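A rough CLI sketch of that layout; the switch name, vmnic numbers, port group names and addresses are all assumptions, so adjust them to your host:

  # Second vSwitch with the two jumbo-capable NICs as uplinks
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  # VM Port Group for guest vNICs on the iSCSI network
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-VM
  # VMkernel port for the host's own iSCSI traffic, also at MTU 9000
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-VMkernel
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-VMkernel --mtu=9000
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.10 --netmask=255.255.255.0 --type=static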
You could use a single virtual switch, but you may have to override the default failover policy on each VM Port Group and VMkernel port so that each one uses a different uplink to exit the virtual switch.
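For example, the per-port-group override can be set like this (again just a sketch; the port group and vmnic names are assumptions):

  # Management leaves through the onboard NIC only
  esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0
  # iSCSI traffic uses the two jumbo-capable NICs
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-VMkernel --active-uplinks=vmnic1,vmnic2

That way the jumbo-frame iSCSI traffic never exits through the onboard NIC that can't handle it.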