Hi. I'm trying to add a vmknic with jumbo frames to my distributed virtual switch.
I followed the vSphere Command-Line Interface Installation and Reference Guide:
vicfg-vmknic -a -i DHCP "vmk5"
I get this error message:
Failed to add vmkernel nic: A specified parameter was not correct.
Vim.Host.VirtualNic.Specification.Ip
What's wrong?
I tried all combinations of parameters with the same result.
OK! I figured out a workaround.
Since adding a VMkernel interface remotely doesn't seem to work with a DVS, I simply created one on a standard vSwitch and then migrated it to the DVS. And voilà: the MTU of 9000 migrated intact with it! Here are the steps I followed:
1. Put the ESX host in maintenance mode, evacuating VMs manually if necessary.
2. Remove the iSCSI/NFS VMkernel interface from the DVS.
3. Create a standard virtual switch with no physical adapters. It does not need to be a VMkernel switch; VM networking is fine. Make a note of the name you gave to the port group created on the new vSwitch.
4. Using the remote command line, add the new VMkernel interface, specifying an appropriate jumbo MTU (9000) and the port group on the new standard vSwitch (see the sketch after these steps). For example:
esxcfg-vmknic.pl --server
If you get 3 replies, everything works. If you get failures, something went awry. Go back and verify the interface is configured properly using the esxcfg-vmknic.pl --list command.
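Filling in steps 3 and 4 with concrete values, the commands would look roughly like this. The host name, IP, netmask, and port group name below are placeholders, and the flags just mirror the ones used elsewhere in this thread, so double-check them against your CLI version:
# Step 3: temporary standard vSwitch with no uplinks, plus a port group on it
esxcfg-vswitch.pl --server esx01.example.com -a vSwitch9
esxcfg-vswitch.pl --server esx01.example.com -A "JumboTemp" vSwitch9
# Step 4: add the VMkernel NIC with a jumbo MTU on that port group
esxcfg-vmknic.pl --server esx01.example.com -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 -p "JumboTemp"
# After migrating the vmknic to the DVS, test with jumbo-sized pings from the
# host console; vmkping sends 3 packets by default, hence the "3 replies" above
vmkping -s 8972 192.168.1.1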
Enjoy!
M
You need to specify the port group to which the new VMkernel interface will connect. If you haven't created the port group yet, you need to create it first.
Try this:
[vi-admin@scofield profile]$ esxcfg-vswitch -A "newVMKernelPortgroup" vSwitch0
and then create the new VMkernel interface:
[vi-admin@scofield profile]$ esxcfg-vmknic -a -i DHCP "vmk5" -p "newVMKernelPortgroup"
Added the VMkernel NIC successfully
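To double-check the result, listing the VMkernel NICs should show the new interface and its settings:
[vi-admin@scofield profile]$ esxcfg-vmknic -l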
=========================================================================
William Lam
VMware vExpert 2009
VMware ESX/ESXi scripts and resources at:
VMware Code Central - Scripts/Sample code for Developers and Administrators
If you find this information useful, please award points for "correct" or "helpful".
It works fine.
But I was asking about the distributed vSwitch.
You should take a look at Scott Lowe's post about dvs + Jumbo Frames: http://blog.scottlowe.org/2009/05/21/vmware-vsphere-vds-vmkernel-ports-and-jumbo-frames/
There are a few things you need to do to get that working; it's not so straightforward.
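The gist of that post, if memory serves, is that on classic ESX you create the vmknic directly against the dvSwitch and a free dvPort ID from the host console, along these lines (the IP, dvSwitch name, and port ID below are placeholders; use esxcfg-vswitch -l on the host to find a free dvPort):
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 -s dvSwitch0 -v 100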
=========================================================================
William Lam
VMware vExpert 2009
VMware ESX/ESXi scripts and resources at:
VMware Code Central - Scripts/Sample code for Developers and Administrators
If you find this information useful, please award points for "correct" or "helpful".
Ok, thank you.
I found an interesting thing.
Look:
First: everything works fine over ssh to the ESX console.
Second: when I try to repeat this on ESXi with the vSphere CLI (in the screenshot you can see ssh to the vMA), I get an error.
I am having the same problem with ESXi and the CLI and am receiving the same error (cannot specify dvsName or dvPortID). If it matters, I am using the Nexus 1000V, not the native distributed virtual switching. I have a lab that I can use to test potential remedies if anyone has any to try.
As far as I know, the only fix is the workaround I detail above. It works fine with ESXi.
Another workaround that always works is to remove and re-add the vmkX interface from the host itself in unsupported (Tech Support) mode.
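For anyone who hasn't tried that: a rough sketch from the host's Tech Support Mode console, using the positional port-group syntax the console version accepts. The port group, IP, and netmask are placeholders, and deleting the vmknic interrupts any traffic on it, so don't do this over the interface you're removing:
# delete the existing VMkernel NIC, then recreate it with MTU 9000
esxcfg-vmknic -d "iSCSI-PG"
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "iSCSI-PG"
For a vmknic on a dvPort, substitute the -s/-v form shown earlier for the port group argument.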
Hello.
Is there any way to do this through the GUI?
I have 2 hosts, each with 2 NICs.
On the second host this worked fine, but I am unable to do so on the first host.
I did it the same way on both, and I get "A specified parameter was not correct."
I have tried everything I can think of, including resetting the management network and then moving the VMkernel interface to the second NIC, and I still can't move the primary NIC to the vDS.
Thanks in advance.
The workaround mentioned above is what worked for me:
To confirm the membership of the vmk, just run: esxcfg-vswitch -l
To check the mtu of the newly created vmk: esxcfg-vmknic -l
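If the listings are long, you can filter them for the interface; vmk1 here is just an example name:
esxcfg-vswitch -l | grep vmk1
esxcfg-vmknic -l | grep vmk1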
That's all.