VMware Cloud Community
michigun
VMware Employee

Failed to add vmkernel nic: A specified parameter was not correct. vSphere CLI + ESXi 4

Hi. I'm trying to add a vmknic with jumbo frames to my distributed virtual switch.

I read the vSphere Command-Line Interface Installation and Reference Guide and ran:

vicfg-vmknic -a -i DHCP "vmk5"

I get this error message:

Failed to add vmkernel nic: A specified parameter was not correct.

Vim.Host.VirtualNic.Specification.Ip

What's wrong?

I tried all combinations of parameters with the same result.

-- http://www.vm4.ru/p/vsphere-book.html
11 Replies
lamw
Community Manager

You need to specify the portgroup to which the new VMkernel interface will connect. If you haven't created the portgroup yet, you need to create it first.

Try this:

[vi-admin@scofield profile]$  esxcfg-vswitch -A "newVMKernelPortgroup" vSwitch0

and then create the new VMkernel interface:

[vi-admin@scofield profile]$ esxcfg-vmknic -a -i DHCP "vmk5" -p "newVMKernelPortgroup"

Added the VMkernel NIC successfully
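If that succeeds, a quick sanity check (assuming the portgroup and interface names used above) is to list what was created:

```shell
# List VMkernel NICs; vmk5 should appear with portgroup "newVMKernelPortgroup"
esxcfg-vmknic -l

# List vSwitches and portgroups; "newVMKernelPortgroup" should show up on vSwitch0
esxcfg-vswitch -l
```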

=========================================================================

William Lam

VMware vExpert 2009

VMware ESX/ESXi scripts and resources at:

Twitter: @lamw

VMware Code Central - Scripts/Sample code for Developers and Administrators

VMware Developer Community

If you find this information useful, please award points for "correct" or "helpful".

michigun
VMware Employee

It works fine.

But I was asking about the distributed vSwitch.

-- http://www.vm4.ru/p/vsphere-book.html
lamw
Community Manager

You should take a look at Scott Lowe's post about dvs + Jumbo Frames: http://blog.scottlowe.org/2009/05/21/vmware-vsphere-vds-vmkernel-ports-and-jumbo-frames/

There are a few things you need to do to get that working; it's not so straightforward.

michigun
VMware Employee

Ok, thank you.

I found an interesting thing. Look:

First: everything works fine over SSH to ESX.

Second: when I try to repeat it on ESXi with the vSphere CLI (in the screenshot you see an SSH session to the vMA), I get the error.

-- http://www.vm4.ru/p/vsphere-book.html
MarkEwert
Contributor

I am having the same problem with ESXi and the CLI, and I receive the same error (I cannot specify dvsName or dvPortId). If it matters, I am using the Nexus 1000V, not the native distributed virtual switch. I have a lab I can use to test potential remedies if anyone has any to try.

MarkEwert
Contributor

OK! I figured out a workaround.

Since adding a VMkernel interface remotely doesn't seem to work with a DVS, I simply created one on a standard vSwitch and then migrated it to the DVS. And voilà: the MTU of 9000 migrated intact with it! Here are the steps I followed:

1. Put the ESX host in maintenance mode (evacuate VMs manually if necessary).

2. Remove the iSCSI/NFS VMkernel interface from the DVS.

3. Create a standard virtual switch with no physical adapters. It does not need to be a VMkernel switch; VM Networking is fine. Make a note of the name you gave the PortGroup created on the new vSwitch.

4. Using the remote command line, add the new VMkernel interface, specifying an appropriate jumbo MTU (9000) and the PortGroup on the new standard vSwitch. For example:

esxcfg-vmknic.pl --server

If you get three replies, everything works. If you get failures, something went awry; go back and verify the interface is configured properly with the esxcfg-vmknic.pl --list command.
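The steps above can be sketched end to end with the vSphere CLI. The server name, IP address, and vSwitch/portgroup names here are hypothetical, and steps 1-2 are done in the vSphere Client first:

```shell
# 3. Create a temporary standard vSwitch (no uplinks) and a portgroup on it
esxcfg-vswitch.pl --server esx01.example.com -a vSwitchTemp
esxcfg-vswitch.pl --server esx01.example.com -A "JumboTemp" vSwitchTemp

# 4. Add the VMkernel NIC on that portgroup with a jumbo MTU
esxcfg-vmknic.pl --server esx01.example.com -a -i 10.0.0.5 -n 255.255.255.0 -m 9000 "JumboTemp"

# Verify the MTU before migrating the vmk to the DVS in the vSphere Client
esxcfg-vmknic.pl --server esx01.example.com -l
```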

Enjoy!

M

kmarty009
Contributor

Did you ever get this to work? I am running into the exact same error on ESXi as you did, but I cannot find anyone who has fixed it. If you did, could you please let me know?

MarkEwert
Contributor

As far as I know, the only fix is the workaround I detail above. It works fine with ESXi.

Reply
0 Kudos
admin
Immortal

Another workaround that always works is to remove and re-add the vmkX in unsupported mode.
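A minimal sketch of that remove/re-add from the unsupported (Tech Support Mode) console; the portgroup name and addresses here are hypothetical:

```shell
# Delete the VMkernel NIC attached to the given portgroup...
esxcfg-vmknic -d "iSCSI1"

# ...then re-create it with the desired MTU
esxcfg-vmknic -a -i 10.0.0.5 -n 255.255.255.0 -m 9000 "iSCSI1"
```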

PSWerks
Contributor

Hello.

Is there any way to do this through the GUI?

I have two hosts, each with two NICs.

On the second host this worked fine, but I am unable to do it on the first host. I did it the same way on both, and I get "the specified parameter is not correct."

I have tried everything I can think of, including resetting the management network and then moving the VMkernel to the second NIC, and I still can't move the primary NIC to the vDS.

Thanks in advance.

jdelgadocr
Contributor

The workaround mentioned above is what worked for me:

  • Add a port group to a regular vSwitch: esxcfg-vswitch -A "iSCSI3" vSwitch1
  • Add the new VMkernel NIC to this portgroup: esxcfg-vmknic -a -i DHCP -p "iSCSI3" -m 9000
  • Use the GUI to migrate the new vmk to the desired dvSwitch and adjust the IP configuration

To confirm the membership of the vmk, just run: esxcfg-vswitch -l

To check the mtu of the newly created vmk: esxcfg-vmknic -l
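Once the vmk is on the dvSwitch, one way to confirm jumbo frames actually pass end to end is a don't-fragment vmkping from the host console (target IP hypothetical; 8972 bytes = 9000 minus the 28-byte IP/ICMP overhead):

```shell
# -d sets don't-fragment, -s sets the ICMP payload size
vmkping -d -s 8972 10.0.0.100
```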

That's all.

Jorge Delgado, San José, Costa Rica