Have you been able to use vimsh to change standby NICs into active NICs?
I can use vimsh to set NICs from active to standby, but I can't do the reverse: take a standby NIC and make it active.
Each time I try to make a standby vmnic active, I'm greeted with "A specified parameter was not correct."
Thank you,
Jas
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
Hi Jason,
I've followed your commands above, and it does seem to work when you add another NIC to the switch.
Just messing about, however: if you add a 3rd NIC, that one goes into standby... vimsh is a bit strange...
My build scripts have always added 2 NICs as 'active' without any of the commands above.
The networking guys wanted me to change it to active/standby, hence the research into this command.
I have tried adding the NICs normally via SSH on a running ESX box, and it does do what you say: put the extra NIC into standby.
So my question is, have you tried just the add-NIC command in a build, and/or just via SSH on a running box?
I'm gunning for those 10 points dude.. hahaa
hey jason,
I've been able to do this no problem. I have noticed that I've had to include all NICs on a vSwitch in the command for it to work 100%.
Both
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic0 --nicorderpolicy-standby=vmnic1 vSwitch0"
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic1 --nicorderpolicy-standby=vmnic0 vSwitch0"
have worked for me.
Oh, and don't forget to run "service mgmt-vmware restart" and sleep 20 seconds beforehand.
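A minimal sketch of that restart step, using the example switch and vmnic names from the commands above (the 20 seconds is the figure mentioned here, not a guaranteed minimum):

```shell
## Restart the management service so vimsh sees the current host config,
## then give it time to initialize before issuing setpolicy commands.
service mgmt-vmware restart
sleep 20
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic0 --nicorderpolicy-standby=vmnic1 vSwitch0"
```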
Good Luck
Thanks Yattong!
Wow, what a screwy tool vimsh is. I've been working with it for the past 5 hours trying to get a few more build items automated.
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
In this case, I have 2 vmnics assigned to vm_switch. One is active, one is standby. I want both to be active. I want none to be standby. I could not figure out how to solely use vimsh to accomplish this. Every variation I tried resulted in an error.
Do you have a bench-tested vimsh command that would accomplish this task? I couldn't figure it out. Changing and refreshing the policy, and then linking the vmnics afterwards, seemed to be the key, and that's what Xtravirt tells us.
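One untested variation that might be worth a try, going by the syntax quoted above: list every uplink in the active order and pass no standby list at all, so neither vmnic is left in standby (vm_switch and the vmnic names follow the example in this post):

```shell
## Untested sketch: name both uplinks as active and omit the
## standby list entirely, so nothing is placed in standby.
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic0,vmnic1 vm_switch"
```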
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
You'll have to settle for 16 points instead of 10 points.
I'm going to do a little more experimenting using the newer ESX 3.5.0 vmware-vim-cmd wrapper as suggested by Gavin from Xtravirt. Basically the simple method you guys are talking about just isn't working with my test box for some unknown reason 😐 At least I have a complex scripted workaround.
Thanks again,
Jas
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
I leveraged a simple loop command to wait for the mgmt-vmware service.
## mgmt-vmware service seems to take a while to initialize; wait patiently for it.
until [ $(vmware-vim-cmd /hostsvc/runtimeinfo | grep -vc "Failed to connect") -ge 1 ]
do
    logger "post-install: Sleeping for 5 seconds waiting on mgmt-vmware service..."
    sleep 5
done
I also configure my vSwitch0 like this to avoid the standby issue.
## Delete existing vswif0, vSwitch0:
esxcfg-vswif --del vswif0
esxcfg-vswitch --delete vSwitch0
## Create vSwitch0, configure uplink vmnic0, vmnic1:
esxcfg-vswitch --add vSwitch0:256
esxcfg-vswitch --add-pg="Service Console" vSwitch0
esxcfg-vswitch --pg="Service Console" --vlan=108 vSwitch0
esxcfg-vswitch --pg="Service Console" --add-pg-uplink vmnic0 vSwitch0
esxcfg-vswitch --pg="Service Console" --add-pg-uplink vmnic1 vSwitch0
esxcfg-vswitch --link vmnic0 vSwitch0
esxcfg-vswitch --link vmnic1 vSwitch0
This is all with 3.5, of course; some of the commands were not available in 3.0 if I recall correctly. The standby-mode bug seems new to 3.5, but running the --add-pg-uplink steps before the --link steps solved the issue in my environment.
Stumpr, your reply is very helpful. However, I will admit I am not a fan of your second script, because you are configuring your port group(s) to override all virtual switch settings. It accomplishes the objective, but with a caveat.
Thank you,
Jas
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
Sorry, I should have prefaced it in more detail.
Those are snippets from a post-install script I create in a Kickstart %post process.
If you are doing this after an installation, I would do a few things:
1. Make sure the box is in maintenance mode. You can check this with something like:
if [ $(vmware-vim-cmd /hostsvc/runtimeinfo | grep -c "inMaintenanceMode = true") -eq 1 ]
then
    # safe to proceed with the network changes
fi
2. Unlink your vmnics, relink them as I did.
However, I wouldn't be comfortable with that unless I had some out-of-band access to the VMware host console in case it didn't work. You might want to use the vimsh options above, though I've never tested them. I did run into some mention of them when I hit the standby bug during a Kickstart install, which the solution I have works well for. But I'm also assuming I don't have an existing vSwitch or vswif interface layout to retain.
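A sketch of what the unlink/relink in step 2 might look like, mirroring the esxcfg-vswitch sequence posted earlier (vSwitch0, "Service Console", and the vmnic names are just the example values from that script):

```shell
## Unlink both uplinks, then add them as port-group uplinks
## before relinking, so both come back in active mode.
esxcfg-vswitch --unlink vmnic0 vSwitch0
esxcfg-vswitch --unlink vmnic1 vSwitch0
esxcfg-vswitch --pg="Service Console" --add-pg-uplink vmnic0 vSwitch0
esxcfg-vswitch --pg="Service Console" --add-pg-uplink vmnic1 vSwitch0
esxcfg-vswitch --link vmnic0 vSwitch0
esxcfg-vswitch --link vmnic1 vSwitch0
```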
Sort of figured you were doing this in a Kickstart install. Didn't think you might just be adding new NICs via a script post-installation.
You could fool around with the add-pg-uplinks. For some reason I had issues when I tried to just add the second uplink NIC, even if I did the pg-uplink first. But I didn't test thoroughly enough to find out whether another method would have worked.
stumpr.
Well the script I posted above isn't working anymore. Additional NICs are once again going into standby. Back to the drawing board.
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
After another hour of testing, this script has been working well.
I borrowed a shortcut on line 1 from Stumpr to create the switch and set the number of ports all in one shot.
I compared my non-working script with Steve Beaver's working script to flesh out the rest.
I've changed switch names, IP addresses, and VLAN numbers to protect the innocent.
esxcfg-vswitch -a switch_for_vms:128 #creates a vswitch with 128 ports (120 usable)
esxcfg-vswitch -L vmnic2 switch_for_vms #link vmnic2 for all servers
esxcfg-vswitch -L vmnic3 switch_for_vms #link vmnic3 for BL45P blade servers
esxcfg-vswitch -L vmnic4 switch_for_vms #link vmnic4 for DL585 servers
esxcfg-vswitch -A 172.26.150.0_network switch_for_vms
esxcfg-vswitch -p 172.26.150.0_network switch_for_vms -v 199 #vlan 199
esxcfg-vswitch -U vmnic2 switch_for_vms #unlink vmnic2 for all servers
esxcfg-vswitch -U vmnic3 switch_for_vms #unlink vmnic3 for BL45P blade servers
esxcfg-vswitch -U vmnic4 switch_for_vms #unlink vmnic4 for DL585 servers
esxcfg-vswitch -L vmnic2 switch_for_vms #link vmnic2 for all servers
esxcfg-vswitch -L vmnic3 switch_for_vms #link vmnic3 for BL45P blade servers
esxcfg-vswitch -L vmnic4 switch_for_vms #link vmnic4 for DL585 servers
esxcfg-nics -a vmnic2 #set vmnic2 auto/auto
esxcfg-nics -a vmnic3 #set vmnic3 auto/auto
esxcfg-nics -a vmnic4 #set vmnic4 auto/auto
service mgmt-vmware restart
sleep 20
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active vmnic2 switch_for_vms" #set policy for all servers
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active vmnic2,vmnic3 switch_for_vms" #set policy for BL45P blade servers
vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active vmnic2,vmnic4 switch_for_vms" #set policy for DL585 servers
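After a script like the one above runs, it may help to verify that the links actually took before trusting the policy; the standard listing commands are enough for a quick check:

```shell
## List every vSwitch with its uplinks and port groups,
## then list the physical NICs with their link state and speed.
esxcfg-vswitch -l
esxcfg-nics -l
```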
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
Hi Jason,
I don't know what is so different in your setup.
I have done exactly the same steps, but my script still fails with the "A specified parameter was not correct." message.
I'm starting to lose it... X-(
I get the impression that VMware has forgotten to mention "something" about this vimsh command.
Regards
vimsh is a fickle beast. The script I spent many hours on, and proved to work on several boxes, failed the other day on a server with identical hardware. It doesn't make sense to me. I've given up on troubleshooting it for now; the hours I've already spent are eating away at the return on scripting, automation, and consistency, to the point that I will eventually have spent more time troubleshooting the script than I would have spent performing the steps manually during each installation.
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
I am starting to wonder.
I made these scripts at another customer site as well, to use with Altiris for deploying ESX 3.5.
I remember deploying the server successfully with 2 NICs per vSwitch, both active after install.
One thing... I haven't copied those scripts yet... so I am reinventing them at a different customer site.
The only thing that differs on this site is that I am using the ESX 3.5 Update 1 installation.
I have not tested it yet with the latest patches.
Did you?
Regards,
Martijn
Take a look at my post earlier. I've been using the pg uplink options to get the NICs into full active mode (not standby). I didn't spend much time on it, but I recall having some issue with vimsh nic policy setting. However, doing the vSwitch creation as I posted earlier has been 100% reliable across several dozen servers in 3 environments for me.
However, this is from an install point of view. You don't have to delete the vSwitch, but you do have to unlink all the physical NICs. I always have out-of-band management in most of my installs, so I'm never locked out. But after testing, I have had 0 issues with loss of connectivity. Of course, these steps aren't something you do with running VMs; maintenance mode or fresh installs only.
Not sure I follow you. What do you mean by overriding the virtual switch settings? Do you mean because I'm creating the vSwitch from scratch? Are you doing this in an install process or post-install?
Jason,
Any chance your vimsh commands were failing because the mgmt-vmware service hadn't fully initialized? I ended up using a wait loop in my post-install scripts, since the timing of that service was pretty unpredictable. I found in my testing that vimsh / vmware-vim-cmd would return with no data.
I don't know why it fails when I don't expect it to fail. If you look at my final version of the script above, I have the 'sleep 20' embedded where a timeout is needed.
I'll second the theory that there may be differences in vimsh between different ESX builds.
Overall, I find vimsh very powerful and will ultimately prefer it over sed and other esxcfg-xxx commands if vimsh can mature some more and encompass all ESX configuration touch points.
Jas
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
I'll be honest: in some of my installs 20 seconds just isn't enough (I've seen upwards of 60+). I got tired of trying to find the "perfect sleep timer," so I just created a loop to keep checking until the mgmt-vmware service is actually fully initialized. It starts immediately, but vimsh/vmware-vim-cmd won't be useful until the full VIM environment is initialized. Try the snippet below, which has worked for me so far. You can remove the logger line; I just like a little more info in /var/log/messages to debug my post-install script. You can also adjust the sleep statement to 1 or 2 if you want a tighter wait loop.
## mgmt-vmware service seems to take a while to initialize; wait patiently for it.
until [ $(vmware-vim-cmd /hostsvc/runtimeinfo | grep -vc "Failed to connect") -ge 1 ]
do
    logger "post-install: Sleeping for 5 seconds waiting on mgmt-vmware service..."
    sleep 5
done
If that doesn't fix the vimsh items, I'm not sure what else you can do. I ended up using the -M (add pg uplink) option to esxcfg-vswitch (I believe these were added in 3.5). I will have to say it's worked well for me. I would prefer the esxcfg-* tools over sed, but I will confess to having some seds in my custom RPMs and in a few other simple scripts.
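For reference, the -M short form mentioned here would look something like this (switch, port group, and vmnic names are the example values used earlier in the thread; untested outside my note-taking):

```shell
## -M is the short form of --add-pg-uplink; -p names the port group.
esxcfg-vswitch -M vmnic0 -p "Service Console" vSwitch0
esxcfg-vswitch -M vmnic1 -p "Service Console" vSwitch0
```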
Thanks for that script snippet. I do prefer that over the sleep 20. As it turns out, my older (slower) boxes were taking longer than 20 seconds for the mgmt-vmware service to come back up and the script exposed that.
In the future I have to remember to get my vimsh scripts converted to the more preferred/proper vmware-vim-cmd wrapper.
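Since vmware-vim-cmd is a wrapper around vimsh, the conversion should mostly be a matter of dropping the "-n -e" and the surrounding quotes; an untested sketch against the script above:

```shell
## vimsh form:
##   vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic2 switch_for_vms"
## equivalent vmware-vim-cmd form:
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic2 switch_for_vms
```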
Jas
[i]Jason Boche[/i]
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]