We have two VMware ESX 3 servers connected to an active/passive MSA1000. The SAN switch at the rear of the primary controller failed. We have replaced it and verified that the firmware, config, and Fabric OS are identical on both SAN switches, and verified that both controllers are OK too. Both servers see both paths (after rescanning). Now that it is using the secondary HBA path, how do I fail it back over to the first HBA and path?
Also, as a side note: is there any issue with making the first HBA path (on both servers) Fixed (vs. MRU), so that should a failover occur, after replacing the hardware it will automagically fail back over to the primary controller/path? (Somehow this seems way too easy, and I suspect there are more steps than just making it a Fixed path.)
After reviewing the SAN guide: it says the MSA1000 should be set to MRU, so I will leave that alone.
Message was edited by:
haynespc
You should be able to log into the service console and use esxcfg-mpath to do this. Just type in the command and it will give you the options and syntax.
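For reference, a minimal sketch of the relevant service-console commands on ESX 3.x (the vmhba path and LUN IDs below are placeholders; substitute the IDs that "esxcfg-mpath --list" shows for your environment):

```shell
#!/bin/sh
# List every LUN and the state (active/on/off) of each of its paths
esxcfg-mpath --list

# Disable one specific path; --state requires both --path and --lun.
# vmhba1:0:1 is a placeholder path ID, not from the poster's setup.
esxcfg-mpath --state=off --path=vmhba1:0:1 --lun=vmhba1:0:1

# Re-enable it afterwards so it stays available as a standby
esxcfg-mpath --state=on --path=vmhba1:0:1 --lun=vmhba1:0:1
```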
esxcfg-mpath will allow me to enable/disable paths, but how do I force it to use one connection vs. the other?
Search for path thrashing here and in the san config guide: http://www.vmware.com/pdf/vi3_301_201_san_cfg.pdf
OR
http://pubs.vmware.com/vi301/san_cfg/esx_san_cfg_manage.8.34.html
Setting an active/passive array to Fixed seems like a surefire way to shoot yourself in the foot as far as VMware is concerned.
I got antsy and called support. They told me that to force the failover I would have to disable the secondary path via the VI Client, and this would force a failover.
So in retrospect, using the esxcfg-mpath command with the appropriate parameters would essentially do the same thing.
I was just thinking that may be what will do it...
I could disable the current active path using "esxcfg-mpath --state=off --path=vmhba0:1:1 --lun=vmhba0:1:1" (--state requires both --path and --lun), and it would fail over to the next available path. I could then continue disabling paths until the active path is where I want it, and finally go back and enable all the paths I disabled. Since this would only need to be done after an ESX host reboot, and after a reboot the active paths default to the same path each time, I should be able to figure out the list of commands necessary to move the paths where I need them.
Of course a Perl script with a loop and some parameters would get the job done better, but alas, a Perl expert I am not. I actually have a script written by someone at EMC, and I cannot get it to work. My next task is to find out whether, once I fail over all the paths, I need to do anything in Navisphere, maybe make sure all the LUNs are on the default storage processor. The whole point of this is so that I do not have to go into all 11 of my ESX hosts and manually set all the paths for the best load balancing. Does this sound doable with the esxcfg-mpath commands?
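A rough sketch of that loop in plain sh, for anyone wanting the script without Perl. The path/LUN IDs and the set_path_state helper name are made up for illustration; it only prints the esxcfg-mpath commands, so the output can be reviewed first and then piped to sh on the host:

```shell
#!/bin/sh
# Dry-run sketch: emit the esxcfg-mpath commands that would walk the
# active path over to the one we want. All IDs below are placeholders.

# set_path_state <on|off> <path> <lun> -- prints the command, does not run it
set_path_state() {
    echo "esxcfg-mpath --state=$1 --path=$2 --lun=$3"
}

# Disable the paths we do not want active, forcing failover off them...
for p in vmhba1:0:1 vmhba1:1:1; do
    set_path_state off "$p" vmhba0:1:1
done

# ...then re-enable them so they remain available as standbys
for p in vmhba1:0:1 vmhba1:1:1; do
    set_path_state on "$p" vmhba0:1:1
done
```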
Thanks Again,
David
Okay... esxcfg-mpath used in conjunction with esxcfg-rescan does what I need.
I have put all the command-line statements into a file, balance_paths.sh. Where do I put the file to get it to run after a reboot?
Thank You,
Big D
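One common approach on the ESX 3 service console (a sketch; the /usr/local/bin location is just an assumption, put it wherever suits you) is to call the script from /etc/rc.d/rc.local, which runs at the end of boot:

```shell
#!/bin/sh
# Copy the script somewhere permanent and make it executable
# (/usr/local/bin is an example location, not a requirement)
cp balance_paths.sh /usr/local/bin/
chmod +x /usr/local/bin/balance_paths.sh

# Have rc.local invoke it at the end of every boot
echo "/usr/local/bin/balance_paths.sh" >> /etc/rc.d/rc.local
```

Note that rc.local runs before you can be sure the storage rescan has settled, so keeping the esxcfg-rescan call at the top of balance_paths.sh (as you are already doing) is the safer order.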