Let's keep it simple. I have an IBM BladeCenter.
1 blade has 2 HBAs.
Each HBA goes into a separate switch
I have 2 SANs, both HDS: one NSC and one AMS. Each has 2 connections into each switch.
Whatever I had on HBA1, I set up the same LUN configuration on HBA2 with the specific WWID. Great, this worked. The paths showed up with the same canonical path, and when I right-clicked on any of the LUNs it would display my active path and my standby path. No issues here.
I did the same setup with the LUN mappings on my AMS. When I scanned, the LUNs did show up, but with different canonical paths, and when I right-click on a LUN it would only show me the one path and not the standby path. Is this OK, or am I missing a step here?
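For what it's worth, on the ESX 3.x service console you can check this without clicking through the VI client: `esxcfg-mpath -l` lists every LUN with its paths. Below is a minimal sketch that flags single-path LUNs from a captured listing; the sample output is an approximation of the ESX 3.x format for illustration, not taken from your host.

```shell
#!/bin/sh
# Flag LUNs that only have one path in `esxcfg-mpath -l` output.
# On a real ESX 3.x host you would capture it with:
#   esxcfg-mpath -l > /tmp/mpath.txt
# The sample below approximates that format for illustration.
cat > /tmp/mpath.txt <<'EOF'
Disk vmhba1:0:1 /dev/sdb (10240MB) has 2 paths and policy of Most Recently Used
 FC 21:00:00:14:5e:25:38:40 vmhba1:0:1 On active preferred
 FC 21:00:00:14:5e:25:38:41 vmhba2:0:1 On
Disk vmhba1:0:2 /dev/sdc (10240MB) has 1 paths and policy of Most Recently Used
 FC 21:00:00:14:5e:25:38:40 vmhba1:0:2 On active preferred
EOF
# Any LUN reporting "has 1 paths" is missing its standby path
grep '^Disk.*has 1 paths' /tmp/mpath.txt
```

A LUN that was only presented through one AMS port shows up here with a single path, which matches what the client is displaying.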
Here are some screenshots.
On the AMS there are 4 ports
0A, 0B, 1A, 1B
On both 0 ports I put the 4 WWNs of HBA1.
Oh, but we decided to put 0A on SW1 and 0B on SW2
(advice from ngrundy:
SW 0-1 - AMS Port A - Controller 0
SW 0-2 - AMS Port B - Controller 1
SW 1-1 - AMS Port A - Controller 1
SW 1-2 - AMS Port B - Controller 0
)
- so that cannot be correct!
On both 1 ports I put the 4 WWNs of HBA2,
and the same here: 1A -> SW2, 1B -> SW1.
That is the issue I mentioned above (the correct connections of the HBAs).
That means both 0 ports shouldn't have the same HBAs (and the same goes for the 1 ports).
What you should post here are the FC addresses of your SPs.
So I will try again, hopefully this weekend. Let me give some more information, because I think I missed a step.
Blade 5 – HBA1 | 21:00:00:14:5E:25:38:40 |
Blade 5 – HBA2 | 21:00:00:14:5E:25:38:41 |
Blade 6 – HBA1 | 21:00:00:14:5E:25:4F:E8 |
Blade 6 – HBA2 | 21:00:00:14:5E:25:4F:E9 |
Blade 7 – HBA1 | 21:00:00:14:5E:25:25:06 |
Blade 7 – HBA2 | 21:00:00:14:5E:25:25:07 |
Blade 8 – HBA1 | 21:00:00:14:5E:25:4F:D0 |
Blade 8 – HBA2 | 21:00:00:14:5E:25:4F:D1 |
0A - HBA 2 mapped
0B - HBA 1 mapped
1A - HBA 1 mapped
1B - HBA 2 mapped
That looks good - all HBA1s on the same switch, and all HBA2s on the second?
Yes, HBA1s on switch 0,
HBA2s on switch 1.
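If it helps to double-check, the point of that cabling is that each AMS controller stays reachable from both switches, so losing either a switch or a controller still leaves a path. Here is a small sketch that encodes the port/switch assignments from this thread and counts switches per controller; the port ownership is an assumption (controller 0 owning 0A/0B, controller 1 owning 1A/1B).

```shell
#!/bin/sh
# Sanity-check the cabling: each controller should be visible from both
# switches. Assignments are taken from the thread; ownership is assumed
# (0A/0B on controller 0, 1A/1B on controller 1).
cat > /tmp/cabling.txt <<'EOF'
0A ctrl0 switch1
0B ctrl0 switch0
1A ctrl1 switch0
1B ctrl1 switch1
EOF
for c in ctrl0 ctrl1; do
  n=$(awk -v c="$c" '$2 == c { print $3 }' /tmp/cabling.txt | sort -u | wc -l)
  echo "$c reachable from $((n)) switch(es)"
done
```

With this layout both controllers come out at 2 switches, which is the property the criss-cross cabling is buying you.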
So I tried again today and these were my results.
1. When I changed the host mode to Standard I couldn't see any LUNs; I can only see them in OpenVMS mode.
2. For ports 0A and 0B I could see all the WWNs of HBA1 and HBA2 listed.
3. For ports 1A and 1B it would only allow me to select the WWNs of HBA2, plus 1 WWN from HBA1 (Blade 7 – HBA1, 21:00:00:14:5E:25:25:06).
I did this with all 4 ESX servers powered off. I tried refreshing the SAN config as well, but no luck.
Maybe I have to power on all the ESX boxes without any WWNs assigned on the SAN side, then power them down, and then I would see the correct WWNs?
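Before resorting to power cycles, it may be worth rescanning from the service console; `esxcfg-rescan` exists on ESX 3.x, though the vmhba names below are assumptions for these blades. The sketch is written dry-run style (the real command commented out) so it is harmless to run anywhere.

```shell
#!/bin/sh
# Rescan both HBAs from the service console (dry run: the real command is
# commented out because it only exists on an ESX host). vmhba1/vmhba2 are
# assumed names; check yours with `esxcfg-mpath -l`.
for hba in vmhba1 vmhba2; do
  echo "rescanning $hba"
  # esxcfg-rescan "$hba"
done > /tmp/rescan.log
cat /tmp/rescan.log
```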
I think you made a small mistake somewhere, but with so little information it will not be possible to find it.
As I wrote, post more screenshots from your AMS storage here (your host groups, LUNs, controller configs, etc.), or open a call with VMware - but this would be a storage issue, not ESX.
Here are some screenshots.
Regards,
Ron Gill, BTech
Systems & Support Lead
Cummins Western Canada
D: 604-882-5787
C: 604-309-9241
christianZ <communities-emailer@vmware.com>
03/14/2009 01:08 PM
To: ron.gill@cummins.com
Subject: New message: "Trying to multipath to Hitachi AMS SAN"
http://communities.vmware.com/message/1199055#1199055
Profile: http://communities.vmware.com/people/christianZ
Message:
I would suggest starting with a simple config - one test LUN, mapped through only 0A (switch 0/hba0) and 1A (switch 1/hba1) - as mentioned in this document:
www.hds.com/assets/pdf/vmware-multipathing-recommendations-for-hitachi-modular-storage-systems.pdf
They (HDS) only use ports 0A and 1A (or 0B/1B) for multipathing. In addition, I see in your config one RAID group with some LUNs, but the LUNs don't have the same owning controller as the RAID group.
The "standard" config is the one mentioned by all the sources, so I think you should configure it that way.
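To make that suggested starting point concrete: one test LUN, presented only through 0A (reached from switch 0 by the first HBA) and 1A (reached from switch 1 by the second HBA), gives exactly two independent paths. A tiny sketch of that mapping, borrowing the Blade 5 WWNs from the table above:

```shell
#!/bin/sh
# The simple config from the HDS paper: one test LUN mapped only through
# 0A (fabric 0, first HBA) and 1A (fabric 1, second HBA). The WWNs are
# the Blade 5 HBAs from the table earlier in the thread.
cat > /tmp/simple_map.txt <<'EOF'
0A switch0 21:00:00:14:5E:25:38:40
1A switch1 21:00:00:14:5E:25:38:41
EOF
echo "paths to the test lun: $(($(wc -l < /tmp/simple_map.txt)))"
```

Once that one LUN shows an active and a standby path, you can grow the config back out port by port.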
Your problem could be that after changing some of the configs you will need to resignature the volumes (your ESX host won't see any datastores any more); in that case you have to re-register all the VMs.
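If the datastores do disappear after the reconfiguration, the ESX 3.x recipe is the `LVM.EnableResignature` advanced setting followed by a rescan. The commands below are the real ESX 3.x ones, but wrapped in a dry-run helper so the sketch runs anywhere; swap the helper for one that actually executes on a real host.

```shell
#!/bin/sh
# Resignature recipe for ESX 3.x (dry run). On a real host, replace the
# run() helper with one that actually executes: run() { "$@"; }
run() { echo "+ $*" >> /tmp/resig.log; }
: > /tmp/resig.log
run esxcfg-advcfg -s 1 /LVM/EnableResignature   # allow VMFS resignaturing
run esxcfg-rescan vmhba1                        # rescan so volumes reappear
run esxcfg-rescan vmhba2
run esxcfg-advcfg -s 0 /LVM/EnableResignature   # switch it back off after
cat /tmp/resig.log
```

After the volumes come back with new signatures, the VMs on them have to be removed from and re-added to the inventory, as christianZ notes.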