I'm sure I read somewhere that ESX 3.5 does not support an Active/Active SP environment - can someone please verify this?
Will it matter if my EVA SPs are set to Active/Active, or will ESX just default to Active/Passive?
Thanks
I'm currently running ESX 3.5 on an HP EVA4100 with active/active controllers, using fixed paths, and it works fine.
I'm not aware of any reason why this wouldn't be fully supported.
Thanks,
We are moving from an EVA5000 (A/P) to an EVA6000 (A/A); A/A is enabled by default on the EVA6000.
ESX has always (at least for the couple of years I've been involved) had support for Active/Active storage systems. What it didn't have was the ability to load balance storage requests for a single LUN over multiple paths.
What you may have read that triggered your train of thought is that ESX 3.5 now has "experimental" support for this type of load balancing.
The 'Round-Robin' load balancing policy introduced with ESX 3.5 is experimental, but MRU and fixed are fully supported.
Leave your EVA set to A/A. ESX does not use MPIO A/A for load balancing; it will use the paths in an A/P fashion for redundancy. If you leave everything alone, you will not have any problems, and other connected systems can still take advantage of A/A.
To clarify: ESX will benefit from Active/Active controllers. If you have two or more LUNs, they can be spread over the two controllers by using the Fixed policy.
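To illustrate the point above, here is a minimal sketch (not VMware code - the path and LUN names are made up) of how the Fixed policy behaves, and how giving two LUNs different preferred paths spreads them across two controllers:

```python
# Illustrative sketch of the Fixed multipathing policy; path/LUN names
# are hypothetical, for the example only.

def fixed_policy_path(preferred, paths_up):
    """Fixed policy: always use the preferred path while it is up;
    fail over to another live path, and fail back automatically
    once the preferred path returns."""
    if preferred in paths_up:
        return preferred
    # Fall back to the first remaining live path, if any.
    return paths_up[0] if paths_up else None

# Two LUNs, each reachable through controller A and controller B.
# Different preferred paths balance the LUNs across the controllers.
luns = {
    "LUN1": {"preferred": "vmhba1:0:1-ctrlA",
             "up": ["vmhba1:0:1-ctrlA", "vmhba2:0:1-ctrlB"]},
    "LUN2": {"preferred": "vmhba2:0:2-ctrlB",
             "up": ["vmhba1:0:2-ctrlA", "vmhba2:0:2-ctrlB"]},
}

for name, lun in luns.items():
    print(name, "->", fixed_policy_path(lun["preferred"], lun["up"]))

# If controller A fails, LUN1 simply fails over to its controller-B path:
print("LUN1 after ctrlA failure ->",
      fixed_policy_path("vmhba1:0:1-ctrlA", ["vmhba2:0:1-ctrlB"]))
```

The key contrast with MRU is the automatic failback: Fixed returns to the preferred path as soon as it comes back, which is what keeps each LUN pinned to its intended controller.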
I am working with ESX Server 3.5 and an iSCSI target.
I have two portal groups on the iSCSI target side and one target device
that is to be exposed to the ESX server through a VMkernel adapter.
On the iSCSI target side there are portalgroup1 = 192.168.10.23:3260 and portalgroup2 = 192.168.16.23:3260,
with target name iqn.1996-02.iscsitarget.com:blockdev1, which is to be exposed on the ESX server side.
There are two VMkernel adapters on the ESX server side, with IP addresses 192.168.10.3 and 192.168.16.3.
When I add this target through dynamic discovery on the ESX server and
do a rescan, it shows two targets with the same name as above.
These two targets have different SCSI paths but the same canonical path.
The output of "esxcfg-mpath -l" is as follows:
root@sp-esx-02 root# esxcfg-mpath -l
Disk vmhba33:3:0 /dev/sda (70007MB) has 2 paths and policy of Fixed
iScsi sw iqn.1998-01.com.vmware:sp-esx-02-6953ed79<->iqn.2006-07.nimbusdata:blockdev1 vmhba33:8:0 On active preferred
iScsi sw iqn.1998-01.com.vmware:sp-esx-02-6953ed79<->iqn.2006-07.nimbusdata:blockdev1 vmhba33:10:0 On
I want to know whether this is correct behavior; if not, what would the correct behavior be?
If yes, then why does it show duplicate targets for the same target, and what can be done to remove them?
Please see the attached file, which shows the duplicate entries for the target.
Please suggest.
In VC, instead of looking at "Storage Adapters", look at "Storage". This will show you the relationship between a LUN and its associated paths. The "On active preferred" and "On" states indicate that everything is good.
Thanks for the reply.
Yes, you are correct, everything is working correctly. But my concern is why it is showing duplicate targets. I have tested the same thing with the MS initiator,
and the MS initiator does not show duplicate entries even if I enable MPIO.
Please suggest.
The esxcfg-mpath command is reporting correctly. When you look at "Storage Adapters" in VC, you are seeing things from the context of the storage adapter; it is giving you the report from /proc, so each path to the target is listed as its own entry.
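To make that concrete, here is a small sketch that parses the esxcfg-mpath output quoted earlier in the thread and groups the path lines under their LUN, showing that the two "targets" are really two paths to the same canonical device:

```python
# Parse the esxcfg-mpath -l output from this thread and group paths by LUN.
# The parsing rules are a guess based on this one sample of output.

output = """\
Disk vmhba33:3:0 /dev/sda (70007MB) has 2 paths and policy of Fixed
iScsi sw iqn.1998-01.com.vmware:sp-esx-02-6953ed79<->iqn.2006-07.nimbusdata:blockdev1 vmhba33:8:0 On active preferred
iScsi sw iqn.1998-01.com.vmware:sp-esx-02-6953ed79<->iqn.2006-07.nimbusdata:blockdev1 vmhba33:10:0 On
"""

luns = {}
current = None
for line in output.splitlines():
    if line.startswith("Disk "):
        current = line.split()[1]      # the canonical name, e.g. vmhba33:3:0
        luns[current] = []
    elif current and "<->" in line:
        # The path name is the vmhbaX:Y:Z token on the iSCSI path line.
        tokens = line.split()
        path = next(t for t in tokens if t.startswith("vmhba"))
        luns[current].append(path)

for lun, paths in luns.items():
    print(lun, "has", len(paths), "paths:", paths)
```

Grouped this way, there is a single LUN (vmhba33:3:0) with two paths, one per portal group, which is exactly the redundancy you configured.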