brsolutions
Contributor

Connecting to an EMC CX700

Hi,

I'm new to all of this, so please bear with me. I have an EMC Clariion CX700 Fibre Channel SAN hooked up to my new Dell R710 via two Brocade 415 host bus adapters. This is direct connect, as I don't have a switch (and I'm only planning on having this one server attached to the CX700). During the installation of vSphere 4.1, I specified the drivers for the HBAs, and they do show up in my list of storage adapters. On the Navisphere side (I am running Navisphere 6.24.3.2.00), no matter what I do, I cannot see the hosts. I've rebooted the SAN and the server, and still nothing. Does anyone have any ideas as to what I should try to get the CX700 to see the HBAs?

Thanks,

Chris

mcowger
Immortal

Are you sure you have a good link to the hosts? Do the hosts see the Clariion targets? Do you have anything mapped to the hosts?






--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us
brsolutions
Contributor

As far as I can tell, I have good links. Link lights are on and reflecting the appropriate link speed. The HBAs do not see the Clariion targets, and I do not have anything mapped to them.

mcowger
Immortal

Try manually mapping something to the HBAs (without relying on the NaviAgent-discovered records)... you can mask directly to the WWPNs, along the lines of the sketch below.
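
A minimal sketch of that manual registration using Navisphere CLI (naviseccli), assuming it's installed on a management station; the SP address, group/host names, and WWNN:WWPN string are placeholders, and exact flags can vary by FLARE release:

    # Create a storage group for the ESX host
    naviseccli -h <sp_a_ip> storagegroup -create -gname ESX_R710

    # Manually register the HBA initiator against SP A, port 0
    # (-failovermode/-arraycommpath values depend on your failover setup)
    naviseccli -h <sp_a_ip> storagegroup -setpath -gname ESX_R710 \
        -hbauid <wwnn:wwpn> -sp a -spport 0 \
        -host esx01 -ip <esx_host_ip> -failovermode 1 -arraycommpath 1

    # Present array LUN 10 to the host as host LUN 0
    naviseccli -h <sp_a_ip> storagegroup -addhlu -gname ESX_R710 -hlu 0 -alu 10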






--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us
brsolutions
Contributor

Okay, that's a bit over my head. By manually mapping, do you mean I should do this through the Navisphere client, entering the WWPNs for the HBAs and seeing if they attach?

a_p_
Leadership

I don't want to stop your efforts to make this work; however, I hope you are aware that the CX700 is not supported for ESX 4.x. It's supported only up to ESX 3.5 U5, and only in "FC Switched" mode.

André

brsolutions
Contributor

Thanks, André... I'm okay with the 'unsupported' tag, as this is just a development/test system. FC Switched mode - is that an ESX setting or a SAN setting?

a_p_
Leadership

FC Switched mode - is that an ESX setting or a SAN setting?

That means you need an FC switch. Direct connect - as in your setup - is not supported.

André

mcowger
Immortal

Correct.






--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us
brsolutions
Contributor

Well, that would explain why it's refusing to see the HBAs. I guess I'll purchase a cheap switch off eBay for this purpose. Any quick recommendations for an older switch that would do the job?

brsolutions
Contributor

Just want a final confirmation, guys... I appreciate the earlier assistance. In order to get vSphere 4.1 to see my CX700, I need to go the following route:

Server -> HBA -> Fibre Channel Switch -> CX700

And in this particular case, while this will work, it is not a supported configuration. But by adding the switch as the 'middleman' in my configuration, Navisphere will now see my HBAs because I'll set up the appropriate zone, something like the sketch below?
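
From what I've been reading, the zoning on the switch would look something like this (the alias names and WWPNs are placeholders, and the exact syntax varies a bit across Fabric OS versions, the Silkworm 2800 being a very old one):

    # Zone one HBA port with one CX700 SP front-end port (placeholder WWPNs)
    alicreate "esx01_hba0", "10:00:00:05:1e:00:00:01"
    alicreate "cx700_spa0", "50:06:01:60:00:00:00:01"
    zonecreate "z_esx01_cx700", "esx01_hba0; cx700_spa0"
    cfgcreate "cfg_lab", "z_esx01_cx700"
    cfgsave
    cfgenable "cfg_lab"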

Finally, would a switch such as a Brocade Silkworm 2800 16-Port do the job?

Thanks again

mcowger
Immortal

You do not need the switch at all, and the switch will not fix the issue of not seeing the HBAs in Navisphere.

As ESX 4.1 lacks a NaviAgent daemon, you likely won't see it auto-register with the array, hence my suggestion of mapping manually. You can pull the WWPNs for that from the ESX host itself; see the sketch below.
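
A sketch of that lookup (adapter names will differ on your box):

    # On the ESX 4.1 service console: list storage adapters with their
    # fc.<WWNN>:<WWPN> identifiers
    esxcfg-scsidevs -a

    # After masking the LUN on the array, rescan the HBA
    esxcfg-rescan vmhba1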






--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us
brsolutions
Contributor

Just found out from support that this is an HBA issue (I'm using Brocade 415s). They do not support direct-attached storage, which is why their WWNs never show up in Navisphere. Based on another user's experience, the CX700 will see the HBAs as long as there is a switch involved.

I did attempt to map them manually, but that doesn't work either... it never lets me click the 'OK' button when I'm entering the details.

Off to eBay for a cheap switch.

mcowger
Immortal

Not familiar with those HBAs, but perhaps they don't support loop mode - hence my question about whether you had a good link :) You can check what the port actually negotiated with the sketch below.
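
The Brocade HBA driver package ships a CLI (bcu); this is a sketch, and the exact subcommands and port IDs may differ by driver version:

    # List adapter ports with their state and topology
    bcu port --list

    # Query adapter 1, port 0 in detail
    bcu port --query 1/0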






--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us
brsolutions
Contributor

Yep :)

From what I could tell I had a good link... oh well. Live and learn (and burn your entire day doing so!).

ToxicPirate
Contributor

The HBA adapter you have supports NPIV with up to 255 virtual ports. Here are the features of that HBA. Have you checked to make sure that you don't have a speed mismatch on the SFPs you are using?

3. Key Features & Functionality

- 500,000 IOPS per port for maximum I/O transfer rates
- 1,600 MBps throughput per port, full duplex
- Host Connectivity Manager (HCM) device management and Brocade Command Line Utility (bcu) tools
- Management APIs for integration with Brocade Data Center Fabric Manager (DCFM) and other management frameworks
- N_Port ID Virtualization (NPIV) providing 255 virtual ports per physical port
- End-to-end beaconing between an HBA port and the switch port to which it connects (requires Brocade Fabric OS 6.1.0a or above)

The Brocade HBAs support the following host and fabric specifications:

- Small form-factor pluggable (SFP) optics for enhanced serviceability
- Eight lanes on the PCIe connector running at 250 MHz
- Fibre Channel Security Protocol (FC-SP), offering device authentication through key management
- RoHS-6
- Point-to-point topology
- Fabric-based boot LUN discovery
- FC-SP (authentication)
- FC-Ping, FC Traceroute
- SFP diagnostics, loopback, echo
- 255 virtual ports via NPIV (VMware, Linux, Windows)
- Management and APIs
  - HCM version 1.0
    - Standalone centralized management
    - Out-of-band (IP) management of HBAs
    - Loose integration with EFCM (SMI-S, Syslog, Call Home); ability to launch a console on a given HBA
    - Supported on Windows Server 2003, Windows Server 2008, Red Hat Linux 4.0 & 5.0, SUSE Linux 9.0 and 10.0, Solaris 10, and VMware ESX 3.5
    - Launch HCM from EFCM 9.7 (note: the path location needs to be corrected to \Program Files\BROCADE\FCHBA)
    - HCM/CLI support for 8Gb HBAs
    - Local/remote management, multiple HBAs
    - NPIV with VMware Virtual Center
    - In-band (FDMI) support
    - SNIA HBA API v2.0, FDMI
- Demo features
  - N_Port Trunking, 32 Virtual Channels (QoS), Target Rate Limiting, Direct Attach
- Supported OS
  - Microsoft Windows Server 2003 R2/SP2 (x86, EM64T, and AMD64), Windows Server 2008 (x86, EM64T, and AMD64)
    - Note: Windows Server 2003 SP2 requires hotfix 932755 from Microsoft, otherwise a system crash will occur; no support for SP1 or earlier
  - SUSE Linux Enterprise Server 9.4 (x86, EM64T, and AMD64), SUSE Linux Enterprise Server 10.1 (x86, EM64T, AMD64, and IA64)
  - Red Hat Enterprise Linux 4.6 (x86, EM64T, AMD64, and IA64), Red Hat Enterprise Linux 5.1 (x86, EM64T, AMD64, and IA64)
  - Sun Solaris 10.5 (x86, EM64T, AMD64), Solaris 10.5 (SPARC)
  - VMware ESX 3.5 U2 (x86, EM64T, AMD64)

Mau201110141
Contributor

Hi Chris,

I had a very similar situation a few months ago, where everybody said I could not direct-connect my ESX server to my SAN without a switch in the middle. I was getting very discouraged when my support contact from Ardent helped me check one last thing. We rebooted the ESX host, went into the QLogic BIOS, and found a setting to change. I don't recall the exact name, but we switched it from mode 1 to 2 (or vice versa). What had happened is that the card was configured to talk only through a switch. When we changed this option, the card immediately recognized the target (my SAN), and I got it all working nicely, no switches. My ESX has been running this way for some 3 months.
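
From memory, the setting lives here on my QLogic card (menu labels may differ by BIOS version, and the Brocade BIOS will organize things differently):

    Ctrl-Q at boot -> Fast!UTIL
      Configuration Settings -> Adapter Settings -> Connection Options
        0 = Loop only (what direct attach to a CLARiiON front-end port wants)
        1 = Point to point only (fabric/switch attach)
        2 = Loop preferred, otherwise point to point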

Yes, my configuration is not the same as yours (ESX 3.5 and a CX500 SAN), but the folks from VMware... they SWEAR this is not supported and will not work... but it did!

If it's not too late, I think you could give it one last shot and review the BIOS options on your Brocade HBA.

Good luck,

Mau
