VMware Cloud Community
fun_E_mahn
Contributor

Fibre JBOD with ESX 3.01

Hello,

I would like to set up a Fibre JBOD cabinet as a SAN under ESX 3.01 and have not been successful despite spending many hours researching and troubleshooting, so any insight would be much appreciated.

Please see the list of pertinent information below:

- The enclosure is directly connected to a QLA2200 Fibre HBA on the host.

- HBA BIOS (FibreUtil) sees all the drives in the cabinet.

- HBA is flashed to the latest version.

- ESX loads the HBA driver successfully and VirtualCenter sees the HBA and its WWN, just no LUNs.

- In the HBA BIOS all the drives are assigned to LUN 0 but have unique Port IDs and WWNs.

- I installed QLogic's SANsurfer CLI on the host and it sees the drives when doing a device listing, but I am unable to set up persistent bindings... when I attempt to, it comes back saying "binding failure". (I am thinking this is because SANsurfer is designed to work with the QLogic driver, not the VMware QLogic driver.)

My objective is to have a storage device available in VirtualCenter for each of the drives in the cabinet. Any insights would be greatly appreciated!

Cheers.

7 Replies
BUGCHK
Commander

It is possible that these Fibre Channel disks are missing some identification data that the VMkernel needs to determine if a SCSI LUN is just another path or a different device. Did you already check the logs, e.g. /var/log/messages ?
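For what it's worth, here is a sketch of that log check, run from the service console. The log path comes from the post above; the search patterns are just my guess at typical qla2200/vmhba probe lines, not verified output from this host.

```shell
# Sketch: pull the most recent HBA/SCSI-related lines out of the
# service console log. Patterns are illustrative guesses.
scan_fc_log() {
    log=$1
    if [ -r "$log" ]; then
        # keep only lines that look related to the FC HBA or SCSI probing
        grep -iE 'qla2|vmhba|scsi' "$log" | tail -n 20
    else
        echo "cannot read $log"
    fi
}

scan_fc_log /var/log/messages
```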

fun_E_mahn
Contributor

I came across a post that seems to hit the nail on the head:

http://www.vmware.com/community/thread.jspa?hreadID=77407&tstart=0

BUT... I am now having another issue, with the new driver hanging.

I followed the KB article prescribed in the above posting (http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1560391) and installed the qla2200_7xx driver (I extracted it from the same ESX 3.01 CD used to install ESX). However, even after I remove the old qla2200_707 driver via rpm -e, vmkload_mod -l still shows qla2200_707 as being loaded.
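A sketch of a quick sanity check for that mismatch, run from the service console (package/module names are taken from this thread; I'm assuming plain `rpm -qa` and `vmkload_mod -l` as used above). Note that `rpm -e` only removes the files on disk, so a module the VMkernel has already loaded stays resident until the next reboot.

```shell
# Sketch: compare the installed driver package against the module
# actually resident in the VMkernel. Either listing may come back
# empty; that is itself the answer.
check_driver() {
    name=$1
    rpm -qa 2>/dev/null | grep -i "$name"        # installed packages
    vmkload_mod -l 2>/dev/null | grep "$name"    # resident modules
    return 0
}

check_driver qla2200
```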

So, I then rebooted and it hangs during startup when trying to load the new qla2200_7xx... after about 5 minutes the console screen turns colour and it's looking for a dump file location.

I have verified the integrity of the driver and validated that my card is indeed the 1Gb QLA2200.

Is there something I have to do on the Linux Kernel Module end of things perhaps?

Any ideas would be great!!

Cheers

bertdb
Virtuoso

You'd be well advised to try ESX 3.0.2.

Does the SANsurfer software support ESX? If it doesn't (like: it only supports RHEL3), it probably doesn't do any good on the service console.

"the console screen turns color", that's got to be a Purple Screen Of Death (aka PSOD). A VMkernel crash.

fun_E_mahn
Contributor

OK... I resolved my driver issue and now I can see all the drives in VirtualCenter, but only one of them shows as having a capacity... the rest show as having a 0.00 B capacity.

I can see and access the drives from the host via SANsurfer, and mount them.

Any ideas?

oschistad
Enthusiast

If the LUNs you are exposing to your ESX have a size of 2TB or more, they will be unusable as a VMFS volume. If memory serves me correctly, these are shown as having zero size. Or I may be misremembering...

fun_E_mahn
Contributor

The drives are all 73GB Seagates.

Can anybody confirm that my overall configuration and objective are plausible?

Configuration:

QLA2200F connected to a Dell PowerVault 220F cabinet (no RAID) via a copper HSSDC cable.

Objective:

To use all the drives within the cabinet as separate LUNs. (Even though the HBA sees everything at LUN 0, each drive has a separate Port ID and WWN.)

Any input would be greatly appreciated.

Cheers

BUGCHK
Commander

Well, that is correct. Each FC disk's FC port is an individual SCSI target. The addressable entity, however, is a LUN, so each individual FC disk drive responds at LUN address 0.

It works the same way on a parallel SCSI bus, by the way.
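To illustrate what that looks like from the ESX side: ESX 3.x names each path as adapter:target:LUN, so with a JBOD where every disk is its own FC target, the target index varies while the LUN stays 0. A sketch (the adapter name vmhba1 and the disk count are illustrative, not from this thread):

```shell
# Sketch: generate the adapter:target:lun names you would expect
# to see for a JBOD where each disk is its own FC target at LUN 0.
list_jbod_paths() {
    adapter=$1
    count=$2
    target=0
    while [ "$target" -lt "$count" ]; do
        echo "${adapter}:${target}:0"
        target=$((target + 1))
    done
}

list_jbod_paths vmhba1 4
```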

I would check /var/log/messages and see what it says when the VMkernel polls the FC disk drives.
