I've got ESX Server 3.5 Standard and am trying to attach iSCSI storage via a QLA4050c HBA. I have configured the HBA with IP 10.0.0.8 and, in ServeRAID, allocated a LUN to the IQN of the HBA shown in VI. The Dynamic and Static Discovery tabs both have the iSCSI server's IP address. When I do a rescan, no targets are found. I know I must be making some kind of boneheaded mistake but can't figure it out.
The firewall blocks iSCSI by default. Go into Configuration -> Security Profile and check Software iSCSI Client. It took me about 10 installs before I could remember to do this.
Also, make sure you can reach the SAN with both vmkping and ping. If you changed the initiator node name, a reboot was required (on 3.0.x; not sure about 3.5).
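For reference, the same checks can be run from the ESX 3.x service console. This is only a sketch using the standard esxcfg-* tools; 10.0.0.144 is the SAN address that appears later in this thread, so adjust the addresses to your environment:

```sh
# Open the ESX firewall for the software iSCSI client and enable it.
# (Only needed for the SOFTWARE initiator, not a hardware HBA.)
esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e

# Verify VMkernel connectivity to the SAN
vmkping 10.0.0.144

# Verify service console connectivity to the SAN
ping -c 2 10.0.0.144
```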
@bhadzik - your suggestion only applies when using the SOFTWARE iSCSI initiator. The poster clearly states he is using a hardware HBA.
If you found this or any other post helpful, please consider using the Helpful/Correct buttons to award points.
Yes, it is a hardware initiator. vmkping gives the following:
vmkernel stack not configured!
ping gives the following:
PING 10.0.0.144 (10.0.0.144) 56(84) bytes of data.
64 bytes from 10.0.0.144: icmp_seq=0 ttl=64 time=0.698 ms
64 bytes from 10.0.0.144: icmp_seq=1 ttl=64 time=0.290 ms
If I try to ping the IP address of the HBA from another physical machine on the network, it times out and the HBA is not found. I also can't connect to the HBA with SANsurfer from another machine.
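A side note on the "vmkernel stack not configured!" message above: it just means there is no VMkernel networking port on the host, which is expected here - a hardware HBA like the QLA4050c has its own TCP/IP stack, so vmkping doesn't apply to it. If you did want vmkping to work (e.g. for a software initiator), a VMkernel port can be added from the service console; a hedged sketch, where vSwitch0 and 10.0.0.9 are assumed example values:

```sh
# Create a port group named "VMkernel" on an existing vSwitch (example: vSwitch0)
esxcfg-vswitch -A VMkernel vSwitch0

# Attach a VMkernel NIC with an example IP on the iSCSI subnet
esxcfg-vmknic -a -i 10.0.0.9 -n 255.255.255.0 VMkernel
```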
Normally you only need to configure the target IP address under Dynamic Discovery - the static targets will then appear automatically.
You can rescan your iSCSI HBA and check /var/log/vmkernel for any warnings or errors. You can also check whether any iSCSI targets are visible in the HBA's BIOS by going into its boot configuration.
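From the service console, that rescan-and-check loop looks roughly like this (vmhba1 is assumed to be the QLA4050c here - confirm the adapter name under Storage Adapters in the VI client):

```sh
# Rescan the iSCSI HBA for new targets/LUNs (adjust vmhba1 to your adapter)
esxcfg-rescan vmhba1

# Watch the VMkernel log for warnings or errors during the rescan
tail -f /var/log/vmkernel

# Or pull out just the recent warnings afterwards
grep -i warning /var/log/vmkernel | tail -20
```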
/var/log/vmkernel shows this several times
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.372 cpu0:1037)ScsiScan: 395: Path 'vmhba0:C0:T0:L0': Vendor: 'IBM-ESXS' Model: 'DTN018C1UCDY10F ' Rev: 'S29C'
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.372 cpu0:1037)ScsiScan: 396: Type: 0x0, ANSI rev: 3
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.385 cpu0:1037)ScsiScan: 395: Path 'vmhba0:C0:T1:L0': Vendor: 'IBM-ESXS' Model: 'PYH146C3-ETS10FN' Rev: 'RXQN'
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.385 cpu0:1037)ScsiScan: 396: Type: 0x0, ANSI rev: 4
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.397 cpu0:1037)ScsiScan: 395: Path 'vmhba0:C0:T8:L0': Vendor: 'IBM ' Model: '25P3495a S320 1' Rev: '1 '
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.397 cpu0:1037)ScsiScan: 396: Type: 0x3, ANSI rev: 2
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.397 cpu0:1037)ScsiUid: 754: Path 'vmhba0:C0:T8:L0' does not support VPD Serial Id page.
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.397 cpu0:1037)ScsiUid: 781: Path 'vmhba0:C0:T8:L0' does not support VPD Device Id page.
Jan 15 08:40:32 Artemis vmkernel: 4:13:13:25.397 cpu0:1037)ScsiScan: 516: Path 'vmhba0:C0:T8:L0': No standard UID: Failure
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba0:C0:T0:L0 : 0x0 0x3 0x80 0x83 0xc4
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Device id info for vmhba0:C0:T0:L0: 0x1 0x3 0x0 0x8 0x50 0x5 0x7 0x67 0x16 0xc2 0xed 0xb4
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Id for vmhba0:C0:T0:L0 0x50 0x05 0x07 0x67 0x16 0xc2 0xed 0xb4 0x44 0x54 0x4e 0x30 0x31 0x38
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba0:C0:T1:L0 : 0x0 0x3 0x80 0x81 0x83 0xc0 0xc4 0x0
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Device id info for vmhba0:C0:T1:L0: 0x2 0x1 0x0 0x20 0x48 0x49 0x54 0x41 0x43 0x48 0x49 0x20 0x48 0x55 0x53 0x31 0x30 0x33 0x30 0x31 0x34 0x46 0x4c 0x33 0x38 0x30 0x30 0x20 0x56 0x35 0x57 0x56 0x54 0x57 0x44 0x41
Jan 15 08:40:39 Artemis vmkernel: VMWARE SCSI Id: Id for vmhba0:C0:T1:L0 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x56 0x35 0x57 0x56 0x54 0x57 0x44 0x41 0x50 0x59 0x48 0x31 0x34 0x36
Jan 15 08:40:39 Artemis vmkernel: 4:13:13:31.547 cpu0:1035)WARNING: SCSI: 279: SCSI device type 0x3 is not supported. Cannot create target vmhba0:8:0
Jan 15 08:40:39 Artemis vmkernel: 4:13:13:31.547 cpu0:1035)WARNING: SCSI: 1249: LegacyMP Plugin could not claim path: vmhba0:8:0. Not supported
Jan 15 08:40:39 Artemis vmkernel: 4:13:13:31.547 cpu0:1035)WARNING: ScsiPath: 3180: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba0:C0:T8:L0'. Skipping the path.
Jan 15 08:40:40 Artemis vmkernel: 4:13:13:32.811 cpu0:1037)qla4010-1: Scanning for new luns....
Jan 15 08:40:40 Artemis vmkernel: 4:13:13:32.811 cpu0:1037)qla4010-1: Scanning for new luns....
Jan 15 08:41:40 Artemis vmkernel: 4:13:14:32.949 cpu3:1036)SCSI: 861: GetInfo for adapter vmhba1, , max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-22, sts=bad0001
Jan 15 08:41:40 Artemis vmkernel: 4:13:14:32.968 cpu3:1036)WARNING: SCSI: 279: SCSI device type 0x3 is not supported. Cannot create target vmhba0:8:0
Jan 15 08:41:40 Artemis vmkernel: 4:13:14:32.968 cpu3:1036)WARNING: SCSI: 1249: LegacyMP Plugin could not claim path: vmhba0:8:0. Not supported
Jan 15 08:41:40 Artemis vmkernel: 4:13:14:32.968 cpu3:1036)WARNING: ScsiPath: 3180: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba0:C0:T8:L0'. Skipping the path.
What kind of iSCSI target is this again?
Can you post the output of esxcfg-module -q?
What storage do you have there?
It's an IBM DS300. Hey, stop laughing. We have a low IT budget here.
Hey, I wouldn't be one to laugh. We have an MSA1500.
The thing that bothers me is this:
WARNING: SCSI: 279: SCSI device type 0x3 is not supported. Cannot create target vmhba0:8:0
I'm no iSCSI expert, but it seems to me like you haven't created a LUN.
The LUN should be available. I had one server with two LUNs assigned to it. In ServeRAID I removed one of the LUNs and assigned it to the IQN of the HBA on the ESX server. The drive letter is gone from the server that I moved the LUN from, but I am seeing messages in ServeRAID that the old server is still trying to connect to the moved LUN. When I can get an outage window I'll reboot the old server.
Whoa... back the truck up...
I can see that the drive letter is gone from the server that I moved the LUN from
Was this an NTFS partition before?
It might all make sense now. I'm not sure, but here's what I'm thinking: since you said "the drive letter is gone" - that screams Windows to me - if this LUN used to be NTFS, ESX isn't going to want to touch it. That would explain the error message you were getting.
If that's the case and you want to use it for ESX, you need to wipe it and let ESX format it as VMFS.
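If you do this from the service console rather than the VI client's Add Storage wizard, the ESX 3.x tool is vmkfstools. A hedged sketch - the device path vmhba1:0:0:1 and the label are assumed example values, and this destroys everything on the partition, so triple-check the path first:

```sh
# DESTRUCTIVE: formats the partition as VMFS3.
# vmhba1:0:0:1 = adapter 1, target 0, LUN 0, partition 1 (example values)
vmkfstools -C vmfs3 -S iscsi_lun0 vmhba1:0:0:1
```

The partition itself must already exist (created with fdisk, partition type fb); the Add Storage wizard does both steps for you and is usually the safer route.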
----
Carter Manucy
I have exactly the same problem.
Three ESX hosts in a cluster; they all see the VMFS LUN (from a SAN in an FC configuration).
If I add the storage on node 1, the new VMFS3 datastore appears on nodes 1 and 3 but not on node 2.
If I add the storage on node 2, it removes the datastore from nodes 1 and 3.
Rescanning doesn't resolve the problem; I have the same log entries as jimcgreen.
cpu2:1047)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba0:C0:T256:L0'.Skipping the path.
cpu8:1049)SCSI: 861: GetInfo for adapter vmhba0, , max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-1, sts=bad001f
cpu10:1049)<5>megasas: REPORT LUNS to target 100 change lun id 0001 to 0
cpu10:1049)ScsiScan: 395: Path 'vmhba0:C0:T256:L1': Vendor: 'IBM ' Model: 'SAS SES-2 DEVICE' Rev: '02.0'
cpu10:1049)ScsiScan: 396: Type: 0xd, ANSI rev: 3
VMWARE SCSI Id: Supported VPD pages for vmhba0:C0:T256:L1 :
cpu10:1049)VMWARE SCSI Id: Could not get disk id for vmhba0:C0:T256:L1
cpu10:1049)ScsiUid: 754: Path 'vmhba0:C0:T256:L1' does not support VPD Serial Id page.
cpu10:1049)ScsiUid: 781: Path 'vmhba0:C0:T256:L1' does not support VPD Device Id page.
cpu10:1049)ScsiScan: 559: Path 'vmhba0:C0:T256:L1': No standard UID: Failure
cpu10:1049)ScsiScan: 641: Discovered path vmhba0:C0:T256:L1
csauviat wrote:
I have exactly the same problem.
Three ESX hosts in a cluster; they all see the VMFS LUN (from a SAN in an FC configuration).
If I add the storage on node 1, the new VMFS3 datastore appears on nodes 1 and 3 but not on node 2.
If I add the storage on node 2, it removes the datastore from nodes 1 and 3.
Rescanning doesn't resolve the problem; I have the same log entries as jimcgreen.
If they're all trying to connect to the same LUN, I think the SAN usually defaults to blocking that.
There is usually a setting somewhere in the SAN configuration to allow multiple connections to the same LUN.
Check for it!
-Good Luck (Yes I know it's an old post)