Hi all,
I'm testing a new ESXi install on my Dell R710 server, which has a four-port Broadcom Ethernet adapter with TOE.
000:001:00.0 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:001:00.1 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:002:00.0 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:002:00.1 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
This Ethernet device supports the dependent Broadcom iSCSI adapter.
When I configure the vmk and NIC with vmhba34, iSCSI does not work.
When I configure the vmk and NIC with the Software iSCSI Adapter, iSCSI works.
Please see the image
and the esxcli configuration for vmhba34.
Does anyone have a similar problem and know how to solve it?
Tom
You have to adjust the WWN (iSCSI Name) on each of the "Broadcom iSCSI Adapters" to make sure they match what you have on your iSCSI Software Adapter.
Does anyone know if using the Broadcom iSCSI Adapters like this bypasses the vSwitch functionality? I have 2 iSCSI networks to segment traffic. In previous versions of ESX, I had 2 vSwitches set up with one pNIC each, "vSwitch_iSCSI1" and "vSwitch_iSCSI2". That way I was able to force traffic onto a specific pNIC and network switch.
You have to adjust the WWN (iSCSI Name) on each of the "Broadcom iSCSI Adapters"
to make sure they match what you have on your iSCSI Software Adapter.
You don't really want to do that. Each Broadcom bnx2i adapter should have its own initiator IQN name.
You are correct that the storage ACLs will need to be set to allow connections from each of the initiator names.
Does anyone know if using the Broadcom iSCSI Adapters like this bypasses the vSwitch
functionality? I have 2 iSCSI networks to segment traffic. In previous versions of ESX, I had
2 vSwitches set up with one pNIC each, "vSwitch_iSCSI1" and "vSwitch_iSCSI2". That way
I was able to force traffic onto a specific pNIC and network switch.
The Broadcom adapters, being "dependent" adapters, depend on the ESX vSwitch and vmknic information to configure the Broadcom adapter through the driver. Discovery and authentication are passed through the vSwitch, but none of the iSCSI data passes through the vSwitch, since ESX talks directly to the iSCSI engine on the adapter, bypassing the vSwitch.
The iSCSI multipath configuration should take care of pNIC selection for you, since you must enable a single active uplink for each iSCSI port. In this type of configuration, traffic will not use the wrong pNIC.
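For reference, a 1:1 vmknic-to-pNIC setup of the kind described here might be sketched as follows on ESX/ESXi 4.x. The vSwitch name, port group name, IP address, and vmk/vmhba numbers below are placeholders, not taken from any poster's host:

```shell
# Sketch: dedicate one vmkernel port to one physical uplink for iSCSI.
esxcfg-vswitch -a vSwitch1                # create a vSwitch for iSCSI
esxcfg-vswitch -L vmnic2 vSwitch1         # attach the physical uplink
esxcfg-vswitch -A iSCSI1 vSwitch1         # add a VMkernel port group
esxcfg-vmknic -a -i 10.1.1.250 -n 255.255.255.0 iSCSI1
# In the port group's NIC teaming policy, make vmnic2 the ONLY active
# uplink (no standby uplinks), then bind the vmknic to the dependent HBA:
esxcli swiscsi nic add -n vmk1 -d vmhba34
```

These commands only run on an ESX/ESXi host, so treat the sequence as a template to adapt rather than something to paste verbatim.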
Andy
Andy-
You have to adjust the WWN (iSCSI Name) on each of the "Broadcom iSCSI Adapters"
to make sure they match what you have on your iSCSI Software Adapter.
You don't really want to do that. Each Broadcom bnx2i adapter should have its own initiator IQN name.
You are correct that the storage ACLs will need to be set to allow connections from each of the initiator names.
Can you explain why? On my other hosts we use multiple bnx2i with the same IQN for multiple connections.
Does anyone know if using the Broadcom iSCSI Adapters like this bypasses the vSwitch
functionality? I have 2 iSCSI networks to segment traffic. In previous versions of ESX, I had
2 vSwitches set up with one pNIC each, "vSwitch_iSCSI1" and "vSwitch_iSCSI2". That way
I was able to force traffic onto a specific pNIC and network switch.
The Broadcom adapters, being "dependent" adapters, depend on the ESX vSwitch and vmknic information to configure the Broadcom adapter through the driver. Discovery and authentication are passed through the vSwitch, but none of the iSCSI data passes through the vSwitch, since ESX talks directly to the iSCSI engine on the adapter, bypassing the vSwitch.
The iSCSI multipath configuration should take care of pNIC selection for you, since you must enable a single active uplink for each iSCSI port. In this type of configuration, traffic will not use the wrong pNIC.
Andy
Thanks. It sounds like I could have (1) vSwitch for iSCSI with the bnx2i adapters linked to it, and (2) VMkernels with custom teaming configurations to map each to the appropriate pNIC. Then use esxcli to map the vmk's to the correct vmhba#.
Thanks,
~Todd
I have the bnx2 driver, not bnx2i. Is bnx2 the correct NIC driver?
I have a Qsan P300Q-D424 array; this array is certified on the VMware HCL.
The bnx2 driver is the NIC driver. The bnx2i driver is the iSCSI HBA driver. The bnx2 driver needs to be loaded before bnx2i. The bnx2i driver is only loaded if the NICs support iSCSI offload. Since you are seeing the vmhbas with esxcfg-scsidevs -a, your NICs are properly licensed and bnx2i is getting loaded.
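As a quick check of the layering described here, the loaded modules and registered adapters can be listed from the ESXi shell (a sketch; exact output varies by build):

```shell
# bnx2 (NIC driver) must load before bnx2i (iSCSI HBA driver); both
# should appear in the loaded-module list when iSCSI offload is licensed.
vmkload_mod -l | grep bnx2
# The dependent iSCSI vmhbas only show up here once bnx2i is loaded:
esxcfg-scsidevs -a
```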
I'm not familiar with the Qsan storage arrays. Cutting and pasting any iSCSI-related or bnx2i-related messages from the logs could be helpful. An Ethernet trace would be most helpful.
Andy
Hey guys,
I have the same exact setup and can't establish an iSCSI connection.
Server: DELL PowerEdge R710 (BIOS-FW: 2.1.9)
Dependent iSCSI Adapter: Onboard Broadcom BCM5709 Quad-Port adapter with licensed iSCSI
vSphere: 4.1
I configured it as described in the ESX Configuration Guide (page 94 and onward), but without success.
I've created a vSwitch with a VMkernel port and bound it to vmnic2 (port 3 on the physical onboard NIC).
When I issue the command "esxcli swiscsi vmnic list -d vmhba34", it says that it's bound to vmnic2.
Command: esxcli swiscsi vmnic list -d vmhba34
vmnic2
vmnic name: vmnic2
mac address: b8:ac:6f:84:40:02
mac address settable: NO
maximum transfer rate: 1000
current transfer rate: 1000
maximum frame size: 1500
When I try to define some iSCSI targets (dynamic and static discovery), it says:
"The host bus adapter is not associated with a vmknic"
I don't know what to do - the configuration seems right to me, but it won't work as described in the VMware config guide.
Maybe I will contact DELL tomorrow.
I was JUST on the phone with dell... check this out:
http://www.delltechcenter.com/page/VMwareESX4.0andPowerVault+MD3000i
I also just opened a ticket with VMware to see if (2) vSwitches are required. We will be using (2) of the 4 Broadcom dependent HBAs (referred to as HBAs from here on); each will be on a unique subnet. I'm figuring that I could probably use (1) vSwitch with all HBAs linked, one VMkernel per HBA on the vSwitch, with custom NIC teaming/failover to associate the specific physical NIC with each VMkernel, and all other NICs set to "unused".
~Todd
All NICs and iSCSI adapters are shown in my configuration, but I can't add any targets for discovery.
Do I have to add the targets to the SW iSCSI initiator or to vmhba34 (BCM host bus adapter)
when using the dependent iSCSI adapter configuration?
Did you attach the vmk## to the vmhba##? See step 9 in the Dell document linked above. You should also validate the networking using vmkping from a console or SSH session.
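The vmkping validation mentioned here might look like this from the ESXi console (the portal address is a placeholder):

```shell
# Verify the VMkernel interface can reach the iSCSI portal before binding.
# 10.1.1.1 stands in for your storage array's portal IP.
vmkping 10.1.1.1
# vmkping sends the ping from the VMkernel TCP/IP stack, so a reply
# confirms the vmk interface, subnet, and physical path are all correct.
```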
Did you attach the vmk## to the vmhba##? See step 9 in the Dell document linked above. You should also validate the networking using vmkping from a console or SSH session.
Hey tsimons,
thanks for the hint, I got it working after following step 9 of the Dell doc you linked in this thread.
Have a nice day.
I've created a vSwitch with a VMkernel port and bound it to vmnic2 (port 3 on the physical onboard NIC).
When I issue the command "esxcli swiscsi vmnic list -d vmhba34", it says that it's bound to vmnic2.
Command: esxcli swiscsi vmnic list -d vmhba34
vmnic2
vmnic name: vmnic2
mac address: b8:ac:6f:84:40:02
mac address settable: NO
maximum transfer rate: 1000
current transfer rate: 1000
maximum frame size: 1500
The esxcli swiscsi vmnic command shows you the candidate vmnics for use with the adapter. It does not show you the bound NICs.
Use the esxcli swiscsi nic command for that.
You can find the candidate vmknics using the esxcli swiscsi vmknic command. So:
1. Make sure you have a vmknic with only one active uplink of vmnic2, and no standby uplinks.
2. Use "esxcli swiscsi vmknic list -d vmhba34" to make sure ESX considers it a valid candidate vmknic.
3. Use "esxcli swiscsi nic add -n vmk# -d vmhba34" to add the vmknic to the vmhba.
4. Use "esxcli swiscsi nic list -d vmhba34" to see the bound NICs.
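Run together from the ESXi shell, that sequence looks like this (vmk1 is a placeholder for your vmknic number):

```shell
# Step 1 (single active uplink, no standby) is set in the port group's
# NIC teaming policy; the rest is done with esxcli:
esxcli swiscsi vmknic list -d vmhba34      # confirm the vmknic is a valid candidate
esxcli swiscsi nic add -n vmk1 -d vmhba34  # bind the vmknic to the dependent HBA
esxcli swiscsi nic list -d vmhba34         # verify the binding took effect
```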
When I try to define some iSCSI targets (dynamic and static discovery), it says:
"The host bus adapter is not associated with a vmknic"
Once you've made the association with the "nic add" sub-command, you won't see this any more.
Andy
Hi Andy,
thank you for your useful reply. I got it working by following your instructions. I had thought all day that the adapter was already bound.
Thank you very much and have a nice day!
My setup: Dell R805 with iSCSI offload 5709 / Dell MD3000i / upgrading/redesigning to ESXi v4.1.
So... I apparently missed that using the dependent iSCSI adapter (BCM5709, aka bnx2i) is not supported with jumbo frames.
Since the "new and improved" vSphere iSCSI software initiator includes better MPIO and support for future third-party MPIO plugins, I wonder if it's better to go with the iSCSI software initiator instead of bnx2i.
Anyone... any thoughts?
I am currently using the BCM5709 with the software iSCSI initiator and jumbo frames. Since hardware offload and jumbo frames together are not supported, would it be best to stick with the software initiator and jumbo frames, or swap to the hardware initiator without jumbo frames?
For most workloads, using the Broadcom iSCSI initiator is going to use less host CPU than the software initiator with jumbo frames. Keep in mind that either arrangement can fill 1-Gbit links for most interesting block sizes, so CPU efficiency is the only real thing to weigh in terms of performance.
There are a few other factors to pay attention to in specific configurations. The Broadcom initiator is limited to 64 sessions, which might be limiting in some really big configurations. There might be some other configurations that would favor one type over the other.
Andy
Thanks Andy
Do you know how connections are counted? ...by LUN or by NAS IP?
This is a count of sessions, which are paths to targets. On storage that has multiple LUs behind a target, this is not likely to be a problem. On storage with a single LU per target, you might start running into this limitation.
When looking at paths: vmhba34:C0:T3:L0
Any time there's a different channel (C#) or target (T#), it's a separate session. If the C# and T# are the same, and the only difference is the L#, it means those multiple LUs are being accessed by the same session. So you can get a count of the sessions used by counting the number of paths with different channel and target numbers.
So:
vmhba34:C0:T0:L0
vmhba34:C0:T1:L0
vmhba34:C0:T2:L0
vmhba34:C1:T0:L0
vmhba34:C1:T1:L0
vmhba34:C1:T2:L0
Would be six paths using six sessions.
vmhba34:C0:T0:L0
vmhba34:C0:T0:L1
vmhba34:C0:T0:L2
vmhba34:C1:T0:L0
vmhba34:C1:T0:L1
vmhba34:C1:T0:L2
Would be six paths using two sessions.
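This counting rule can be checked mechanically: strip the L# from each runtime path name and count the distinct adapter:channel:target prefixes. A small shell sketch, using the second path list above:

```shell
# Paths sharing the same vmhba/C#/T# prefix ride one session, so the
# session count is the number of unique prefixes once :L# is removed.
paths="vmhba34:C0:T0:L0
vmhba34:C0:T0:L1
vmhba34:C0:T0:L2
vmhba34:C1:T0:L0
vmhba34:C1:T0:L1
vmhba34:C1:T0:L2"
printf '%s\n' "$paths" | sed 's/:L[0-9]*$//' | sort -u | wc -l   # prints 2
```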
Andy
Hi all,
Sorry for the slow response.
I have a Dell R710 with a Broadcom adapter with TOE and iSCSI.
In ESXi I see something like this:
~ # esxcfg-nics -l
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:01:00.00 bnx2 Up 1000Mbps Full 00:26:b9:8a:ec:bd 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic1 0000:01:00.01 bnx2 Up 1000Mbps Full 00:26:b9:8a:ec:bf 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic2 0000:02:00.00 bnx2 Up 1000Mbps Full 00:26:b9:8a:ec:c1 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic3 0000:02:00.01 bnx2 Up 1000Mbps Full 00:26:b9:8a:ec:c3 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic4 0000:07:00.00 igb Up 1000Mbps Full 00:1b:21:63:c9:90 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic5 0000:07:00.01 igb Up 1000Mbps Full 00:1b:21:63:c9:91 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic6 0000:08:00.00 igb Up 1000Mbps Full 00:1b:21:63:c9:94 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic7 0000:08:00.01 igb Up 1000Mbps Full 00:1b:21:63:c9:95 1500 Intel Corporation 82576 Gigabit Network Connection
I configured a vSwitch with 1500 MTU and an IP address, and added vmk1 to vmhba34 - the Broadcom dependent iSCSI adapter on my ESXi installation.
The vmhba sees the portal and can discover my array, but I can't see the LUN.
Logs from ESXi (I have the latest firmware update from Dell):
Aug 4 15:14:14 Hostd: FetchDVPortgroups: added 0 items
Aug 4 15:14:14 Hostd: FetchDVPortgroups: added 0 items
Aug 4 15:14:14 Hostd: FetchDVPortgroups: added 0 items
Aug 4 15:14:39 shell[9214]: esxcli swiscsi nic add -d vmhba34
Aug 4 15:14:56 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.AssetTag.label' expected in module 'enum'.
Aug 4 15:14:56 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.AssetTag.summary' expected in module 'enum'.
Aug 4 15:14:56 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.ServiceTag.label' expected in module 'enum'.
Aug 4 15:14:56 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.ServiceTag.summary' expected in module 'enum'.
Aug 4 15:14:57 Hostd: FetchDVPortgroups: added 0 items
Aug 4 15:15:00 shell[9214]: esxcli swiscsi nic add -d vmhba34 -n vmk1
Aug 4 15:15:03 Hostd: Ticket issued for CIMOM version 1.0, user root
Aug 4 15:15:05 Hostd: Task Created : haTask-ha-host-vim.host.StorageSystem.refresh-40
Aug 4 15:15:05 Hostd: ReconcileVMFSDatastores called: refresh = true, rescan = true
Aug 4 15:15:05 Hostd: RefreshVMFSVolumes called
Aug 4 15:15:05 Hostd: RescanVmfs called
Aug 4 15:15:05 vmkernel: 0:00:10:02.979 cpu20:4586)usb storage warning (0 throttled) on vmhba33 (SCSI cmd READ_CAPACITY): clearing endpoint halt for pipe 0xc0010380
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk data transfer result 0x0
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: scsi cmd done, result=0x2
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk command transfer result=0
Aug 4 15:15:05 vmkernel: 0:00:10:02.980 cpu20:4586)usb storage warning (0 throttled) on vmhba33 (SCSI cmd MODE_SENSE): clearing endpoint halt for pipe 0xc0010380
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk data transfer result 0x0
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: scsi cmd done, result=0x2
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk command transfer result=0
Aug 4 15:15:05 vmkernel: 0:00:10:02.981 cpu12:4586)usb storage warning (0 throttled) on vmhba33 (SCSI cmd READ_CAPACITY): clearing endpoint halt for pipe 0xc0010380
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk data transfer result 0x0
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: scsi cmd done, result=0x2
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk command transfer result=0
Aug 4 15:15:05 vmkernel: 0:00:10:02.982 cpu14:4586)usb storage warning (0 throttled) on vmhba33 (SCSI cmd MODE_SENSE): clearing endpoint halt for pipe 0xc0010380
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk data transfer result 0x0
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: scsi cmd done, result=0x2
Aug 4 15:15:05 vmkernel: usb storage message on vmhba33: Bulk command transfer result=0
Aug 4 15:15:05 vmkernel: 0:00:10:03.004 cpu0:5257)Vol3: 1604: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:8' for probing: Permission denied
Aug 4 15:15:05 vmkernel: 0:00:10:03.005 cpu0:5257)Vol3: 644: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:8' for volume open: Permission denied
Aug 4 15:15:05 vmkernel: 0:00:10:03.008 cpu0:5257)Vol3: 1604: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:6' for probing: Permission denied
Aug 4 15:15:05 vmkernel: 0:00:10:03.009 cpu0:5257)Vol3: 644: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:6' for volume open: Permission denied
Aug 4 15:15:05 vmkernel: 0:00:10:03.011 cpu0:5257)Vol3: 1604: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:5' for probing: Permission denied
Aug 4 15:15:05 vmkernel: 0:00:10:03.012 cpu0:5257)Vol3: 644: Could not open device 'naa.6a4badb02a4dd20013accf3f3f174b8f:5' for volume open: Permission denied
Aug 4 15:15:05 Hostd: VmfsUpdate: got VMFS message timestamp=603015850 specific=0 name=
label=
Aug 4 15:15:05 Hostd: RefreshVMFSVolumes: refreshed volume, id 4c2321a6-20f7df5c-d7a1-0026b98aecbf, name datastore1
Aug 4 15:15:05 Hostd: SetVolume: Datastore 4c2321a6-20f7df5c-d7a1-0026b98aecbf has changed provider volume pointer
Aug 4 15:15:05 Hostd: ReconcileVMFSDatastores: Done discovering new filesystem volumes.
Aug 4 15:15:05 Hostd: ReconcileNASDatastores: Discovering new filesystem volumes.
Aug 4 15:15:05 Hostd: RefreshNASVolumes called
Aug 4 15:15:05 Hostd: ReconcileNASDatastores: Done discovering new filesystem volumes.
Aug 4 15:15:05 Hostd: SendStorageInfoEvent() called
Aug 4 15:15:05 Hostd: Task Completed : haTask-ha-host-vim.host.StorageSystem.refresh-40 Status success
Aug 4 15:15:05 Hostd: CreateISCSIHBA
Aug 4 15:15:05 Hostd: CreateISCSIHBA
Aug 4 15:15:05 Hostd: CreateISCSIHBA
Aug 4 15:15:05 Hostd: CreateISCSIHBA
Aug 4 15:15:05 vmkernel: 0:00:10:03.396 cpu14:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392640) to NMP device "mpx.vmhba33:C0:T0:L0" failed on physical path "vmhba33:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:05 vmkernel: 0:00:10:03.396 cpu14:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:05 vmkernel: 0:00:10:03.401 cpu13:4728)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392640) to NMP device "mpx.vmhba0:C0:T0:L0" failed on physical path "vmhba0:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:05 vmkernel: 0:00:10:03.401 cpu13:4728)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba0:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:06 vmkernel: 0:00:10:03.482 cpu14:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392640) to NMP device "mpx.vmhba33:C0:T0:L1" failed on physical path "vmhba33:C0:T0:L1" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:06 vmkernel: 0:00:10:03.482 cpu14:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L1" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:09 Hostd: Looking up object with name = "firewallSystem" failed.
Aug 4 15:15:09 Hostd: FetchFn: List of pnics opted out
Aug 4 15:15:10 Hostd: FetchFn: List of pnics opted out
Aug 4 15:15:10 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.AssetTag.label' expected in module 'enum'.
Aug 4 15:15:10 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.AssetTag.summary' expected in module 'enum'.
Aug 4 15:15:10 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.ServiceTag.label' expected in module 'enum'.
Aug 4 15:15:10 Hostd: Default resource used for 'host.SystemIdentificationInfo.IdentifierType.ServiceTag.summary' expected in module 'enum'.
Aug 4 15:15:18 Hostd: Task Created : haTask-ha-host-vim.host.StorageSystem.addInternetScsiSendTargets-42
Aug 4 15:15:18 iscsid: discovery_sendtargets::Running discovery on IFACE default(bnx2i-0026b98aecbd) (drec.transport=bnx2i-0026b98aecbd)
Aug 4 15:15:18 iscsid: discovery_sendtargets::Running discovery on IFACE bnx2i-0026b98aecbd@vmk1(bnx2i-0026b98aecbd) (drec.transport=bnx2i-0026b98aecbd)
Aug 4 15:15:18 Hostd: SendStorageInfoEvent() called
Aug 4 15:15:18 Hostd: Task Completed : haTask-ha-host-vim.host.StorageSystem.addInternetScsiSendTargets-42 Status success
Aug 4 15:15:18 Hostd: CreateISCSIHBA
Aug 4 15:15:18 Hostd: CreateISCSIHBA
Aug 4 15:15:18 Hostd: CreateISCSIHBA
Aug 4 15:15:18 Hostd: CreateISCSIHBA
Aug 4 15:15:18 vmkernel: 0:00:10:16.397 cpu14:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392340) to NMP device "mpx.vmhba33:C0:T0:L0" failed on physical path "vmhba33:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:18 vmkernel: 0:00:10:16.397 cpu14:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:18 vmkernel: 0:00:10:16.402 cpu13:4728)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392340) to NMP device "mpx.vmhba0:C0:T0:L0" failed on physical path "vmhba0:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:18 vmkernel: 0:00:10:16.402 cpu13:4728)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba0:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:19 vmkernel: 0:00:10:16.482 cpu14:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f392340) to NMP device "mpx.vmhba33:C0:T0:L1" failed on physical path "vmhba33:C0:T0:L1" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:19 vmkernel: 0:00:10:16.482 cpu14:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L1" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:19 Hostd: Looking up object with name = "firewallSystem" failed.
Aug 4 15:15:19 Hostd: FetchFn: List of pnics opted out
Aug 4 15:15:19 Hostd: FetchFn: List of pnics opted out
Aug 4 15:15:19 Hostd: ComputeGUReq took 1318073 microSec
Aug 4 15:15:21 Hostd: SetDone took 1140806 microSec
Aug 4 15:15:26 Hostd: Task Created : haTask-ha-host-vim.host.StorageSystem.rescanHba-43
Aug 4 15:15:26 iscsid: discovery_sendtargets::Running discovery on IFACE default(bnx2i-0026b98aecbd) (drec.transport=bnx2i-0026b98aecbd)
Aug 4 15:15:26 iscsid: discovery_sendtargets::Running discovery on IFACE bnx2i-0026b98aecbd@vmk1(bnx2i-0026b98aecbd) (drec.transport=bnx2i-0026b98aecbd)
Aug 4 15:15:26 iscsid: Login Target: iqn.2010-05.pl.iri:p300q-d424-fff9055d8:dev0.ctr1 if=bnx2i-0026b98aecbd@vmk1 addr=10.1.1.1:3260 (TPGT:0 ISID:0x1)
Aug 4 15:15:26 iscsid: Notice: Assigned (H34 T0 C0 session=1, target=1/1)
Aug 4 15:15:26 iscsid: SessionResolve for 10.1.1.1 (via vmk1) started)
Aug 4 15:15:26 iscsid: Login Target: iqn.2010-05.pl.iri:p300q-d424-fff9055d8:dev1.ctr1 if=bnx2i-0026b98aecbd@vmk1 addr=10.1.1.1:3260 (TPGT:1 ISID:0x1)
Aug 4 15:15:26 iscsid: Notice: Assigned (H34 T1 C0 session=2, target=2/2)
Aug 4 15:15:26 iscsid: SessionResolve for 10.1.1.1 (via vmk1) started)
Aug 4 15:15:26 iscsid: DISCOVERY: Pending=2 Failed=0
Aug 4 15:15:27 vmkernel: 0:00:10:24.547 cpu23:4119)<6>bnx2i::0x41000d404558: : ISCSI_INIT passed
Aug 4 15:15:27 iscsid: DISCOVERY: Pending=2 Failed=0
Aug 4 15:15:28 iscsid: DISCOVERY: Pending=2 Failed=0
Aug 4 15:15:29 iscsid: DISCOVERY: Pending=2 Failed=0
Aug 4 15:15:30 iscsid: DISCOVERY: Pending=2 Failed=0
Aug 4 15:15:31 vmkernel: 0:00:10:28.569 cpu23:4119)<3>bnx2i::0x41000d404558: bnx2i_cm_connect_cmpl: cid 0 failed to connect 10000000
Aug 4 15:15:31 vmkernel: 0:00:10:28.570 cpu23:4119)<3>bnx2i::0x41000d404558: bnx2i_cm_connect_cmpl: cid 1 failed to connect 10000000
Aug 4 15:15:31 iscsid: ep_poll failed rc-1
Aug 4 15:15:31 vmkernel: 0:00:10:28.822 cpu9:4853)bnx2i::0x41000d404558: bnx2i_ep_disconnect: vmnic0: disconnecting ep 0x4100b00221e0 {0, 108000}, conn 0x0, sess 0x0, hba-state 1, num active conns 2
Aug 4 15:15:31 iscsid: Login Failed: iqn.2010-05.pl.iri:p300q-d424-fff9055d8:dev0.ctr1 if=bnx2i-0026b98aecbd@vmk1 addr=10.1.1.1:3260 (TPGT:0 ISID:0x1) Reason: 00040000 (Initiator Connection Failure)
Aug 4 15:15:31 iscsid: Notice: Reclaimed Channel (H34 T0 C0 oid=1)
Aug 4 15:15:31 iscsid: Notice: Reclaimed Target (H34 T0 oid=1)
Aug 4 15:15:31 iscsid: ep_poll failed rc-1
Aug 4 15:15:31 vmkernel: 0:00:10:28.824 cpu9:4853)bnx2i::0x41000d404558: bnx2i_ep_disconnect: vmnic0: disconnecting ep 0x4100b00223d0 {1, 10a400}, conn 0x0, sess 0x0, hba-state 1, num active conns 1
Aug 4 15:15:31 iscsid: Login Failed: iqn.2010-05.pl.iri:p300q-d424-fff9055d8:dev1.ctr1 if=bnx2i-0026b98aecbd@vmk1 addr=10.1.1.1:3260 (TPGT:1 ISID:0x1) Reason: 00040000 (Initiator Connection Failure)
Aug 4 15:15:31 iscsid: Notice: Reclaimed Channel (H34 T1 C0 oid=2)
Aug 4 15:15:31 iscsid: Notice: Reclaimed Target (H34 T1 oid=2)
Aug 4 15:15:31 iscsid: DISCOVERY: Pending=0 Failed=2
Aug 4 15:15:31 Hostd: SendStorageInfoEvent() called
Aug 4 15:15:31 Hostd: ReconcileVMFSDatastores called: refresh = true, rescan = false
Aug 4 15:15:31 Hostd: RefreshVMFSVolumes called
Aug 4 15:15:31 Hostd: CreateISCSIHBA
Aug 4 15:15:31 Hostd: RefreshVMFSVolumes: refreshed volume, id 4c2321a6-20f7df5c-d7a1-0026b98aecbf, name datastore1
Aug 4 15:15:31 Hostd: SetVolume: Datastore 4c2321a6-20f7df5c-d7a1-0026b98aecbf has changed provider volume pointer
Aug 4 15:15:31 Hostd: ReconcileVMFSDatastores: Done discovering new filesystem volumes.
Aug 4 15:15:31 Hostd: Task Completed : haTask-ha-host-vim.host.StorageSystem.rescanHba-43 Status success
Aug 4 15:15:31 Hostd: CreateISCSIHBA
Aug 4 15:15:32 Hostd: CreateISCSIHBA
Aug 4 15:15:32 Hostd: CreateISCSIHBA
Aug 4 15:15:32 vmkernel: 0:00:10:29.577 cpu13:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f3f8640) to NMP device "mpx.vmhba33:C0:T0:L0" failed on physical path "vmhba33:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:32 vmkernel: 0:00:10:29.577 cpu13:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:32 vmkernel: 0:00:10:29.582 cpu13:4728)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f3f8640) to NMP device "mpx.vmhba0:C0:T0:L0" failed on physical path "vmhba0:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:32 vmkernel: 0:00:10:29.582 cpu13:4728)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba0:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:32 vmkernel: 0:00:10:29.663 cpu15:4586)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f3f8640) to NMP device "mpx.vmhba33:C0:T0:L1" failed on physical path "vmhba33:C0:T0:L1" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:32 vmkernel: 0:00:10:29.663 cpu15:4586)ScsiDeviceIO: 1672: Command 0x12 to device "mpx.vmhba33:C0:T0:L1" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Aug 4 15:15:33 Hostd: SetDone took 1414788 microSec
Aug 4 15:15:47 ntpd[5018]: Listening on interface #4 vmk1, 10.1.1.250#123 Enabled
The array is VMware certified, model P300Q-D424 from Qsan.
What am I doing wrong?
vmnic0 is directly connected to the array's first LAN port; LAN1 has IP 10.1.1.1/24, and the array uses 1500 MTU.
Tom