ESX 5.5
AMS 2300
HP Proliant BL460c G7
HP Flexfabric
I'm getting these entries every five minutes in vmkernel.log:
2014-05-06T15:02:04.540Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <0/9> sid x580b01, did x580300, oxid x573 SCSI Reservation Conflict -
2014-05-06T15:02:04.540Z cpu21:33554)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <0/10> sid x7f0a02, did x7f0300, oxid x237 SCSI Reservation Conflict -
2014-05-06T15:02:04.541Z cpu13:33325)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <1/10> sid x580b01, did x580200, oxid x57d SCSI Reservation Conflict -
2014-05-06T15:02:04.541Z cpu7:61314)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <2/9> sid x7f0a02, did x7f0100, oxid x240 SCSI Reservation Conflict -
2014-05-06T15:02:04.541Z cpu7:61314)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <0/11> sid x7f0a02, did x7f0300, oxid x219 SCSI Reservation Conflict -
2014-05-06T15:02:04.541Z cpu21:33556)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <0/12> sid x7f0a02, did x7f0300, oxid x8a SCSI Reservation Conflict -
2014-05-06T15:02:04.541Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <1/9> sid x580b01, did x580200, oxid x578 SCSI Reservation Conflict -
2014-05-06T15:02:04.542Z cpu7:61314)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <3/9> sid x7f0a02, did x7f0000, oxid x242 SCSI Reservation Conflict -
2014-05-06T15:02:04.543Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <2/9> sid x580b01, did x580100, oxid x577 SCSI Reservation Conflict -
2014-05-06T15:02:04.543Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <1/11> sid x580b01, did x580200, oxid x465 SCSI Reservation Conflict -
2014-05-06T15:02:04.543Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <3/9> sid x580b01, did x580000, oxid x579 SCSI Reservation Conflict -
2014-05-06T15:02:04.544Z cpu13:33325)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <0/10> sid x580b01, did x580300, oxid x574 SCSI Reservation Conflict -
2014-05-06T15:02:04.544Z cpu13:33009)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <1/12> sid x580b01, did x580200, oxid x57e SCSI Reservation Conflict -
2014-05-06T15:02:04.544Z cpu7:61314)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <0/9> sid x7f0a02, did x7f0300, oxid xd0 SCSI Reservation Conflict -
Any idea on how to resolve this matter?
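For reference, here is a quick sketch (my own, using two of the lines above) that tallies the conflict entries by initiator source ID (sid), to see which HBA port the failing commands are coming from. On a host you would read /var/log/vmkernel.log instead of the hard-coded sample lines:

```python
# Tally "SCSI Reservation Conflict" log lines by their sid field.
# The two sample lines are taken from the vmkernel.log excerpt above.
import re
from collections import Counter

log_lines = [
    "2014-05-06T15:02:04.540Z cpu15:33069)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x0 failed <0/9> sid x580b01, did x580300, oxid x573 SCSI Reservation Conflict -",
    "2014-05-06T15:02:04.540Z cpu21:33554)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd x0 failed <0/10> sid x7f0a02, did x7f0300, oxid x237 SCSI Reservation Conflict -",
]

conflict_sids = Counter(
    re.search(r"sid (x[0-9a-f]+)", line).group(1)
    for line in log_lines
    if "SCSI Reservation Conflict" in line
)
print(conflict_sids)  # each initiator sid mapped to its conflict count
```

In my case both fabrics (sid x580b01 and x7f0a02) are affected, so it doesn't look path-specific.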
I have seen this in the past, but I have no clue what caused it. I'd recommend contacting support about it!
Hi boobob,
I am getting similar SCSI reservation error messages in the logs. Were you able to resolve this issue?
regards,
Shan
Hello,
SCSI reservation conflicts happen when there is a mismatch in the Host LUN ID (HLU) values of a LUN that has been zoned across multiple storage groups.
Let's say we have a LUN with ALU 100 (ALU = Array LUN ID, HLU = Host LUN ID) and two storage groups, SG1 and SG2, with different ESX hosts in each. (Note: an ESX host can belong to only one storage group, but a LUN can be in multiple storage groups.) If you have added this LUN to SG1 with HLU 100 and to SG2 with HLU 200, there is an HLU mismatch for the LUN with ALU 100. This can cause SCSI reservation conflicts on all the hosts and on the array, since the storage processor will receive SCSI commands for the same LUN under different HLUs.
In order to correct this, you have to make the HLU of the LUN consistent across the storage groups: change it to either 100 or 200 in one of the SGs. To do this, remove the LUN from that SG and add it back with the correct HLU. This will fix the SCSI reservation conflict errors. Detailed in EMC Article 41822.
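To illustrate the check, here is a quick sketch (example values only, not taken from any real array) that flags any ALU presented with more than one HLU across storage groups:

```python
# Hypothetical example: map each storage group to its {ALU: HLU} entries
# and flag any ALU that is presented with more than one HLU.
from collections import defaultdict

storage_groups = {
    "SG1": {100: 100},  # LUN with ALU 100 added with HLU 100
    "SG2": {100: 200},  # same LUN added with HLU 200 -> mismatch
}

hlus_by_alu = defaultdict(set)
for sg, luns in storage_groups.items():
    for alu, hlu in luns.items():
        hlus_by_alu[alu].add(hlu)

mismatched = sorted(alu for alu, hlus in hlus_by_alu.items() if len(hlus) > 1)
print(mismatched)  # ALUs that need a consistent HLU across storage groups
```

Any ALU this flags needs the remove-and-re-add fix described above so that every storage group presents it with the same HLU.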
Thanks,
Thor
Did you figure it out? I'm having the same problem.
I'm seeing this issue as well. In my case, I'm seeing it on our HP hosts but not on our Cisco UCS hosts, both of which are zoned to see the same LUNs. As far as I know, we're not experiencing any issues per se, but I became aware of this after recently implementing Log Insight.