VMware Cloud Community
91gsixty
Contributor

SAN PATH : "The path is marked as 'busy' by the VMkernel"

[root@ESX2 ~]# esxcfg-mpath -s off -P vmhba2:C0:T0:L0

Unable to set path state. Error was: Unable to change path state, the path is marked as 'busy' by the VMkernel.

Here's the situation:

600 GB vdisk

I added a volume as LUN 0 and set the paths to default in the SAN.
I realized I didn't like the size and removed the volume.
I then split the vdisk into two volumes:

a 500 GB volume and a 99 GB volume

I created volumes for both and mapped them both to LUN 0.
The first disk created and mapped fine. The second disk wouldn't path because LUN 0 was already taken (stupid me, I put it on the same LUN 0).

This is when the ESX alerts started coming in.

Alarm Definition:
Event alarm expression: Degraded Storage Path Redundancy

Event details:
Lost connectivity to storage device naa.600c0ff000d7e9120000000000000000. Path vmhba1:C0:T1:L0 is down. Affected datastores: Unknown.

I tried to deactivate these LUNs but I get:

[root@ESX2 ~]# esxcfg-mpath -s off -P vmhba2:C0:T0:L0

Unable to set path state. Error was: Unable to change path state, the path is marked as 'busy' by the VMkernel.
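
In case it helps anyone hitting the same "busy" error, a quick sanity check (just a sketch; the grep patterns only match the LUN 0 paths and device ID shown in this thread) is to list every path the VMkernel still has for LUN 0 and see which device each one belongs to:

esxcfg-mpath -L | grep ':L0'                     # compact listing, one line per path, filtered to LUN 0
esxcfg-mpath -L | grep 'naa.600c0ff000d7e912'    # every path still pointing at this particular device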

Update: I have since added the 500 GB and 99 GB volumes as LUN 3 and LUN 4 with no problems. LUN 0 is still being seen by ESX even though the volume no longer exists. How can I remove these stale paths without interrupting my production environment?
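
One approach that should not touch the working LUNs (this is only a sketch, and it assumes the old LUN 0 volume really is no longer presented to these initiators on the array, and that no VM or datastore still depends on it) is to rescan each FC adapter so the VMkernel drops the dead paths, then re-check the path list:

esxcfg-rescan vmhba1              # rescan the first FC HBA
esxcfg-rescan vmhba2              # rescan the second FC HBA
esxcfg-mpath -L | grep ':L0'      # the stale LUN 0 paths should no longer be listed

If the paths still show up after the rescan, it is worth double-checking on the SAN that nothing is still presented at LUN 0 to these hosts.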

thanks

jeff

fc.20000000c987cd55:10000000c987cd55-fc.208000c0ffd7eaad:217000c0ffd7eaad-naa.600c0ff000d7e9120000000000000000

Runtime Name: vmhba2:C0:T0:L0

Device: naa.600c0ff000d7e9120000000000000000

Device Display Name: HP Fibre Channel Enclosure Svc Dev (naa.600c0ff000d7e9120000000000000000)

Adapter: vmhba2 Channel: 0 Target: 0 LUN: 0

Adapter Identifier: fc.20000000c987cd55:10000000c987cd55

Target Identifier: fc.208000c0ffd7eaad:217000c0ffd7eaad

Plugin: NMP

State: active

Transport: fc

Adapter Transport Details: WWNN: 20:00:00:00:c9:87:cd:55 WWPN: 10:00:00:00:c9:87:cd:55

Target Transport Details: WWNN: 20:80:00:c0:ff:d7:ea:ad WWPN: 21:70:00:c0:ff:d7:ea:ad

Or, in the compact path listing:

vmhba2:C0:T0:L0 state:active naa.600c0ff000d7e9120000000000000000 vmhba2 0 0 0 NMP active san fc.20000000c987cd55:10000000c987cd55 fc.208000c0ffd7eaad:217000c0ffd7eaad

vmhba1:C0:T1:L0 state:active naa.600c0ff000d7e9120000000000000000 vmhba1 0 1 0 NMP active san fc.20000000c987f29f:10000000c987f29f fc.208000c0ffd7eaad:207000c0ffd7eaad

vmhba1:C0:T0:L0 state:active naa.600c0ff000d7e8690000000000000000 vmhba1 0 0 0 NMP active san fc.20000000c987f29f:10000000c987f29f fc.208000c0ffd7eaad:247000c0ffd7eaad
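
For reference, the detailed block above is the kind of per-path output the long listing prints, and the three lines just above are the compact form; as far as I know these correspond to:

esxcfg-mpath -l      # detailed, multi-line output per path
esxcfg-mpath -L      # compact listing, one line per path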

20 Replies
Pepe_Alhambra
Contributor

Thank you 91gsixty.

You put me on the right track. After deleting the LUN (not the cleanest deletion in the world, I must confess), I ran into a lot of problems: vSphere and the ESXi hosts could still see the disks. I had two active paths to a non-existent LUN 0. There was no way to remove the paths from the ESXi host; I tried the console command line... no luck.

So, after reading your post, I went to the SAN and changed the LUN number presented to the ESXi host from 2 to 0, and after a rescan everything was OK again.
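
For anyone hitting this on newer hosts: on ESXi 5.x and later the same rescan-and-verify step can be done with esxcli (a rough sketch only; substitute your own device ID for the naa. identifier taken from this thread):

esxcli storage core adapter rescan --all                                    # rescan every storage adapter
esxcli storage core path list -d naa.600c0ff000d7e9120000000000000000       # list the paths that remain for the device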

Thanks for sharing.

Best regards,

Pepe
