Contributor

One datastore disappeared ...

Scenario: VI 3.5.0 Update 4

Cluster with 3 ESX hosts (esx01, esx02, esx03), 50 virtual machines in total

Storage: 7 LUNs (datastore names: "san-volume1", "san-volume2", "san-volume3" ...)

Problem:

esx02 has lost and cannot see "san-volume4" (I already refreshed and rescanned storage); all other volumes are visible and ready.

- Vmkernel Log

Sep 21 12:15:34 esx02 vmkernel: 212:17:37:17.232 cpu2:1047)SCSI: 672: Blocking queue for device vml.02000000006006016090902100ded0395233b7dd11524149442035 to check for hung SP.

Sep 21 12:15:34 esx02 vmkernel: 212:17:37:17.244 cpu2:1047)SCSI: 672: Blocking queue for device vml.02000000006006016090902100ded0395233b7dd11524149442035 to check for hung SP.

Sep 22 00:01:44 esx02 vmkernel: 213:05:23:26.897 cpu15:1221)VSCSI: 2803: Reset request on handle 8225 (0 outstanding commands)

Sep 22 00:01:44 esx02 vmkernel: 213:05:23:26.897 cpu1:1069)VSCSI: 3019: Resetting handle 8225

- Vmkwarning Log

Sep 21 09:27:26 esx02 vmkernel: 212:14:49:03.649 cpu10:1048)WARNING: FS3: 3460: Failed with bad0004- status : Busy

Sep 21 09:27:26 esx02 vmkernel: 212:14:49:03.649 cpu10:1048)WARNING: Fil3: 1789: Failed to reserve volume f530 28 1 492553f7 cbe950f7 210015da 3e575b5a 0 0 0 0 0 0 0

Sep 21 09:27:27 esx02 vmkernel: 212:14:49:03.822 cpu6:1047)WARNING: FS3: 3460: Failed with bad0004- status : Busy

Sep 21 09:27:27 esx02 vmkernel: 212:14:49:03.822 cpu6:1047)WARNING: Fil3: 1789: Failed to reserve volume f530 28 1 492553f7 cbe950f7 210015da 3e575b5a 0 0 0 0 0 0 0

4 Replies
Enthusiast

Ask your SAN team to verify the zoning and masking of san-volume4 from their end; if it is all right, they need to refresh it once.

Otherwise they need to set up zoning and masking again for the WWN of esx02.
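Before re-doing the zoning, it is also worth forcing a rescan from the esx02 service console. A sketch, assuming vmhba1 is the FC HBA (as in the device listings later in the thread):

```shell
# On the ESX 3.5 service console, as root on esx02:
esxcfg-rescan vmhba1   # rescan the FC HBA for new or changed LUNs
vmkfstools -V          # force a re-read of VMFS volume metadata
```

If the LUN is presented correctly by the array, the datastore should reappear after these two steps without a reboot.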

Contributor

The SAN team replied that zoning and masking are OK ...

Here is some other useful output:

# esxcfg-vmhbadevs -m

Skipping dir: /vmfs/volumes/492553f7-cbe950f7-15da-00215a5b573e. Cannot open volume: /vmfs/volumes/492553f7-cbe950f7-15da-00215a5b573e

vmhba1:0:5:1 /dev/sde1 4925511a-5d93d8a0-7c8f-00215a5b573e

vmhba1:0:2:1 /dev/sdb1 49255056-92db6743-01d1-00215a5b573e

vmhba1:0:8:1 /dev/sdh1 4a5b3af0-8aab6e4b-3ee0-00215a5b573e

vmhba1:0:7:1 /dev/sdg1 4a5b3ab9-7a6e0035-091f-00215a5b573e

vmhba1:0:1:1 /dev/sda1 492550e0-adadcf4c-922e-00215a5b573e

vmhba1:0:3:1 /dev/sdc1 492550b7-84cf98fc-1094-00215a5b573e

vmhba1:0:6:1 /dev/sdf1 49b3b8cf-5cc98d55-0a19-00215a5b573e

vmhba0:0:0:5 /dev/cciss/c0d0p5 4769a6c6-ca42f3c0-6787-00215a5b5c76

vmhba1:0:10:1 /dev/sdj1 4a5b3b1b-8b98564c-475f-00215a5b573e

# esxcfg-vmhbadevs -a

vmhba0:0:0 /dev/cciss/c0d0

vmhba1:0:1 /dev/sda

vmhba1:0:2 /dev/sdb

vmhba1:0:3 /dev/sdc

vmhba1:0:4 /dev/sdd

vmhba1:0:5 /dev/sde

vmhba1:0:6 /dev/sdf

vmhba1:0:7 /dev/sdg

vmhba1:0:8 /dev/sdh

vmhba1:0:9 /dev/sdi

vmhba1:0:10 /dev/sdj

vmhba1:1:0 /dev/sdk
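Cross-referencing the two listings shows which LUNs the host sees (`-a`) but has no mounted VMFS volume on (`-m`). A small sketch over the output pasted above (the helper name `base_device` is mine, not an esxcfg tool):

```python
# Diff `esxcfg-vmhbadevs -a` (all visible LUNs) against `esxcfg-vmhbadevs -m`
# (LUNs carrying a mounted VMFS partition) to find visible-but-unmounted LUNs.
import re

MOUNTED = """\
vmhba1:0:5:1 /dev/sde1 4925511a-5d93d8a0-7c8f-00215a5b573e
vmhba1:0:2:1 /dev/sdb1 49255056-92db6743-01d1-00215a5b573e
vmhba1:0:8:1 /dev/sdh1 4a5b3af0-8aab6e4b-3ee0-00215a5b573e
vmhba1:0:7:1 /dev/sdg1 4a5b3ab9-7a6e0035-091f-00215a5b573e
vmhba1:0:1:1 /dev/sda1 492550e0-adadcf4c-922e-00215a5b573e
vmhba1:0:3:1 /dev/sdc1 492550b7-84cf98fc-1094-00215a5b573e
vmhba1:0:6:1 /dev/sdf1 49b3b8cf-5cc98d55-0a19-00215a5b573e
vmhba0:0:0:5 /dev/cciss/c0d0p5 4769a6c6-ca42f3c0-6787-00215a5b5c76
vmhba1:0:10:1 /dev/sdj1 4a5b3b1b-8b98564c-475f-00215a5b573e
"""

ALL_LUNS = """\
vmhba0:0:0 /dev/cciss/c0d0
vmhba1:0:1 /dev/sda
vmhba1:0:2 /dev/sdb
vmhba1:0:3 /dev/sdc
vmhba1:0:4 /dev/sdd
vmhba1:0:5 /dev/sde
vmhba1:0:6 /dev/sdf
vmhba1:0:7 /dev/sdg
vmhba1:0:8 /dev/sdh
vmhba1:0:9 /dev/sdi
vmhba1:0:10 /dev/sdj
vmhba1:1:0 /dev/sdk
"""

def base_device(partition):
    # Strip the trailing partition number: /dev/sde1 -> /dev/sde,
    # /dev/cciss/c0d0p5 -> /dev/cciss/c0d0
    return re.sub(r"p?\d+$", "", partition)

mounted = {base_device(line.split()[1]) for line in MOUNTED.strip().splitlines()}
unmounted = {hba: dev
             for hba, dev in (line.split() for line in ALL_LUNS.strip().splitlines())
             if dev not in mounted}
print(unmounted)
# One of these devices is presumably the LUN that should back san-volume4.
```

This narrows the problem to the LUNs with no VMFS volume mounted (here /dev/sdd, /dev/sdi and /dev/sdk), which is what you would ask the SAN team to check against their masking view.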

Leadership

Ask the storage team to check the mapping of the storage LUNs.

*If you found this information useful, please consider awarding points for "Correct" or "Helpful"*

Contributor

The issue was not zoning or masking: after a simple reboot of my ESX host, the LUN (datastore) reappeared correctly.

My ESX host had been up and running for 269 days. In a production environment, always remember to maintain one standby ESX host, or one with less load than the others, so you can run without problems and are able to test every issue that occurs ... and learn to say NO to the customer!!!
