VMware Cloud Community
goony83
Contributor

iSCSI mount problem

Hello, sorry for the imprecise information and the rough translation; I am French and a beginner, and there is no support in French...

Following network problems (which have since been resolved), our VMware servers no longer have access to an iSCSI datastore (on a Synology NAS), and the virtual machines whose data is on this datastore no longer work.
Several reboots did not solve anything.

In Storage > Datastores, the iSCSI datastore is missing; only the local datastore is listed.
In Storage > Adapters, the iSCSI Software Adapter is online with the iscsi_vmk driver.
In Storage > Devices, the SYNOLOGY iSCSI disk does appear.
I hope the output of these SSH commands helps you understand the situation.
vmkernel.log is attached.
 
 
[root@localhost:~] esxcfg-scsidevs --list
mpx.vmhba2:C0:T5:L0
   Device Type: CD-ROM
   Size: 0 MB
   Display Name: Local PLDS CD-ROM (mpx.vmhba2:C0:T5:L0)
   Multipath Plugin: NMP
   Console Device: /vmfs/devices/cdrom/mpx.vmhba2:C0:T5:L0
   Devfs Path: /vmfs/devices/cdrom/mpx.vmhba2:C0:T5:L0
   Vendor: PLDS      Model: DVD+-RW DU-8A5LH  Revis: 6D51
   SCSI Level: 5  Is Pseudo: false Status: on
   Is RDM Capable: false Is Removable: true
   Is Local: true  Is SSD: false
   Other Names:
      vml.0005000000766d686261323a353a30
   VAAI Status: unsupported
naa.60014058cefb151d1315d4214dacccd6
   Device Type: Direct-Access
   Size: 3806040 MB
   Display Name: SYNOLOGY iSCSI Disk (naa.60014058cefb151d1315d4214dacccd6)
   Multipath Plugin: NMP
   Console Device: /vmfs/devices/disks/naa.60014058cefb151d1315d4214dacccd6
   Devfs Path: /vmfs/devices/disks/naa.60014058cefb151d1315d4214dacccd6
   Vendor: SYNOLOGY  Model: iSCSI Storage     Revis: 4.0
   SCSI Level: 5  Is Pseudo: false Status: on
   Is RDM Capable: true  Is Removable: false
   Is Local: false Is SSD: false
   Other Names:
      vml.020000000060014058cefb151d1315d4214dacccd6695343534920
   VAAI Status: unknown
naa.61866da0bff22200211358e004dcd521
   Device Type: Direct-Access
   Size: 571136 MB
   Display Name: Local DELL Disk (naa.61866da0bff22200211358e004dcd521)
   Multipath Plugin: NMP
   Console Device: /vmfs/devices/disks/naa.61866da0bff22200211358e004dcd521
   Devfs Path: /vmfs/devices/disks/naa.61866da0bff22200211358e004dcd521
   Vendor: DELL      Model: PERC H730 Mini    Revis: 4.27
   SCSI Level: 5  Is Pseudo: false Status: on
   Is RDM Capable: true  Is Removable: false
   Is Local: true  Is SSD: false
   Other Names:
      vml.020000000061866da0bff22200211358e004dcd521504552432048
   VAAI Status: unsupported
 
[root@localhost:~] esxcfg-scsidevs -m
naa.61866da0bff22200211358e004dcd521:10                          /vmfs/devices/disks/naa.61866da0bff22200211358e004dcd521:10 59c100d5-3374fa2d-5441-000af79e9eee  0  DATASTORE_ESXI1_CDB
 
[root@localhost:~] ls -l vmfs/volumes/
total 2048
drwxr-xr-x    1 root     root             8 Jan  1  1970 04611f76-df834968-c704-afa5ab15eeff
drwxr-xr-x    1 root     root             8 Jan  1  1970 5980a671-58500426-5f2e-000af79e9eee
drwxr-xr-x    1 root     root             8 Jan  1  1970 5980a678-d088de9d-6cfe-000af79e9eee
drwxr-xr-t    1 root     root         73728 Nov 13  2017 59c100d5-3374fa2d-5441-000af79e9eee
drwxr-xr-x    1 root     root             8 Jan  1  1970 73d8497f-40d54c16-ff7b-77c388bcba3c
lrwxr-xr-x    1 root     root            35 Nov 17 12:55 DATASTORE_ESXI1_CDB -> 59c100d5-3374fa2d-5441-000af79e9eee
 
[root@localhost:~] esxcli vm process list
SCDB4
   World ID: 132078
   Process ID: 0
   VMX Cartel ID: 132077
   UUID: 42 11 74 01 b9 75 99 0a-15 ba 33 6b 63 ce 5a b1
   Display Name: SCDB4
   Config File: /vmfs/volumes/59c100d5-3374fa2d-5441-000af79e9eee/SCDB4/SCDB4.vmx
 
SCDB1
   World ID: 132251
   Process ID: 0
   VMX Cartel ID: 132250
   UUID: 56 4d ef d7 9b 81 aa ce-c8 05 a7 84 f3 8b fb 19
   Display Name: SCDB1
   Config File: /vmfs/volumes/59c100d5-3374fa2d-5441-000af79e9eee/SCDB1/SCDB1.vmx
 
[root@localhost:~] esxcli storage core device world list -d naa.60014058cefb151d1315d4214dacccd6
(no output)
 
[root@localhost:~] esxcfg-volume -l
(no output)

 

 

Can you give me some leads, or even a solution?

Thanks in advance

 

5 Replies
goony83
Contributor

This isn't in the right category; how can I delete this topic?

scott28tt
VMware Employee

As your post needs moving to the area for ESXi, I have reported it to the moderators.

 


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
IRIX201110141
Champion

Hi @goony83 

From the logs it's clear that there was a (network) problem, because ESXi reported an "All Paths Down" (APD) event.

Later there is:

2022-11-17T13:47:46.535Z cpu10:65658)ScsiDevice: 5142: Device naa.60014058cefb151d1315d4214dacccd6 is Out of APD; token num:1

Can you please provide the output of

vmware -v

esxcfg-volume -l

Also, test whether "vmkping <IP_SYNOLOGY>" works, or, if you are using jumbo frames, "vmkping -d -s 8972 <IP_SYNOLOGY>".

and execute

esxcfg-rescan -A

If you open a second shell, you can watch with "tail -f /var/log/vmkernel.log" while the rescan runs; the whole sequence is sketched below.
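Putting it all together, it would look roughly like this (a sketch; replace <IP_SYNOLOGY> with the iSCSI address of your NAS, and note that -s 8972 assumes an end-to-end MTU of 9000, i.e. 9000 minus 28 bytes of IP/ICMP headers):

# ESXi version
vmware -v

# list unresolved/snapshot VMFS volumes (empty output means none were detected)
esxcfg-volume -l

# basic reachability from the iSCSI vmkernel interface
vmkping <IP_SYNOLOGY>

# same test with jumbo frames, if MTU 9000 is configured end to end
vmkping -d -s 8972 <IP_SYNOLOGY>

# rescan all adapters for devices and VMFS volumes
esxcfg-rescan -A

# in a second shell, watch the log while the rescan runs
tail -f /var/log/vmkernel.log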

Regards,
Joerg

e_espinel
Virtuoso

Hello.
Run a rescan of the HBAs and storage several times, and check in the task list that the rescans run and complete without problems (a sketch of the equivalent shell commands follows below).
Check under Datastores to see whether any others appear.
Check your iSCSI configuration.
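From the ESXi shell, something like this should cover the rescan (a sketch; the esxcli syntax assumes a reasonably recent ESXi release):

# rescan all HBAs for new or changed devices
esxcli storage core adapter rescan --all

# then rescan for VMFS volumes on the discovered devices
vmkfstools -V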

 

 

Enrique Espinel
Senior Technical Support for IBM, Lenovo, Veeam Backup and VMware vSphere.
VSP-SV, VTSP-SV, VTSP-HCI, VTSP
Please mark my comment as Correct Answer or assign Kudos if my answer was helpful to you, Thank you.
vLabStu
Contributor

Just wanted to say a massive thanks for this. I had an issue for a while where my SAN would work on a 1 Gb interface but not on the 10 Gb one with jumbo frames enabled. The switch seemed to be set to an MTU of 9000, but when I tried the ping with a payload of 8972 it failed; it only worked at slightly smaller sizes. I changed the MTU on the switch to 9216 (which is what I thought it was already set to) and it worked.
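For anyone else hitting this, the test I used was along these lines (a rough sketch; <SAN_IP> is a placeholder for your storage target's address, and the sizes assume the usual 28 bytes of IP/ICMP overhead on top of the payload):

# payload for MTU 9000: 9000 - 28 = 8972 (this failed for me)
vmkping -d -s 8972 <SAN_IP>

# step the payload size down until the ping succeeds; the largest
# working size plus 28 is the effective path MTU
vmkping -d -s 8000 <SAN_IP>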

 

 
