Long boot time after enabling iSCSI (ESXi 5.5)

pie8ter (Contributor):

Our setup:

ESXi 5.5 (build 1623387)

SAN:

- Dell PowerVault 3220i (two RAID controllers with four ports each)

- Two dedicated PowerConnect 5224 switches for the iSCSI SAN. There are two VLANs per switch for a total of four VLANs, and each VLAN is assigned its own subnet. One port from each RAID controller goes to each VLAN (e.g., port 0 from both controllers goes to VLAN 1). Each ESXi host therefore sees 8 iSCSI targets, which means 8 paths per LUN. We set the multipathing policy for each LUN to Round Robin and get 4 active paths and 4 standby paths (I would like to use all 8 paths if there is a way; see the sketch after this list).

- Physical NICs are dedicated to iSCSI. The NICs are Broadcom BCM5719 and use the tg3 driver.

- MTU is set to 9000 on the VMkernel port group, the vSwitch, all physical switch ports, and the PowerVault 3220i (end to end).

- We have a total of around 50 LUNs.
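
As far as I understand, each LUN on a dual-controller array is owned by one controller at a time, and the paths through the other controller show up as standby, which would explain the 4 active / 4 standby split. Here is a rough sketch of how to inspect the paths and Round Robin settings per LUN; the naa ID below is a placeholder for a real device ID:

# Show every path to one LUN and its state (active/standby):
esxcli storage core path list -d naa.60026b9012345678901234567890abcd

# Confirm the path selection policy on that LUN (should be VMW_PSP_RR):
esxcli storage nmp device list -d naa.60026b9012345678901234567890abcd

# Optional: switch to the next active path after every I/O instead of the
# default 1000 I/Os, to spread load more evenly across the active paths:
esxcli storage nmp psp roundrobin deviceconfig set \
  -d naa.60026b9012345678901234567890abcd --type iops --iops 1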

Ever since I enabled the software iSCSI adapter (vmhba##) and bound it to the SAN, ESXi takes a very long time (>20 min) to boot. Boot time was quick (<3 min) before enabling iSCSI. I found that others have reported this long boot time with iSCSI enabled on ESXi 5.0. Apparently VMware released an update since then, and as you can see my version is 5.5, so it should already have the fix. So...

What's the best way to troubleshoot this issue?

What logs should I check and where are they?
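
For reference, these are the standard ESXi 5.x log locations I know of (assuming logging has not been redirected to a remote syslog host or a scratch datastore):

# Messages from the most recent boot (kept compressed):
zcat /var/log/boot.gz | more

# VMkernel messages, including storage device discovery and claiming:
tail -n 200 /var/log/vmkernel.log

# Quick scan for iSCSI-related entries:
grep -i iscsi /var/log/vmkernel.log | tail -n 50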

This is what I have done so far.

- I ran the vmkping [SAN IP numbers] command against all the iSCSI targets and got replies.

- I ran nc -z <destination-ip> <destination-port> and got a "succeeded" response.

- I ran vmkping -s 8972 -d [SAN target] and got a successful reply, so the jumbo frame setting works end to end. The round-trip time from the ESXi host to the SAN and back averages 1.5 ms.
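
For anyone repeating these checks, the exact commands were roughly as follows; the target IP is a placeholder for one of our SAN portal addresses:

# 8972-byte payload + 20-byte IP header + 8-byte ICMP header = 9000 bytes;
# -d sets "don't fragment", so the ping only succeeds if a full jumbo
# frame makes it end to end:
vmkping -d -s 8972 192.168.10.20

# Confirm the iSCSI target answers on TCP 3260 (the default iSCSI port):
nc -z 192.168.10.20 3260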

Thanks for your help.

Accepted Solution
admin (Immortal):

[The reply text was not preserved in this capture; per the follow-up below, it linked a VMware KB article on marking RDM LUNs as perennially reserved.]

2 Replies
pie8ter (Contributor):

Thank you! I followed the KB article you linked and it resolved my problem.

I have two RDM disks. Following the KB article, I set perennially-reserved=true for both disks, and the boot time is now under 5 minutes. Thanks!

Just in case something happens to the KB article, here is what I did.

1) Log into the CLI and list the storage devices (on ESXi 5.5):

esxcli storage core device list

2) Get the naa ID for each RDM disk and run:

esxcli storage core device setconfig -d [naa.id from step 1] --perennially-reserved=true

3) Reboot.
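
One extra check that may be worth doing before the reboot; the naa IDs below are placeholders for your own device IDs:

# Verify the flag took - the output should include
# "Is Perennially Reserved: true":
esxcli storage core device list -d naa.60026b9000aaaa0001 | grep -i perennial

# With several RDMs, the same setting can be applied in a loop
# (busybox sh on ESXi supports this):
for id in naa.60026b9000aaaa0001 naa.60026b9000aaaa0002; do
  esxcli storage core device setconfig -d "$id" --perennially-reserved=true
done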