I have three NICs on a Dell R610 running ESXi 4.1 connecting to all eight NICs in the MD3200i SAN.
Each SAN NIC is on a separate subnet, and I've set up a VMkernel port on the host for each port on the SAN, restricting each VMkernel port to use only one NIC on the host.
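For anyone following along, one path of the setup described above can be sketched from the ESXi 4.1 console roughly like this. The names (vSwitch1, iSCSI-1, vmk1, vmhba33) and addresses are assumptions for illustration; substitute your own.

```shell
# Sketch of one path of the software-iSCSI multipath setup above.
# All names and addresses below are examples only.

# Create a port group and a VMkernel port on one SAN subnet
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vmknic -a -i 192.168.37.10 -n 255.255.255.0 iSCSI-1

# Bind the VMkernel port to the software iSCSI adapter (ESXi 4.1 syntax;
# on ESXi 5.x this becomes "esxcli iscsi networkportal add")
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Verify the binding
esxcli swiscsi nic list -d vmhba33
```

Each additional path repeats the same steps with its own port group, subnet and vmk, with the port group pinned to a single active uplink in the vSwitch teaming policy.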
All was working fine (except for the long boot, which I'll come to later) until last night, when I think one of the VMs got busy. This appeared to slow down the whole host to the point that it needed rebooting (I couldn't kill the running VMs). It looked SAN-related, as activity there went crazy.
My hunch is that it got busy on one path and then, when it tried to use the next, something went awry, which is why I think it's related to the long boot time:
When booting, the host gets stuck on each of these messages for about 10 minutes:
iscsi_vmk loaded successfully
vmw_satp_lsi loaded successfully
cbt loaded successfully
Once the last one has come up I can SSH into the box, and in the messages log I can see lots of these coming through every second:
Jan 23 21:20:17 iscsid: Notice: Assigned (H41 T0 C23 session=50c, target=7/2c)
Jan 23 21:20:17 iscsid: cannot make a connection to 192.168.37.107:3260 (101,Network is unreachable)
Jan 23 21:20:17 iscsid: Notice: Reclaimed Channel (H41 T0 C23 oid=7)
Jan 23 21:20:17 iscsid: session login failed with error 4,retryCount=0
What I suspect is happening is that every VMkernel port is trying to talk to every IP on the SAN, which they can't, as they're on different subnets. I've tried changing the Scsi.CRTimeoutDuringBoot value to 5000, but it makes no difference.
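For reference, advanced options like this can be read and set from the console with esxcfg-advcfg. I'm assuming the option sits under /Scsi, as its name suggests, so verify the exact path first:

```shell
# Read and set the Scsi.CRTimeoutDuringBoot advanced option.
# The /Scsi path is an assumption based on the option name -- confirm it
# with "esxcfg-advcfg -l | grep -i CRTimeout" before relying on it.
esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot
esxcfg-advcfg -s 5000 /Scsi/CRTimeoutDuringBoot
```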
In the iSCSI software adapter I also see too many paths, and not all of them have a target.
Sorry for the long post, but to cut a long story short: is there a way to permanently disable these paths? Or are there any other pointers you can give me to help?
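Not an answer to the root cause, but on ESXi 5.x one way to prune paths to portals you can't reach is to remove the corresponding discovery entries and re-scan. The adapter name, portal address and IQN below are placeholders (ESXi 4.1 uses the vicfg-iscsi vCLI tool instead):

```shell
# List and remove iSCSI discovery targets on ESXi 5.x (placeholder values).
esxcli iscsi adapter discovery statictarget list
esxcli iscsi adapter discovery statictarget remove \
    -A vmhba33 -a 192.168.37.107:3260 \
    -n iqn.1984-05.com.dell:powervault.md3200i.example

# Re-scan so the dead paths are dropped
esxcli storage core adapter rescan -A vmhba33
```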
Voritgern, did you manage to solve this? I'm experiencing the same issues with my setup at the moment, except my delay is longer, as I have software iSCSI multipathing set up.
Can anyone shed some light on this?
Regards
Zane.
I reinstalled it all, but put everything on the same subnet, so there weren't all those VMkernel ports hanging while waiting for a timeout.
I have the same extreme boot times on my new ESXi 5 install. Did you ever solve this?
It seems the MD3000i is no longer on the vSphere 5.0 HCL.
The MD3200i is; the 3000 is the older model.
Sorry. Did you update the firmware?
It's on the latest firmware, which is also listed on the HCL.
I am running ESXi 5.0 on two Dell R710s connected to an MD3220i, and I get extreme boot times (2 hours...). I hope someone fixes this.
Same issue here with ESXi 5.0: one Dell R710 connected to an MD3200i.
I rebooted the host and it has already been more than 30 minutes at the message "iscsi_vmk loaded successfully".
Rebooting without the MD3200i connected gives the same problem.
Same issue... it just happened to my ESXi 5.
I rebooted the host and it has already been more than 30 minutes at the message "iscsi_vmk loaded successfully".
Check this out, this might help:
http://www.yellow-bricks.com/2011/11/06/resolved-slow-booting-of-esxi-5-0-when-iscsi-is-configured/
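If Update Manager isn't available, the patch bundle from that article can also be applied by hand from the shell. This is a minimal sketch, assuming the bundle has been copied to a datastore (the datastore path is an example):

```shell
# Apply the ESXi 5.0 iSCSI-boot-delay patch manually.
# Enter maintenance mode first, and reboot afterwards.
esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201111001.zip
reboot
```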
I applied patch ESXi500-201111001.zip and still get a long hang at the iscsi_vmk load.
Is there any other patch I need?
For me, the patch installed from Update Manager fixed the issue. Here is the KB article to help you out.
Just to confirm: this latest patch fixed my exact same long-boot issue with my Dell R710s and MD3200i SAN. The patch was applied via Update Manager in vSphere.
Hi,
Just yesterday I upgraded both of my clustered Dell R805s from ESX 4.1.0-259021 to ESXi 5.0.0-456005, and during the final boot I had a similar situation. The servers are connected by iSCSI to a Dell MD3000i storage array, with two VMkernel ports, each on a different subnet.
One strange thing happened: I did the upgrade via Update Manager in vCenter 5 and was watching the DRAC console when "vmw_satp_lsi loaded successfully" appeared.
After 10 minutes I started to worry, so I went to the server and simply closed the DVD drive (it was empty and had opened automatically during the ESXi upgrade). After this the server continued booting and started properly.
I wouldn't have written about it, as it could have been a one-off, but I had the same situation on both servers! Really strange :smileyconfused:.
Regards
Kris
Hi Kris,
Have you tried this?
I think VMware should publish only one downloadable ISO, which includes the iSCSI fix, instead of confusing people with two ISOs.
I upgraded from ESX 4.1 to ESXi 5.0 using the "ESXi 5.0 ISO image for systems without software iSCSI configured". Now I have to upgrade ESXi 5.0 itself to the version that fixes the iSCSI issue: "ESXi 5.0 ISO image for systems with software iSCSI configured. Includes ESXi500-201109001 content and software iSCSI fix."
Thanks,
I'm upgrading from 5.0 to 6.0 U2 and have the same problem: more than 30 minutes on that screen... this is crazy.
And this patch doesn't cover version 6.x.
Has anyone solved this problem?
Thank you in advance.