Gentlemen and Ladies,
I am currently running a 24-node cluster using Dell M630 blades in an M1000 blade chassis, on ESXi 6.0.0. I have 14 RDM drives attached to each of the 24 nodes for two separate MSCS SQL clusters; each cluster has 6 RDMs, for a total of 12 mappings per node. I have seen the articles about RDMs causing extremely long boots. It takes 30 to 40 minutes just to restart, and the node gets stuck at VMW_SATP_ALUA for the majority of the boot sequence.
Can anyone tell me what the remedy is, and is there a written procedure for fixing it?
This might be a good starting point for you.
Have you checked the vmkernel.log on the host to identify any issues with connectivity to the RDMs?
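A quick way to do that from the ESXi host shell is to filter the log for reservation-related messages (this is a sketch; the exact message text varies by storage array and ESXi build):

```shell
# Scan the live vmkernel log for SCSI reservation conflicts or
# claim failures against the RDM LUNs, showing the most recent hits
grep -iE 'reservation|conflict' /var/log/vmkernel.log | tail -n 20
```

Repeated reservation-conflict entries against the same naa.* device IDs during boot would point at the RDM LUNs as the culprit.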
Also do you have the RDS's setup as described in the Setup for Failover Clustering documentation? https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-setup-...
A likely cause of the delay is that the RDMs used for MSCS are not marked as perennially reserved. During a LUN scan, the ESXi host tries to claim all LUNs presented to it. If a LUN cannot be claimed because it is already claimed (the MSCS node holds a persistent SCSI reservation on it), that attempt can take a very long time. ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes may take a long time to start or during a LUN rescan.
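For reference, this is typically fixed per host with esxcli. A sketch (the naa.* ID below is a placeholder; substitute the actual device ID of each MSCS RDM LUN, and repeat on every host that can see them):

```shell
# List storage devices to find the naa IDs of the RDM LUNs used by MSCS
esxcli storage core device list | grep -i '^naa'

# Mark an RDM LUN as perennially reserved so the host skips trying to
# claim it during boot and rescans (placeholder device ID shown)
esxcli storage core device setconfig -d naa.60a98000xxxxxxxxxxxxxxxx --perennially-reserved=true

# Verify the flag took effect; look for "Is Perennially Reserved: true"
esxcli storage core device list -d naa.60a98000xxxxxxxxxxxxxxxx | grep -i perennially
```

The flag should persist across reboots on ESXi 5.x and later, but it is worth verifying after the next restart, and it has to be set for every MSCS RDM on every host in the cluster.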
Erik
I have to ask because I don't know. What is RDS?
RDS is Remote Desktop Services, but I do not think that is what the responder meant. I believe it was a typo and it should have been "RDMs".
Oops, typo. I meant RDMs, of course.
Got it, thank you gentlemen. I will post my results after I am finished configuring and testing.