VMware Cloud Community
habibalby
Hot Shot

Adding New Storage "The Operation Timed Out" with RDM LUNs across hosts. Please help

Hello,

Setup:

4 ESX 3.5 hosts, each with 2 HBAs connected to an HP MSA1000.

Issue:

I have configured MSCS SQL 2000 with RDMs across hosts. MSCS failover is no problem, and performance is not a problem at all either. But here is the issue: if the SQL MSCS resource is being handled by the active VM running on host1, I can browse the storage, add LUNs, and format new LUNs on that host without any issue.

If I want to add new storage on the other host, "host2", I get the error "Request Timed Out". Is it because the RDM resource is busy serving the active VM, and that's why the host is not allowing modifications such as adding new storage?

All the hosts can see these LUNs. I can add a datastore or do a rescan only while the LUNs are not presented to the host. I can also browse the datastores, open the Add Storage wizard, and do a rescan, but only on the host where the active SQL node is running. Suppose the RDM LUNs are presented to host1 and host2, and the SQL VM is running on host1: I can browse the datastores, do a rescan, and add new storage only on host1. I cannot do the same on host2.

But if I remove those RDM LUNs from host2, host3 and host4, those hosts can do a rescan, add LUNs other than the RDMs presented to the VM, and open the Add Storage wizard.

I have googled the error and found that it is neither a license server issue nor a DNS issue.

Best Regards,

Hussain Al Sayed


1 Solution

Accepted Solutions
emcclend
Enthusiast

I had similar issues to both problems here: I could not add a datastore (it would time out) and I had long boot times. I'm also using RDMs, for 2 MSCS clusters across boxes. Below is a quote from my previous post, where I found an answer that helped me. Changing the SCSI retry value improved my boot time and allowed me to add datastores again without the timeout issue.

Previous Post Answer:

I have been doing some research and I think I have made some progress. I found out that I have been getting a lot of SCSI errors in the VMkernel log. I did some more digging and found that I could change the SCSI retry count from 80 to 10, and it did wonders for my reboot time. Instead of taking 20 minutes to boot up, it now takes less than 5 minutes. Much better. I made the change under host Configuration -> Advanced Settings -> SCSI -> SCSI retries. 80 was the default, and 10 was suggested as a good value. This has helped; I will keep an eye on any side effects, but so far it has improved boot times.
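For reference, the same advanced setting can also be read and changed from the ESX 3.x service console with esxcfg-advcfg. This is a sketch, not from the original post: the exact option path is an assumption (it may appear as ConflictRetries rather than ConflictsRetries on your build), so check with the get flag first.

```shell
# Read the current SCSI reservation-conflict retry count (default: 80).
# NOTE: the /Scsi/ConflictRetries path is an assumption -- verify the exact
# option name on your host, e.g. in the VI Client under Advanced Settings.
esxcfg-advcfg -g /Scsi/ConflictRetries

# Lower it to 10, as suggested in this thread.
esxcfg-advcfg -s 10 /Scsi/ConflictRetries
```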


5 Replies
ShanVMLand
Expert

Just create the RDM on your first node (using the virtual, not physical, compatibility option); then, when you add the disk to the other nodes, select the existing-disk option and point it at the .vmdk mapping file that corresponds to the RDM. It is important that the RDM be on a separate SCSI channel, and that you set the controller for that channel to a shared bus.
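As a sketch, the resulting .vmx entries on the additional nodes would look roughly like the fragment below. The paths and names are placeholders, not from the original post; also note that for clustering across physical hosts VMware's MSCS setup guide uses physical bus sharing, so check which sharing mode your configuration requires.

```
# Hypothetical .vmx fragment for a second MSCS node -- names/paths are examples.
scsi1.present = "TRUE"                 # separate SCSI controller for cluster disks
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"           # shared bus across hosts ("virtual" for cluster-in-a-box)
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/shared_vmfs/node1/quorum_rdm.vmdk"   # RDM mapping file created on node 1
scsi1:0.deviceType = "scsi-hardDisk"
```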

Thanks,

Shan

If you find this post helpful, please award points using the Helpful/Correct buttons...Thanks

habibalby
Hot Shot

A VMDK disk comes from a VMFS partition. I'm clustering with RDM LUNs connected directly to the VMs. MSCS determines which node has the NTFS partition active, based on the resources being managed by the active node.

What you are describing is presenting the LUN to the ESX host, formatting it as VMFS, then creating VMDK disks on it and presenting them to the VMs.

Best Regards,

Hussain Al Sayed

If you find this information useful, please award points for "correct" or "helpful".

habibalby
Hot Shot

Hello,

I found out why the ESX host times out when using an MSCS cluster: the SCSI disk is busy on the active MSCS node, so the host cannot initialize the request.

So the answer is SCSI reservation conflicts. On each host, I changed the setting under Configuration -> Advanced Settings -> "SCSI.ConflictsRetries = 80", which is the default. After changing this setting from "80" to "10", the problem was solved.

Best Regards,

Hussain Al Sayed

If you find this information useful, please award points for "correct" or "helpful".

chakrit
Contributor

ShanVMLand, your answer is definitely helpful but if you're going to copy somebody else's answer, yo...

see: http://www.experts-exchange.com/Software/VMWare/Q_24060124.html posted by Evan. 31/01/09 06:47 AM

Sorry, it really annoys me to see somebody do that.

On a separate note: yes, I can add an RDM to 2 Red Hat 5.5 cluster nodes based on Evan's note. I'll crack on with setting up Cluster Suite for testing.
