VMware Cloud Community
macpiano
Contributor

Is it possible to do RDM to my Dell EqualLogic SAN?

I have this connected now by iSCSI but was wondering if I could do a Raw Device Mapping. The option is greyed out, and the server is off, which I understand is necessary for it to work.

Must be missing something.

Accepted Solution
Gkeerthy
Expert

Yes, it is possible:

1. Configure iSCSI for the Dell storage.
2. Create a LUN and present it to the ESX host.
3. Do a storage rescan.
4. Do not create a VMFS volume on the new LUN.
5. Go to the VM and add the RDM.

http://www.vmadmin.co.uk/vmware/35-esxserver/58-rdmvm
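
For reference, here is a rough command-line sketch of those steps from the ESX host side, assuming ESXi 5.x-style esxcli syntax (consistent with the commands later in this thread). The adapter names, device IDs, and paths below are placeholders, not values from this thread; the linked article covers the same procedure through the vSphere Client.

Rescan for the newly presented LUN:

# esxcli storage core adapter rescan --all

Confirm the EqualLogic LUN is visible, but do not format it with VMFS:

# esxcli storage core device list | grep -i EQLOGIC

Then edit the VM in the vSphere Client and add a hard disk of type Raw Device Mappings, or create the mapping file by hand (-z is physical compatibility mode, -r is virtual compatibility mode; both the naa ID and the destination path are placeholders):

# vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk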

Please don't forget to award points for 'Correct' or 'Helpful' if you found the comment useful. (vExpert, VCP-Cloud, VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)

3 Replies
macpiano
Contributor

Thanks, I have it up and running, all 10 TB.

dwilliam62
Enthusiast

If you are not using the Dell MEM v1.1.1 software to improve MPIO performance, you need to set the Path Selection Policy to VMware Round Robin and change the IOs per path to 3 (the default is 1000, which doesn't balance IO across all paths effectively).

This script will change all EQL volumes to Round Robin and set the IOPS to 3. You need to run it on ALL nodes and re-run it when you add new volumes.

# esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL
# for i in `esxcli storage nmp device list | grep EQLOGIC | awk '{print $7}' | sed 's/(//g' | sed 's/)//g'` ; do esxcli storage nmp device set -d $i --psp=VMW_PSP_RR ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops ; done

After you run the script, you should verify that the changes took effect:

# esxcli storage nmp device list
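
To spot-check a single volume, something like the following should show VMW_PSP_RR as the Path Selection Policy and an IOPS value of 3 (the naa ID below is a placeholder for one of your EQL device IDs):

# esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx
# esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxxxxxxxxxx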

This blog post, which has info agreed upon by VMware, Dell/EQL, etc., goes into more detail:

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

You should also disable Delayed ACK and Large Receive Offload (LRO).

How to Disable Delayed ACK

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100259...
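
If you prefer the command line over the vSphere Client procedure in that KB, the software iSCSI adapter parameters can usually be read and set with esxcli on ESXi 5.x (vmhba## is a placeholder for your iSCSI adapter name; treat this as a sketch and double-check the KB for the exact steps for your version):

# esxcli iscsi adapter list
# esxcli iscsi adapter param get -A vmhba##
# esxcli iscsi adapter param set -A vmhba## -k DelayedAck -v false

The change may only apply to new iSCSI sessions, so a rescan or reboot may be needed before it takes effect.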

HOWTO: Disable Large Receive Offload (LRO) in ESX v4/v5

On the ESX host, the following command will query the current LRO value.

# esxcfg-advcfg -g /Net/TcpipDefLROEnabled

To set the LRO value to zero (disabled):

# esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

NOTE: a server reboot is required.


Info on changing LRO in the guest network:

http://docwiki.cisco.com/wiki/Disable_LRO
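
For a Linux guest, checking and disabling LRO on the virtual NIC typically looks like this (eth0 is a placeholder interface name; the setting does not persist across reboots on its own, and Windows guests use the NIC driver's advanced settings instead):

# ethtool -k eth0
# ethtool -K eth0 lro off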

Also, you should add a virtual SCSI controller for each RDM (up to four max per VM). This will provide much better IO; otherwise, a single SCSI controller can only access one VMDK/RDM at a time.

Regards,

Don
