VMware Cloud Community
mfiedler
Contributor

Physical RDM - a natural disaster?

We have recently needed to move from using "iSCSI initiator inside a guest" to "pRDM (physical mode) over FC, inside a guest" in order to present SAN data to a large number of guests.

We are not using MS Clustering; we need pRDM because we have to use our SAN vendor's snapshotting capability.

Documentation claims that the real difference between pRDM and vRDM is that vRDM allows VMware snapshots of the data, and we do not need that functionality.

Example of our scenario:

A given volume, called prdmtest1, exists on the SAN and is exported via LUN ID 240 to multiple VM hosts (vmHost1, vmHost2). It is then presented as a physical-mode RDM to vmGuest1 as an additional SCSI device.
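For reference, the mapping file was created along these lines (the naa. identifier, datastore, and paths below are placeholders, not our actual values):

# Create a physical compatibility (passthrough) RDM pointer file for the raw LUN;
# -z maps the device in physical mode (-r would create a virtual-mode RDM instead):
vmkfstools -z /vmfs/devices/disks/naa.60a98000486e2f34 \
    /vmfs/volumes/datastore1/vmGuest1/prdmtest1-rdmp.vmdk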

Whether vmGuest1 is powered on or off on vmHost1, an attempt to migrate it to vmHost2 is blocked with the message:

Unable to migrate: Virtual Disk is a mapped direct access LUN that is not accessible.

I have read through other posts on this subject, and most offer alternate workarounds, such as removing the LUN access or removing other elements, but these do not solve the problem of VMotion.

7 Replies
formulator
Enthusiast

If the LUN is already presented to the target host, rescan the LUNs on the target ESX host and make sure the pRDM LUN is accessible by the target host with the same LUN ID.
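From the service console that would be something like this (vmhba names are examples; rescan every FC HBA; on ESX 3.x the runtime name looks like vmhba1:0:240, on 4.x like vmhba1:C0:T0:L240):

# Rescan each FC HBA for new LUNs:
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2

# List all paths and check that the LUN number matches the source host:
esxcfg-mpath -l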

mfiedler
Contributor

Rescans have been done, liberally, and many times over.

Examining Host -> Configuration -> Storage Adapters -> Devices on both vmHosts shows the volume presented with the same LUN ID on both.
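For completeness, the service-console equivalent of that check, if anyone wants to compare outputs (ESX 4.x syntax; on 3.x, esxcfg-vmhbadevs gives a similar list):

# One line per device, including the device identifier, size, and display name:
esxcfg-scsidevs -c

# Full detail per device if the compact view isn't enough:
esxcfg-scsidevs -l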

Still no go.

runclear
Expert

Do you see the pRDM disk on the ESX node that you are attempting to migrate to? I know you said you "presented" the pRDM to the node, but you didn't confirm it was actually there...

We had an issue (not with RDMs, just basic FC storage) where we added some new LUNs from another subsystem, and no matter how many times we rescanned, the LUNs would NOT show up; it took a reboot of the node for the new LUNs to appear...

-- | VCP[3] | VCP[4] | What the f* is the cloud?!
Texiwill
Leadership

Hello,

It sounds like your LUN is not presented and zoned properly to each ESX host. When it comes to FC SAN, ESX is pretty simple: it either works or it does not, and when it does not, it is either a zoning or a presentation issue. Also, the LUN ID has to be low enough to be seen by ESX; otherwise you have to modify the Disk.MaxLUN value.
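That last value is easy to verify from the service console (ESX 3.x/4.x syntax):

# Show the current LUN scan ceiling; the default of 256 means LUN IDs 0-255 are scanned:
esxcfg-advcfg -g /Disk/MaxLUN

# Restore the default if it has been lowered, then rescan:
esxcfg-advcfg -s 256 /Disk/MaxLUN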


Best regards,
Edward L. Haletky
VMware Communities User Moderator, VMware vExpert 2009
Author of 'VMware vSphere(TM) and Virtual Infrastructure Security' and 'VMware ESX Server in the Enterprise'
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
mfiedler
Contributor

Yes, the LUN is exported from the SAN to every ESX host, each ESX host has been rescanned, and the LUN appears correctly on each.

mfiedler
Contributor

It's the simplicity that's not working.

Zoning is identical, exports are identical; we've gone through it with a fine-tooth comb.

The Disk.MaxLUN value is at its default of 256, and our LUN IDs are below that number.

mcowger
Immortal

Does your array have SCSI EVPD pages enabled? Without these, it often won't work...
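For what it's worth, the RDM is matched between hosts by the unique device identifier the array returns in those VPD pages (the device identification page, 0x83), so if the array does not export one, the destination host cannot tie the mapping file back to the LUN. A quick check, assuming ESX 4.x (on 3.x look for vml. names instead of naa.):

# Both hosts should list the LUN under the same naa./eui. identifier:
ls -l /vmfs/devices/disks/

# The detailed device view also shows the identifier and display name:
esxcfg-scsidevs -l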

--Matt | VCP, vExpert, Unix Geek | VCDX #52 | blog.cowger.us