VMware Cloud Community
REALM
Contributor

Please clarify the downsides of RDM, and how do I change a VMFS LUN to an RDM?

I have read through many threads and still don't fully understand the downsides of RDM.

#1 So technically the VM just has a pointer to the SAN LUN with an RDM, correct? And I could easily present that RDM disk to any server and see the data, and there may be a slight performance advantage with RDM.

If that's the case, what is the advantage of VMFS over RDM with a SAN?

#2 Can a SAN LUN be broken up into multiple RDM disks going to different VMs?

#3 If I have an unused 600GB VMFS SAN LUN and want to make it an RDM LUN, how do I do that? Do I have to fdisk and unformat it first? Then remove it from storage and re-add it?

Using ESX 3.5 Update 2 and vCenter 2.5 Update 2.

Thank you.

Accepted Solution
vmroyale
Immortal

#1 So technically the VM just has a pointer to the SAN LUN with an RDM, correct? And I could easily present that RDM disk to any server and see the data, and there may be a slight performance advantage with RDM.

Correct.

If that's the case, what is the advantage of VMFS over RDM with a SAN?

The advantage of VMFS is that you can put many virtual disks on one volume. It's a one-to-many relationship, where an RDM is a one-to-one relationship of one LUN per virtual disk. There is currently a maximum of 256 LUNs per ESX host, so if you had a lot of VMs you could run into that limit quickly; at three RDMs per VM, fewer than 90 VMs would exhaust it. Another consideration is that the more RDMs you add, the more you complicate the management of your storage environment. It's much easier to manage 3 or 4 LUNs than 30-40 of them. Typically RDMs are great when you need MSCS, want to leverage SAN backup options, or have LUNs that may need to be moved back to a physical server at some point.
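To make the LUN-limit arithmetic concrete, here is a small back-of-the-envelope sketch (plain Python, not a VMware tool; the 4-LUN VMFS reservation is just an illustrative assumption):

```python
# Back-of-the-envelope LUN budget for an ESX 3.x host.
# ESX 3.x supports at most 256 LUNs per host; every RDM consumes one.
MAX_LUNS_PER_HOST = 256

def max_vms(rdms_per_vm, vmfs_luns=4):
    """VMs a host can hold if each VM gets `rdms_per_vm` RDM LUNs,
    after reserving `vmfs_luns` LUNs for shared VMFS datastores."""
    return (MAX_LUNS_PER_HOST - vmfs_luns) // rdms_per_vm

print(max_vms(3))   # 3 RDMs per VM -> 84 VMs before hitting the limit
print(max_vms(1))   # 1 RDM per VM  -> 252 VMs
```

With VMFS, by contrast, dozens of virtual disks can share a handful of LUNs, so the 256-LUN ceiling is rarely a concern.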

#2 Can a SAN LUN be broken up into multiple RDM disks going to different VMs?

An RDM maps an entire LUN to a VM, so you can't split a single LUN into multiple RDMs. I think you would want to break the LUN up into separate LUNs on the array anyway. It's much easier to manage those one-to-one relationships.

#3 If I have an unused 600GB VMFS SAN LUN and want to make it an RDM LUN, how do I do that? Do I have to fdisk and unformat it first? Then remove it from storage and re-add it?

I'm not exactly sure, but I would think you could remove the datastore from the ESX host and then add the LUN as an RDM to a virtual machine. fdisk should be able to clear the old partition table. But back to point 2: it might make more management sense to carve that 600GB up into smaller LUNs and distribute them as needed.
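For reference, a rough sketch of how the RDM mapping could be created from the ESX 3.5 service console. The vmhba device path and the datastore/VM names are placeholders, so verify them against your own environment before running anything:

```shell
# Rough sketch -- device path, datastore, and VM names are placeholders.

# 1. List SAN LUN device paths (note the vmhbaC:T:L:P identifier):
esxcfg-vmhbadevs -m

# 2. Create an RDM mapping file inside an existing VMFS datastore.
#    -r = virtual compatibility mode (VMware snapshots work)
#    -z = physical compatibility mode (raw SCSI passthrough)
vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk

# 3. In the VI Client, add the new .vmdk to the VM as an existing disk.
```

The mapping file itself is tiny; it lives on a VMFS datastore and simply points the VM at the raw LUN.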

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com

mike_laspina
Champion

Hi

The upside of VMFS-based storage is hardware independence and ease of relocation.

Here is an example of what's possible using VMFS-based storage.

http://blog.laspina.ca/

vExpert 2009