VMFS vs RDM (vs VMDirectPath and other solutions)


    VM disk types

    There are different solutions for implementing a VM disk (a pyVmomi sketch of the vmdk and RDM cases follows the list):

    • (native) vmdk over VMFS datastore (see also the different VMDK virtual disk types)


    • (native) vmdk over NFS datastore (the vmdk format is usually thin)


    • virtual RDM (not for NFS datastore)


    • physical RDM (not for NFS datastore)


    • NPIV RDM (only for FC storage)


    • (native) guest iSCSI


    • direct with VMDirectPath I/O (see also VMware VMDirectPath I/O)
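    All of the vmdk and RDM cases above are simply different disk "backings" in the vSphere API. Here is a minimal pyVmomi sketch, not the only way to do it: it assumes an already connected session, an existing vim.VirtualMachine object called vm with a SCSI controller at key 1000, and a hypothetical LUN path:

        from pyVmomi import vim

        def add_disk_spec(backing, controller_key=1000, unit_number=1,
                          capacity_kb=None, new_file=False):
            # Wrap a disk backing in an "add device" change spec.
            spec = vim.vm.device.VirtualDeviceSpec()
            spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
            if new_file:
                spec.fileOperation = \
                    vim.vm.device.VirtualDeviceSpec.FileOperation.create
            disk = vim.vm.device.VirtualDisk()
            disk.controllerKey = controller_key   # existing SCSI controller
            disk.unitNumber = unit_number         # unit 7 is reserved
            if capacity_kb:
                disk.capacityInKB = capacity_kb
            disk.backing = backing
            spec.device = disk
            return spec

        # (native) vmdk on a VMFS or NFS datastore, thin provisioned
        flat = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        flat.fileName = ''            # empty: let the server pick a path
        flat.diskMode = 'persistent'
        flat.thinProvisioned = True

        # virtual RDM: the LUN still gets VMFS features (snapshots, clones)
        vrdm = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
        vrdm.deviceName = '/vmfs/devices/disks/naa.XXXX'  # hypothetical LUN
        vrdm.compatibilityMode = 'virtualMode'
        vrdm.diskMode = 'persistent'

        # physical RDM: SCSI commands pass through to the LUN, no snapshots
        prdm = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
        prdm.deviceName = '/vmfs/devices/disks/naa.XXXX'
        prdm.compatibilityMode = 'physicalMode'

        # attach one of the backings with a reconfigure task
        config = vim.vm.ConfigSpec(
            deviceChange=[add_disk_spec(vrdm, new_file=True)])
        task = vm.ReconfigVM_Task(spec=config)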


     

    How to choose virtual disk for a VM, iSCSI initiator in ESX, iSCSI initiator in OS or RDM


    Performance difference

    The difference is minimal and does not justify the disadvantages of the physical RDM solution.

    http://www.vmware.com/files/pdf/vmfs_rdm_perf.pdf

    http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf

    http://www.virtualization.info/2008/02/benchmarks-vmware-vmfs-vs-raw-disk.html

    http://it20.info/blogs/main/archive/2007/06/17/25.aspx

     

    Virtual Compatibility Mode Compared to Physical Compatibility Mode:

    http://pubs.vmware.com/vi35/server_config/wwhelp/wwhimpl/common/html/wwhelp.htm?context=server_config&file=sc_adv_storage.12.6.html

     

    Note that to gain more performance, a more interesting option is the PVSCSI (paravirtual SCSI) adapter:

    http://blogs.vmware.com/performance/2010/02/highperformance-pvsci-storage-adapter-can-reduce-cpio-by-1030.html
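    For example, a minimal pyVmomi sketch (same assumptions as the sketch above) that adds a PVSCSI controller to a VM; the guest OS needs the pvscsi driver (from VMware Tools, or built into recent Linux kernels):

        from pyVmomi import vim

        ctrl = vim.vm.device.ParaVirtualSCSIController()
        ctrl.key = -1               # temporary key, assigned by the server
        ctrl.busNumber = 1          # hypothetical: a second SCSI bus
        ctrl.sharedBus = 'noSharing'

        ctrl_spec = vim.vm.device.VirtualDeviceSpec()
        ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        ctrl_spec.device = ctrl

        # the high-I/O disks can then be attached to this controller
        task = vm.ReconfigVM_Task(
            spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec]))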


    VMDK disk vs RDM disk

    http://malaysiavm.com/blog/vmware-vmfs-vs-rdm-raw-device-mapping/

    VMFS vs. Physical Disk performance

    vmfs or raw for vms? which to choose

    Question re RDM vs VMDK for high performance

    RDM vs. VMFS...again...

    vmdk vs RDM for large disk

    VMFS vs raw data mappings in windows 2008.

    Difference between a VMDK disk and Virtual RDM disk

     

    Usually the "keep it simple" approach is the best choice... and vmdk over VMFS is very simple...

     

    But anything that adds an "additional layer of abstraction" must also create some (however small) overhead. For this reason, when you have mid-range to high-range I/O on a disk, an RDM will perform a little better than a vmdk inside a VMFS.

     

    For any database over a reasonable size it could be a good idea to create the database drive as an RDM: first to remove any I/O hit, and second to gain the flexibility to present that RDM LUN to a physical box without the need to 'convert' or V2P it.

     

    Remember one thing that a lot of people forget... the VMFS mounts from your SAN reach the hosts over the SAME fibre infrastructure as the RDMs (same switch, same HBA, etc.). If one VMFS-hosted VM is hogging all the FC bandwidth, then having your database on an RDM is not going to help matters, is it?


    VMDK vs VMDirectPath I/O

    http://kb.vmware.com/kb/1010789 - Configuring VMDirectPath I/O pass-through devices on an ESX host

    VMware VMDirectPath I/O

     

    Currently, VMDirectPath I/O is the best solution for VMs with very high I/O, because performance is close to "native" mode.

    But you lose a lot of the advantages of virtualization: no VMotion, no backup, no cold migration between ESX hosts, ...
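    As a minimal pyVmomi sketch (all identifiers hypothetical; the PCI device must already be enabled for passthrough on the host, and the VM needs its full memory reserved):

        from pyVmomi import vim

        backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo()
        backing.id = '0000:0b:00.0'   # hypothetical host PCI address
        backing.deviceId = '1528'     # hypothetical PCI device id
        backing.systemId = system_id  # assumption: taken from the host's
                                      # ConfigTarget pciPassthrough list
        backing.vendorId = 0x15ad     # hypothetical vendor id

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.device = vim.vm.device.VirtualPCIPassthrough(backing=backing)

        config = vim.vm.ConfigSpec(deviceChange=[spec],
                                   memoryReservationLockedToMax=True)
        task = vm.ReconfigVM_Task(spec=config)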


    VMDK vs native iSCSI

    Re: iSCSI Performance

    Creating VMFS on multiple internal SCSI disks

     

    This solution can be very simple and "natural" in an iSCSI environment.

    But remember that a VM with a guest iSCSI initiator cannot be protected with a backup solution for virtual environments, because VCB, VDR, and similar programs cannot "see" the iSCSI disks...