VMware Cloud Community
larden
Contributor

Question re RDM vs VMDK for high performance

We are installing a mail archiving program onto a virtual machine running Windows Server 2003 R2 SP2. The indexes, which will grow to approximately 500-600GB, will be on a volume on the ESX guest. This volume will be 600GB, and performance is apparently VERY important.

Do I present this as a raw 600GB LUN to the Windows OS, or do I create a 600GB VMFS LUN and create the volume as a VMDK? Since this is an index, I expect a lot of sequential reads/writes.

Performance again being the most important factor, does it matter?

Thanks

VMware Rocks!
1 Solution

Accepted Solutions
Eddy
Commander

I run an "all RDM" shop. You won't gain performance, but you will gain flexibility. When using RDMs you're not touching the VMFS file system, so you can present that RDM to any server; you can snap it and present it to a laptop via iSCSI for testing, defrags, etc.

You gain flexibility, not speed...

Go Virtual!

15 Replies
esiebert7625
Immortal

Performance between an RDM and a regular virtual disk will be about the same. See this guide for more info on RDMs.

http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf

Storage performance thread...

http://www.vmware.com/community/thread.jspa?threadID=73745

steve31783
Enthusiast

As stated above, there are no performance gains using RDM over VMFS, but RDM is required for SAN tool access.

If using RDMs for a cluster, note that clustered (MSCS) virtual machines should not be part of VMware (DRS/HA) clusters.

Please feel free to assign points to complete and/or helpful answers. Let me know if you have any other questions.

kharbin
Commander

What performance numbers are you trying to achieve? What throughput in MB/sec? IOs/sec?

What are your MB/sec and IOs/sec when using a VMDK vs. a raw LUN?

Do you have guaranteed SLA/QoS requirements to meet?

How will this VM be backed up? How easy is it to restore? What is the return-to-operation time for this VM if there is an outage?

Without an understanding of the capabilities of the environment, your expected performance levels, and your operational parameters, we really can't answer "does it matter?" More info is needed.

larden
Contributor

I don't have the answers to all your questions, but really my question is: what is the performance difference on the same LUN configured as a VMDK versus an RDM? About the same? Exactly the same?

Thanks

VMware Rocks!
esiebert7625
Immortal

The physical architecture (disk, controller, path) is the same whether you use VMFS or RDM; you are not going to see any performance benefit from RDMs.

acr
Champion

I did some testing recently with a couple of VMs comparing RDM and VMDK, although it was for iSCSI. The results were the same.

bertdb
Virtuoso

True, there is no performance difference, as long as you are talking about a non-fragmented VMFS.

With fragmentation, VMFS can be slower, because the block mapping becomes more complicated. Without fragmentation, the block mapping is just adding an offset (the start of the VMDK).
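The offset-versus-lookup distinction can be illustrated with a toy model. The block numbers and extent table below are made up for illustration; this is not VMFS's actual on-disk data structure:

```python
# Toy model of the block-address translation described above.
# Unfragmented VMDK: guest block -> VMFS block is a single fixed offset.
# Fragmented VMDK: each lookup has to search an extent table.

VMDK_START = 1000  # hypothetical VMFS block where the flat VMDK begins

def map_unfragmented(guest_block):
    """Contiguous VMDK: translation is just adding an offset."""
    return VMDK_START + guest_block

# Hypothetical fragmented layout: (guest_start, length, vmfs_start) extents.
EXTENTS = [(0, 50, 1000), (50, 30, 2400), (80, 20, 1700)]

def map_fragmented(guest_block):
    """Fragmented VMDK: walk the extent table to find the right piece."""
    for guest_start, length, vmfs_start in EXTENTS:
        if guest_start <= guest_block < guest_start + length:
            return vmfs_start + (guest_block - guest_start)
    raise ValueError("block outside the virtual disk")

print(map_unfragmented(10))  # 1010
print(map_fragmented(60))    # 2410
```

In the contiguous case every translation is one addition; in the fragmented case each translation searches an extent table, which is the extra work fragmentation introduces.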

fordian
Hot Shot

Hello,

Go to this link (section 4) for disk performance:

download3.vmware.com/vmworld/2006/labs2006/vmworld.06.lab04-PERFORMANCE-MANUAL.pdf

Thank you

Dominic

jhanekom
Virtuoso

Agree fully on the performance front: you will not be able to see any difference in performance.

Base your decision on other factors, such as those suggested by kharbin.

If this were my setup, I would configure it as an RDM in virtual mode. Not for performance reasons, but because it doesn't make sense to me (in most cases, anyway) to place a single 600GB VMDK on a 600GB VMFS volume. If this were a 50GB VMDK, my recommendation would be different.

You don't state whether this volume will be on the SAN, but in general taking this approach also makes it a great deal easier to "fall back" to a physical solution in the event that you have to prove it's not the VMware environment causing performance problems, a common claim among application owners who are not comfortable with the technology.

oschistad
Enthusiast

Actually, there is a difference between VMDKs and raw device mappings for high random I/O. Admittedly it is not large, and it is dwarfed by the effect of the storage system itself, but it is real: 10-20% higher latency.

For sequential access, however (file copying, etc.), there is no difference between any of the disk modes; the medium itself is your limiting factor.

I know this because I recently performed a series of benchmarks comparing a physical server to a virtual machine and its three disk modes, using the same physical LUN in all tests. The physical server was able to generate about 20% more I/O ops per second in a 100% random test, but this gain was negligible compared to the increase in performance from adding more spindles, even for a VM. My conclusion was that physical vs. virtual is beside the point: work on the storage system itself if you need high-performance random I/O.
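That conclusion (spindle count matters far more than the virtualization penalty) can be sketched with a crude IOPS model. The per-spindle figure and the overhead factor are illustrative assumptions, not numbers measured in these benchmarks:

```python
# Crude random-IOPS model: an array's random IOPS scale roughly with
# spindle count, while a fixed latency overhead (like the ~20% seen for a
# VM vs. a physical server) scales the total down. Numbers are made up.

PER_SPINDLE_IOPS = 150  # assumed random IOPS per disk, illustrative only

def achievable_iops(spindles, latency_overhead=0.0):
    """IOPS after applying a fractional latency overhead (0.20 = +20%)."""
    base = spindles * PER_SPINDLE_IOPS
    return base / (1.0 + latency_overhead)

physical_8 = achievable_iops(8)          # 8 spindles, no overhead
virtual_8 = achievable_iops(8, 0.20)     # same spindles, +20% latency
virtual_16 = achievable_iops(16, 0.20)   # doubled spindles, +20% latency

# Doubling the spindle count more than recovers the virtualization penalty.
print(physical_8, virtual_8, virtual_16)
```

Even with the 20% penalty applied, the 16-spindle virtual configuration comfortably beats the 8-spindle physical one, which is the point being made above.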

jhanekom
Virtuoso

Any chance the random IO test was the first test you conducted on that VM? Always run the first test at least twice to get proper results.

Keep in mind that the default disk mode when creating a VMDK is "lazy zeroed", meaning that ESX will write zeroes to disk the first time you read from or write to a sector.

This reduces disk creation time, but hurts performance the first time a section of the disk is used.

(The alternative is to use "eager zeroed" disks, but this is only possible from the service console using vmkfstools.)
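A toy simulation of that first-touch penalty (the costs are arbitrary units chosen for illustration, not ESX measurements):

```python
# Toy model of lazy-zeroed allocation: the first touch of a block pays an
# extra zeroing cost, so the first benchmark pass over a region is slower
# than every later pass. Costs are arbitrary units, not real measurements.

IO_COST = 1.0    # assumed cost of a normal block I/O
ZERO_COST = 1.5  # assumed extra cost of zeroing a block on first touch

zeroed = set()   # blocks already zeroed by a previous access

def touch(block):
    """Return the cost of accessing one block of a lazy-zeroed disk."""
    cost = IO_COST
    if block not in zeroed:  # first touch: block must be zeroed first
        cost += ZERO_COST
        zeroed.add(block)
    return cost

first_pass = sum(touch(b) for b in range(100))
second_pass = sum(touch(b) for b in range(100))
print(first_pass, second_pass)  # the first pass is markedly costlier
```

This is why running the benchmark at least twice matters: the second pass measures steady-state behavior instead of the one-time zeroing work.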

oschistad
Enthusiast

The test suite used was IOmeter, with the config posted in the thread referenced earlier.

The test starts by preallocating a file of a size equal to your test (4GB in my case). In other words, the sectors being tested were written to before the test started. Anyhow, there are two tests that benchmark high random I/O, and neither of them was run first.

Finally, it makes perfect sense that a VMDK would have higher latency on disk access than an RDM, due to the higher degree of indirection involved. This is the same as raw versus formatted disks for databases: you get the best performance with no file system, but it is painful for maintenance, which is why most DBAs prefer to use a disk with a file system on it. And anyway, the differences between the methods are negligible compared to the impact of spindle count, write cache size, etc. on your storage system.

Alan_McLachlan
Contributor

Actually, backup is more important than performance. And in a SAN environment, configuring the data LUNs as RDMs formatted with NTFS provides much better integration with backup software SAN agents for online backups.

Basically, you can snapshot that NTFS LUN natively on the SAN, then mount the snapshot directly on your backup media server and back it up with no I/O load on the VM and no open-file issues. You can't do that with VMFS and VMDKs (at least not right now).

I tend to recommend VMDKs to hold the stub mapping files for RDMs so they are VMotion compatible, and using the native SAN features to manage golden copies, backup snapshots, etc.

It also makes it very easy to snap off copies of production data to mount on dev or test hosts, regardless of whether those hosts are VMware or native. VMFS snapshots are limited to being mounted on other guest VMs on the same VMware server, and they consume heavy local I/O for copy-on-write, whereas SAN-managed snapshots consume effectively zero I/O on the hosts or guest OSes. This only widens the practical gap between RDMs and VMFS.

This approach also makes P2V and V2P quite simple and safe, since the actual application data never gets touched.

lfchin
Enthusiast

I would suggest going with RDM if you need performance. The only sacrifice is that you have to manage the VM much like a physical server. Other than that, RDM really outperforms VMDK.

Craig http://malaysiavm.com