Performance between an RDM and a regular virtual disk will be about the same. See this guide for more info on RDMs.
Storage performance thread...
As stated above, there are no performance gains using RDM over VMFS, but it is required for SAN tool access.
If using RDMs for a cluster, the clustered (MSCS) virtual machines should not be part of VMware DRS/HA clusters.
Please feel free to assign points to complete and/or helpful answers. Let me know if you have any other questions.
What performance numbers are you trying to achieve? What throughput in MB/sec? IOs/sec?
What are your MB/sec and IOs/sec figures when using a VMDK vs. a raw LUN?
Do you have guaranteed SLA/QOS requirements to meet?
How will this VM be backed up? How easy to restore? What is the return to operation time for this VM if there is an outage?
Without an understanding of the capabilities of the environment, your expected performance levels, and your operational parameters, I really can't answer "does it matter?" Need more info.
I don't have the answers to all your questions - but really my question is simply: what is the performance difference for the same LUN configured as a VMDK versus an RDM? About the same? Exactly the same?
The physical architecture (disk, controller, path) is the same whether you use VMFS or RDM, so you are not going to see any performance benefit from RDMs.
I did some testing recently with a couple of VMs comparing RDM and VMDK, although it was over iSCSI. The results were the same.
True, there is no performance difference, provided you are talking about a non-fragmented VMFS.
With fragmentation, VMFS can be slower, because the block mapping becomes more complex. Without fragmentation, the block mapping is just adding an offset (the start of the VMDK).
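To illustrate the point above (a hypothetical sketch with made-up names, not actual VMFS internals): for an unfragmented VMDK the guest-block-to-LUN translation is a single addition, whereas a fragmented file needs an extent lookup.

```python
# Toy sketch of block address translation. Illustrative only -
# these functions and data structures are not VMFS code.

def map_unfragmented(guest_block, vmdk_start):
    """Unfragmented VMDK: translation is one offset addition."""
    return vmdk_start + guest_block

def map_fragmented(guest_block, extents):
    """Fragmented VMDK: walk an extent list to find the right run.
    extents: list of (guest_start, lun_start, length) tuples."""
    for guest_start, lun_start, length in extents:
        if guest_start <= guest_block < guest_start + length:
            return lun_start + (guest_block - guest_start)
    raise ValueError("block not mapped")

# One contiguous file starting at LUN block 1000:
print(map_unfragmented(5, 1000))           # 1005
# The same file split into two fragments:
extents = [(0, 1000, 4), (4, 9000, 4)]
print(map_fragmented(5, extents))          # 9001
```

The extra lookup per access is what fragmentation costs you; the arithmetic itself is trivial either way.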
Go to this link (section 4) for disk performance.
Agree fully on the performance front - you will not be able to see any difference in performance.
Base your decision on other factors, such as those suggested by kharbin.
If this were my setup, I would configure it as an RDM in virtual mode. Not for performance reasons, but because it doesn't make sense to me (in most cases anyway) to place a single 600GB VMDK on a 600GB VMFS volume. If this were a 50GB VMDK, my recommendation would be different.
You don't state whether this volume will be on the SAN or not, but in general taking this approach also makes it a great deal easier to "fall back" to a physical solution in the event that you have to prove it's not the VMware environment causing performance problems - a common claim among application owners who are not comfortable with the technology.
Actually, there is indeed a difference between VMDK and raw device mappings for high random I/O. Admittedly it is not large, and it is dwarfed by the effect of the storage system itself, but it is real - 10-20% higher latency.
For sequential access however (file copying etc) there is no difference between any of the disk modes - the medium itself is your limiting factor.
I know this because I recently performed a series of benchmarks comparing a physical server to a virtual machine in its three disk modes, using the same physical LUN in all tests. The physical server was able to generate about 20% more IO ops per second in a 100% random test, but this performance gain was negligible compared to the increase in performance from adding more spindles, even for a VM. My conclusion was that physical vs. virtual server is beside the point - work on the storage system itself if you need high-performance random IO.
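For anyone wanting a crude, at-home version of this comparison (a toy sketch, nothing like Iometer - and note the OS page cache will largely mask the difference on a small cached file): issue the same number of block reads sequentially and at random offsets against one file and compare elapsed times.

```python
import os
import random
import tempfile
import time

BLOCK = 4096     # read size per IO
BLOCKS = 256     # file is small so the demo runs quickly
READS = 1000     # same IO count for both patterns

# Build a throwaway test file filled with random data.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(BLOCK * BLOCKS))
tmp.close()

def run(offsets):
    """Time READS block-sized reads at the given byte offsets."""
    start = time.perf_counter()
    with open(tmp.name, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

# Sequential pattern (wrapping around the file) vs. random pattern.
seq = run((i % BLOCKS) * BLOCK for i in range(READS))
rnd = run(random.randrange(BLOCKS) * BLOCK for _ in range(READS))
print(f"sequential: {seq:.4f}s  random: {rnd:.4f}s")

os.unlink(tmp.name)
```

On real spindles with a file far larger than RAM, the random pattern pays seek and rotational latency on every IO, which is where the VMDK-vs-RDM indirection overhead shows up; sequential access is limited by the medium either way.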
Any chance the random IO test was the first test you conducted on that VM? Always run the first test at least twice to get proper results.
Keep in mind that the default disk mode when creating a VMDK is "lazy zeroed", meaning that ESX writes zeroes to a block the first time that block is written.
This reduces disk creation time, but impacts performance the first time a section of the disk is used.
(The alternative is to use "eager zeroed" disks, but this is only possible from the service console using vmkfstools.)
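The trade-off can be mimicked with ordinary files (a rough analogy only - this is not what ESX does internally): a sparse file is "created" instantly but its blocks are only backed when first written, while a fully zero-filled file pays the allocation cost up front.

```python
import os
import tempfile

SIZE = 1 << 20  # 1 MiB

# "Lazy zeroed" analogue: a sparse file - created instantly,
# blocks not yet backed by storage.
lazy = tempfile.NamedTemporaryFile(delete=False)
lazy.truncate(SIZE)
lazy.close()

# "Eager zeroed" analogue: every block written with zeroes up front,
# so creation is slower but first use pays no allocation cost.
eager = tempfile.NamedTemporaryFile(delete=False)
eager.write(b"\0" * SIZE)
eager.close()

lazy_blocks = os.stat(lazy.name).st_blocks    # few or no blocks allocated
eager_blocks = os.stat(eager.name).st_blocks  # roughly SIZE / 512 blocks
print(lazy_blocks, eager_blocks)

os.unlink(lazy.name)
os.unlink(eager.name)
```

Same logical size, very different amounts of allocated storage - which is why the first write to each region of a lazy-zeroed disk is slower than subsequent ones.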
The test suite used was IOmeter with the config posted in the thread referenced earlier in here.
The test starts by preallocating a file of a size equal to your test - 4GB in my case. In other words, the sectors being tested were written to _before_ the test started. Anyhow, there are _two_ tests that benchmark high random IO, and neither of them was first out.
Finally, it makes perfect sense that a VMDK would have higher latency on disk access than an RDM, due to the higher degree of indirection involved. This is the same as raw versus formatted disks for databases - you get the best performance with no file system, but it's painful for maintenance purposes, which is why most DBAs prefer to use a disk with a file system on it. And anyway, the differences between the methods are negligible compared to the impact of spindle count, write cache size, etc. on your storage system.
I run an "all RDM" shop. You won't gain performance, but you will gain flexibility. With RDMs you're not touching the VMFS file system, so you can present that RDM to any server; you can snap it and present it to a laptop via iSCSI for testing, defrags, etc.
You gain flexibility, not speed...
Actually, backup is more important than performance. And in a SAN environment, configuring the data LUNs as RDMs formatted with NTFS provides much better integration with backup software SAN agents for online backups.
Basically, you can snapshot that NTFS LUN natively on the SAN, then mount the snapshot version of the LUN directly on your backup media server and back it up without putting I/O load on the VM and with no open-file issues. You can't do that with VMFS and VMDKs (at least not right now).
I tend to recommend VMDKs to hold stubs for RDMs so they are VMotion compatible, and use the native SAN features to manage golden copies, backup snapshots, etc.
Also, it makes it very easy to snap off copies of production data to mount on dev or test hosts, regardless of whether those hosts are VMware or native. VMFS snapshots are limited to being mounted on other guest VMs on the same VMware server, and consume heavy local I/O for copy-on-write, whereas SAN-managed snapshots consume effectively zero I/O on the hosts or the guest OSes. This only serves to increase the performance gap between RDMs and VMFS.
This approach also makes p2v and v2p quite simple and safe, since the actual application data never gets touched.