VMware Cloud Community
zenomorph
Contributor

VMFS vs. Physical Disk performance

I've read a lot about the performance of VMFS in comparison to RDM and to physical disk.

However, what I find is that a lot of the time when the comparison is done, e.g. for database performance, it is based on something like a SAN with a native physical Win2k3 machine running SQL versus ESX with VMFS running Win2k3 and SQL. Most of the time when they compare, I find they configure the physical server's local disks for optimum usage, that is, on the physical machine they may mirror the logs and RAID5 the database partition, etc., and for the VM being compared they lay out the VMFS volumes similarly, defining a mirrored LUN on the SAN for the logs VMDK and RAID5 LUNs for the database.

The reason I'm asking is that we will be setting up an EMC Clariion SAN and running some of our SQL servers on it; these currently use local storage, and as standard we just RAID5 the disks and place the logs and database on separate partitions, or sometimes follow the best practice of a mirrored log volume and a RAID5 database volume.

But on the Clariion CX-3 we will just be defining standard RAID5 groups for the VMFS volumes. All systems will use RAID5 LUNs rather than having separate RAID1 LUNs and RAID5 LUNs, simply for easier management. I understand that, for a start, comparing the local 300GB 15K SAS disks in an HP DL580 with the EMC 300GB 15K FC disks, the difference in performance is quite significant compared to local. But by the time we RAID5 the VMFS volumes on the FC disks, what I haven't seen is how much performance difference there really is between the local disks and the SAN disks.

What I'm trying to get at is that I will be challenged with the question of how RAID5 VMFS performance on the EMC Clariion compares with local disk performance, and with native physical-host SAN performance, for the SQL servers.
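To put rough numbers behind that comparison, here is a minimal sketch of the kind of test I have in mind; the file paths are just placeholders, and a proper tool such as Iometer or SQLIO would give far better (and random-I/O) numbers:

import os, sys, time

def write_throughput(path, total_mb=512, block_kb=64):
    # Sequential writes of block_kb-sized blocks until total_mb has been written,
    # returning MB/s. Run the same test against a file on a local RAID5 partition
    # and a file on a disk that lives on the RAID5 VMFS datastore, then compare.
    block = os.urandom(block_kb * 1024)
    count = (total_mb * 1024) // block_kb
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0)
    fd = os.open(path, flags)
    start = time.time()
    try:
        for _ in range(count):
            os.write(fd, block)
        os.fsync(fd)
    finally:
        os.close(fd)
    return total_mb / (time.time() - start)

if __name__ == "__main__":
    # e.g. run once against the local volume and once against the VMFS-backed disk
    print("%.1f MB/s" % write_throughput(sys.argv[1]))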

Many thanks

Cheers

7 Replies
FredPeterson
Expert

I'm sure you've seen this?

Basically it comes down to the size of the I/O.

Even if the VMFS gets you 95% of local SAS or a direct raw device... you still have to sell the benefits of using the shared VMFS.
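For instance, a rough sketch of what I mean (hypothetical file name; a real tool like Iometer will also give you random I/O and read numbers):

import os, time

def throughput(path, block_kb, total_mb=256):
    # Write total_mb of data in block_kb-sized sequential writes and return MB/s.
    block = os.urandom(block_kb * 1024)
    count = (total_mb * 1024) // block_kb
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0)
    fd = os.open(path, flags)
    start = time.time()
    for _ in range(count):
        os.write(fd, block)
    os.fsync(fd)
    os.close(fd)
    return total_mb / (time.time() - start)

# Small I/O exposes per-operation overhead; large I/O mostly measures raw bandwidth.
for kb in (4, 8, 64, 256, 1024):
    print("%4d KB blocks: %6.1f MB/s" % (kb, throughput("iotest.bin", kb)))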

vm_arch
Enthusiast

To my mind (and years of field experience hold this to be true):

1. Anything that adds an 'additional layer of complexity' must therefore create some performance hit, no matter how small. I have clearly seen in the field that when you have mid-range to high-range I/O on a disk, an RDM will always perform better than a VMDK inside a VMFS.

2. For any database over a reasonable size I always recommend clients create the database drive as an RDM: first to remove any I/O hit, and second to give the flexibility to access that RDM'd LUN from a physical box without the need to 'convert' or V2P it.

Remember one thing that a lot of people forget... the VMFS mounts from your SAN come to the hosts over the SAME fibre infrastructure as the RDMs (same switches, same HBAs, etc.). If one VMFS-hosted VM is hogging all the FC bandwidth, then having your database on an RDM is not going to help matters, is it?

mreferre
Champion

See this also: http://communities.vmware.com/message/1330632

In a nutshell, the (negligible) performance advantages that RDM will provide over VMFS are (usually) offset by the advantages of encapsulation that VMFS/VMDK provides. Obviously there will always be exceptions.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
AlbertWT
Virtuoso

Oh OK, now I get it. But if you want to back up a VM which has an RDM attached to it, then you have to set it up in virtual compatibility mode.

Does that mean the RDM LUN can still be shared with the other VM?

Kind Regards,

AWT

/* Please feel free to provide any comments or input you may have. */
Texiwill
Leadership

Hello,

It depends on the locking involved for the RDM. Some work quite well in virtual mode, others do not. But when you use VCB and other backup tools, you must be concerned more about locking than anything else.


Best regards,


--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
AlbertWT
Virtuoso

OK, got that.

Thanks for the reply, Edward.

Kind Regards,

AWT

/* Please feel free to provide any comments or input you may have. */
Texiwill
Leadership

Hello,

There is also one other concern I should mention, as I got bit by this during a restore. My virtual RDM LUN was 384 GB. Great; I did my backups religiously and then needed to restore. When you restore a virtual RDM, it restores as a VMDK. Whoops: my VMFS only allowed a maximum file size of 256 GB.... I had to scramble to find more storage to restore the virtual RDM to, so that I could then do a file-level copy from the restored disk back into the virtual RDM.
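If you want to sanity-check this before a restore, here is a quick sketch using the commonly quoted VMFS-3 limits (the datastore block size sets the maximum file size, so a virtual RDM that comes back as a VMDK has to fit under that cap):

# Commonly quoted VMFS-3 maximum file sizes: block size in MB -> max file size in GB.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def restore_fits(rdm_size_gb, datastore_block_mb):
    # Will a virtual RDM of rdm_size_gb, restored as a VMDK, fit on this datastore?
    return rdm_size_gb <= VMFS3_MAX_FILE_GB[datastore_block_mb]

print(restore_fits(384, 1))  # False - the 384 GB virtual RDM above would not fit
print(restore_fits(384, 2))  # True  - a datastore with a 2 MB block size would take it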


Best regards,


--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill