VMware Cloud Community
schambers
Contributor

Performance question: iSCSI, VMDK, and RDM disks

I am sure there are many factors involved here, but when standing up new servers we always seem to wonder which virtual disk option to select and which would give us the best performance. We have an enterprise-class SAN and the network is 10 Gb.

Once upon a time it seemed that an in-guest iSCSI connection from within the VM to the SAN offered the best performance. From the research I have done, that no longer seems to be the case, and the only reason to go with something other than VMDK would be some sort of application requirement. But finding documentation on specific application requirements in a VMware environment is not easy.

I did come across documentation (VMware? Microsoft?) for Exchange 2016 saying the VM should reside on a VMDK while the database and logs reside on separate RDM disks, so I got lucky there.

Unless an application requires it, is there any reason to choose anything but VMDK these days? Does one storage option (in-guest iSCSI, VMDK, or RDM) really outperform the others anymore? Just curious: what are you using in your environment, and how did you make that decision? In our environment my main concerns would be Exchange, SQL, and our file servers, since those are the servers my end users beat up the most.

I am not really too technical, so my apologies if my questions are too generic.

Thank you!

1 Reply
JaredGudnason
Enthusiast

I too have asked this question in the past.

The answer, really, is: no great difference.

I have tested performance stats across physical RDM, virtual RDM, in-guest iSCSI, and VMDK, and found them all to be very similar.
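
(If you want to repeat that kind of comparison yourself, a quick sketch: run the same fio job inside the guest against each disk type and compare the results. The file path, size, and runtime below are just placeholders to adapt to your setup.)

    # random 4k reads, direct I/O to bypass the guest page cache
    fio --name=randread --filename=/mnt/testdisk/fio.dat \
        --rw=randread --bs=4k --iodepth=32 --direct=1 \
        --ioengine=libaio --size=4g --runtime=60 --time_based

    # sequential 1M writes for raw throughput
    fio --name=seqwrite --filename=/mnt/testdisk/fio.dat \
        --rw=write --bs=1m --iodepth=8 --direct=1 \
        --ioengine=libaio --size=4g --runtime=60 --time_based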

From a technical perspective, in-guest iSCSI (or any software variant of iSCSI) does come with some CPU overhead.
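
(You can see that overhead for yourself: run esxtop on the host, press c for the CPU view, and compare %SYS / %USED for the VM's worlds while pushing I/O through the in-guest initiator versus the same load on a VMDK.)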

My rule of thumb is everything on VMDK unless there is a good reason otherwise. So far, the only reason I have is a file server cluster: either physical RDM or in-guest iSCSI to support the clustering. I chose in-guest iSCSI, as I'm running a Nimble and that was their preferred path, in conjunction with their NCM software. It's working very well for me.
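
(For reference, if you go the physical RDM route for clustering instead, the mapping file is created with vmkfstools on the host. The device ID, datastore, and VM name below are placeholders:)

    # list LUNs to find the device identifier
    esxcli storage core device list

    # create a physical-compatibility (passthrough) RDM pointer file
    vmkfstools -z /vmfs/devices/disks/naa.<device-id> \
        /vmfs/volumes/<datastore>/<vm-name>/<vm-name>-rdm.vmdk

    # use -r instead of -z for virtual compatibility mode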

(Also, as a slight tangent, a gotcha: I was using HPE Virtual Connect hardware iSCSI initiators, which I discovered later weren't on Nimble's HCL. They did cause me some issues on head failover. I moved to the VMware software initiator, and all was well with the world.)

All that being said: VMDK, but I have started transitioning to vVols. I would say if you have an array that supports them (and do careful research here), vVols will give you the best of both worlds.
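
(On the "careful research" point: beyond checking the HCL, you can confirm a host actually sees the array's VASA provider and protocol endpoints with something like the commands below. I believe these esxcli namespaces exist on 6.x hosts, but verify on your build:)

    esxcli storage vvol vasaprovider list
    esxcli storage vvol protocolendpoint list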

Of course, if you are running an NFS-based array, you basically already have a flavor of that, but then you wouldn't be asking about block mounting options either, would you? lol

-------

TL;DR: VMDK unless you can't (clustering).
