Hello,
I am experiencing a huge performance difference in disk throughput when using VMFS vs. RDMs.
We get 7-8 MB/sec writing when using VMFS and more than 50 MB/sec when using RDMs.
Environment is as follows:
2x ESX 3.5 (latest) on IBM Blades HS 21 XM (Qlogic HBAs)
Storage is IBM DS 4700
IBM / Brocade 4 GB FC-Switches
One VM that shows the problem is a Windows 2003 Server running as a file server.
Any Ideas??
thanks in advance!
tka
hi eddy,
The VMFS should be aligned because I created it using the VI Client.
What do you mean by your 2nd question? The RDMs point directly to the LUNs in the SAN.
tka
Are you running the latest version of VMware tools?
Did you align the guest OS partition?
How many VMs are running on that single VMFS volume?
Are you running snapshots on any of the VMs?
There are multiple factors that can cause this behavior....
Duncan
VMware Communities User Moderator
-
If you find this information useful, please award points for "correct" or "helpful".
When you set up an RDM, you need to point its metadata (pointer) file to a VMFS volume. Where does this volume reside? On the SAN, local drives, etc.?
Go Virtual!
hi guys - thanks for your input so far.
to your questions:
>Are you running the latest version of VMware tools?
Yes.
> Did you align the guest OS partition?
I just formatted it as NTFS in the Windows guest. How do I do this?
> How many vm's are running on that single VMFS volume?
I did different tests with more and fewer VMs - I don't think that's the problem.
> Are you running snapshots on any of the vms?
No.
> When you set up an RDM, you need to point its metadata (pointer) file to a VMFS volume. Where does this volume reside? On the SAN, local drives, etc.?
The RDM metadata file (pointer) is on one of the other VMFS volumes in the SAN, and it points to a SAN LUN.
(There are 2 VMFS volumes and 2 RDMs in total.)
tka
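Regarding the alignment question above, here is a quick sketch of the arithmetic (with assumed values: Windows 2003's default 63-sector partition start against a common 64 KB array segment size) showing why the default guest partition layout ends up misaligned. If I remember correctly, on Windows Server 2003 SP1 and later you can create an aligned partition with diskpart (`create partition primary align=64`) before formatting.

```python
# Sketch: check whether a partition's starting offset lines up with the
# array's stripe/segment size. The 64 KB segment size below is an assumed
# value for illustration; check your DS4700 array configuration.

def is_aligned(start_offset_bytes, segment_size_bytes=64 * 1024):
    """True if the partition start falls exactly on a segment boundary."""
    return start_offset_bytes % segment_size_bytes == 0

# Default Windows 2003 partition start: sector 63 -> 63 * 512 = 32256 bytes
print(is_aligned(63 * 512))    # False: the default layout is misaligned
# A partition created with an explicit 64 KB alignment starts at 65536 bytes
print(is_aligned(64 * 1024))   # True
```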
no ideas?
tka
Could you remove the RDM and then put a VMFS on that same LUN, then try? Just to make sure the VMFS volume you use now is not used by other VMs or has a lot less spindles, different RAID level etc.
Visit my blog at http://erikzandboer.wordpress.com
Hi Erik,
thanks for your comment.
That's exactly what I did - my RDM vs. VMFS tests were on that same LUN/array, so spindles and RAID level are the same.
Meanwhile we are in production with RDMs on some VMs, so testing is kind of complicated right now. I can't remember the exact
number of VMs running on that same VMFS while testing, but I think it was no more than 5-6, so that's probably not the problem.
There must be something I missed - I have never seen such a big difference in disk performance in similar environments...
tka
Ok, from what I have seen, RDMs are only marginally faster than VMFS (and usually not worth the downsides of physical RDMs). Pity you cannot do further testing; it sounds like something was "forgotten"... It could also be that the stripe size for the RAID volume was extremely small. I once did measurements on parallel SCSI RAID5. Especially at block sizes of <=4KB, performance was quite bad; the result was that the larger the block size, the better VMFS performed. Also, the impact of NTFS misalignment was very big at small block sizes; at larger block sizes the impact was much less.
If RDMs do not suffer from this impact, you could potentially see big differences. But then again, that was parallel SCSI in a completely unsupported configuration....
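To illustrate the mechanics behind that misalignment impact, here is a sketch with assumed numbers (a 64 KB array segment and Windows' default 63-sector partition offset): a misaligned guest I/O can straddle a segment boundary and touch two segments where an aligned one touches only one, roughly doubling the back-end work for those requests.

```python
# Sketch: count how many array segments a single guest I/O spans.
# SEGMENT is an assumed 64 KB array segment size for illustration.

SEGMENT = 64 * 1024

def segments_touched(offset, io_size, segment=SEGMENT):
    """Number of array segments a single guest I/O at `offset` spans."""
    first = offset // segment
    last = (offset + io_size - 1) // segment
    return last - first + 1

def fraction_split(start_offset, io_size, n_ios=1000, segment=SEGMENT):
    """Fraction of sequential guest I/Os that straddle a segment boundary."""
    split = sum(
        1
        for i in range(n_ios)
        if segments_touched(start_offset + i * io_size, io_size, segment) > 1
    )
    return split / n_ios

# Aligned partition (offset 0): no 4 KB I/O ever crosses a segment boundary.
print(fraction_split(0, 4096))          # 0.0
# Misaligned 63-sector start: some 4 KB I/Os straddle a boundary...
print(fraction_split(63 * 512, 4096))
# ...and every 64 KB I/O does, so each becomes two back-end operations.
print(fraction_split(63 * 512, 64 * 1024))  # 1.0
```

On RAID5 the penalty per straddling write is worse than the raw I/O count suggests, since each touched stripe needs its own parity update.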
Visit my blog at http://erikzandboer.wordpress.com
I did check these points but didn't see anything I recognized as problematic - I hope I didn't miss anything.
But: when I checked perfmon inside the VM, the disk queue was at 100%...
tka