Hi all,
So I'm seeing drastically lower sequential transfer rates in my guest operating system when I switch from a standard virtual disk to a Mapped Raw LUN. And by drastic, I mean the guest's sequential read rate drops to less than half, when I expected at least equivalent performance, if not better:
$ dd if=/dev/sdb of=/dev/null bs=1MB count=16384
16384000000 bytes (16 GB) copied, 57.9021 seconds, 283 MB/s
$ dd if=/dev/sdc of=/dev/null bs=1MB count=16384
16384000000 bytes (16 GB) copied, 125.228 seconds, 131 MB/s
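In case guest page caching skews the numbers, the same test can be re-run with O_DIRECT to bypass the cache (device names as in the runs above; bs=1M so the I/O size stays block-aligned for direct reads):

# Repeat the sequential reads with the guest page cache bypassed
$ dd if=/dev/sdb of=/dev/null bs=1M count=16384 iflag=direct
$ dd if=/dev/sdc of=/dev/null bs=1M count=16384 iflag=direct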
The Mapped Raw LUN for Raw Device Mapping (RDM) was configured like so:
$ vmkfstools -a lsilogic -z /vmfs/devices/disks/vmhba2\:3\:0\:0 \
/vmfs/volumes/datastore/guestvm/mydisk.vmdk
cat >> guestvm.vmx << EOF
scsi0:2.present = "true"
scsi0:2.fileName = "/vmfs/volumes/datastore/guestvm/mydisk.vmdk"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.mode = "independent-persistent"
scsi0:2.redo = ""
EOF
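One thing I can still double-check is the mapping itself. vmkfstools can query what the mapping file points at, and the same LUN could also be mapped in virtual compatibility mode (-r instead of -z) for comparison. A sketch using the paths from above; mydisk-virtual.vmdk is just a hypothetical name for the second mapping file:

# Show what raw device mydisk.vmdk actually maps to
$ vmkfstools -q /vmfs/volumes/datastore/guestvm/mydisk.vmdk

# Same LUN mapped in virtual compatibility mode instead of pass-through
$ vmkfstools -a lsilogic -r /vmfs/devices/disks/vmhba2\:3\:0\:0 \
/vmfs/volumes/datastore/guestvm/mydisk-virtual.vmdk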
Both the standard virtual disk and the Mapped Raw LUN are 2-disk RAID 1 arrays on the same channel of a PowerVault DAS attached to a PERC 6/E controller in an older PowerEdge 1950. The guest operating system is CentOS 5.2; it is the only guest running and is otherwise idle. The kernel has been booted with "elevator=noop", the partitions are mounted with "defaults,noatime,nodiratime,data=journal", and the read-ahead has been increased ("blockdev --setra 16384"). Feel free to recommend other IO optimizations, but the performance gap does not appear to be caused by these settings.
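For completeness, these settings can be verified at runtime on the RDM-backed device (sdc here, per the dd runs above):

# Scheduler actually in effect for the device (the one shown in brackets)
$ cat /sys/block/sdc/queue/scheduler
# Read-ahead is in 512-byte sectors; 16384 sectors = 8 MB
$ blockdev --setra 16384 /dev/sdc
$ blockdev --getra /dev/sdc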
According to VMware and third parties, I should be seeing anywhere from marginal to large performance gains, not losses:
Performance Characterization of VMFS and RDM Using a SAN
VMware VMFS Vs RDM (Raw Device Mapping)
Any thoughts on what might be wrong? Any other recommended improvements for a guest that will handle heavy IO?
Justin