Our research note exploring the use of InfiniBand in Red Hat guests with VM DirectPath I/O is now available for your reading pleasure here. In it, we present the results of bandwidth and latency tests run using two hosts connected back-to-back with Mellanox quad data rate (QDR) InfiniBand adapters. We show that bandwidth over a wide range of message sizes is comparable to that achievable in the non-virtualized case, and that low latencies (under 2 microseconds) are achievable as well.
This paper presents our first performance results using RDMA. We expect to publish further papers examining the performance of full MPI applications, as well as results related to Bhavesh's work on a virtualized RDMA device that would support RDMA while preserving the ability to perform vMotion and snapshot operations.