My configuration is:
2x ESXi hosts installed on Supermicro X9DRH-7TF motherboards, 2x E5-2609 v2, 64GB RAM, diskless
2x Open-E DSS v7 storage servers installed on Supermicro X9DRH-7TF, 2x E5-2609 v2, 32GB RAM, 4x 4TB WD4000FYYZ in RAID 5; the built-in RAID performance read test reports 472MB/s
The Supermicro X9DRH-7TF motherboard has onboard two 10GBase-T interfaces.
2x Netgear ProSAFE Plus XS708E 8-port 10GBase-T switches
Performing a Storage vMotion of a thin provisioned VM from one storage to the other, I am getting only a 90MB/s transfer rate (ESXi 5.5), compared to 40-65MB/s over a 1Gb LAN. ESXi 5.1 does somewhat better: 120MB/s over the 10Gb LAN.
A thick provisioned VM performs much better: 280MB/s (ESXi 5.5) and 200MB/s (ESXi 5.1).
Further, ESXi 5.1 performs about 15% better with iSCSI delayed ACK disabled ("no delayed ACK"). On ESXi 5.5, disabling delayed ACK performs the same as leaving it enabled, or slightly worse.
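For anyone wanting to reproduce the delayed ACK comparison: on ESXi 5.x this can be inspected and toggled per iSCSI adapter with esxcli (a sketch only; the vmhba name below is an assumption, so check your own adapter list first, and the setting typically takes effect after a rescan or reboot):

```shell
# List iSCSI adapters to find the software initiator's vmhba name
esxcli iscsi adapter list

# Inspect the adapter's current parameters, including DelayedAck
# (vmhba33 is an assumed name; substitute your own)
esxcli iscsi adapter param get -A vmhba33

# Disable delayed ACK on that adapter
esxcli iscsi adapter param set -A vmhba33 -k DelayedAck -v false
```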
There is no performance improvement using iSCSI multipath round-robin compared to a fixed path.
Confusing: watching ESXTOP during a Storage vMotion of a thin provisioned VM, I can see that the LAN Tx rate is twice the Rx rate (the Rx rate matches the actual speed of the Storage vMotion). Why is that? This is not the case with a thick provisioned VM, where Tx and Rx rates are equal.
Is that the best I can expect from this configuration?
Is there a way to improve Storage vMotion performance for thin provisioned VMs, in light of the confusing ESXTOP readings?
This could be an issue with your Open-e DSS v7 not supporting VAAI yet.
It looks like VAAI support is still in testing at Open-E.
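You can check what the host actually reports for VAAI support on the DSS LUNs (the naa identifier below is a placeholder, not a real device ID):

```shell
# Show VAAI primitive support (Clone, ATS, Zero, Delete) per device
esxcli storage core device vaai status get

# Or limit the output to one LUN (placeholder device ID)
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
```

If the primitives show as unsupported, the data mover falls back to a software copy through the host, which would fit the Tx/Rx asymmetry you see in ESXTOP.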
Have you looked in the kernel logs for any SCSI errors during the thin provisioned Storage vMotion versus the logs during a thick disk Storage vMotion?
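A quick way to do that comparison, assuming the standard ESXi 5.x log location:

```shell
# Watch the vmkernel log live during the Storage vMotion and
# look for SCSI-related messages
tail -f /var/log/vmkernel.log | grep -i scsi

# Afterwards, compare rough error counts between the two runs
grep -ci "failed" /var/log/vmkernel.log
```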