This thread is a follow-up to the following threads, since they seem to be related:
http://www.vmware.com/community/thread.jspa?threadID=74329
http://www.vmware.com/community/thread.jspa?threadID=75807
http://www.vmware.com/community/thread.jspa?threadID=77075

Here is a description of the issues and the "results" we have so far.

juchestyle and sbeaver saw a significant degradation of network throughput on 100 Full (100 Mbit full-duplex) virtual switches. The transfer rate never stabilizes, and there are significant peaks and valleys when a 650 MB ISO file is transferred from a physical server to a VM.

Inspired by this, I did some quick testing and got some strange results: the transfer direction had a significant impact on the transfer speed. Pushing files from VMs to physical servers was always faster (by around 30%) than pulling files from those servers. The assumption that this is related to the behaviour of Windows servers turned out to be wrong, since it happened regardless of the OS and protocol used. Another interesting result from these tests: e1000 NICs always seem to be 10-20% faster than vmxnet, and there is a big difference in PKTTX/s between vmxnet and e1000.

After that, acr discovered very poor transfer speeds in a Gigabit VM environment. The maximum speed was 7-9 MB/s, even when using ESX-internal vSwitches. A copy from ESX to ESX also reached only 7-9 MB/s. The odd discovery in this scenario: when the CD-ROMs in the VMs are disabled, the transfer speed goes up to 20 MB/s. Any ideas regarding this?

I'll mark my question as answered and ask Daryll to lock the thread so we have everything in one place.
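
For anyone who wants to reproduce the push/pull comparison without the file-sharing layer in the way, below is a minimal raw-TCP throughput sketch. This is not the exact method used in the tests above (those were plain file copies over the network); the port number, host name and payload size are only placeholders. Run the "serve" side on one machine and the "send" side on the other, then swap the roles to compare VM-to-physical against physical-to-VM.

# throughput.py - minimal raw-TCP throughput sketch (placeholder values, not the
# exact method used in the tests above, which were plain file copies).
import socket
import sys
import time

CHUNK = 64 * 1024  # 64 KiB send/receive buffer


def serve(port):
    """Accept one connection, read everything sent, and report the receive rate."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.time() - start
            print(f"received {total / 1e6:.1f} MB from {addr[0]} "
                  f"in {elapsed:.1f} s = {total / 1e6 / elapsed:.1f} MB/s")


def send(host, port, size_mb):
    """Push size_mb megabytes of zeros to the server and report the send rate."""
    payload = b"\0" * CHUNK
    to_send = size_mb * 1_000_000
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, port))
        sent = 0
        start = time.time()
        while sent < to_send:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.time() - start
        print(f"sent {sent / 1e6:.1f} MB in {elapsed:.1f} s "
              f"= {sent / 1e6 / elapsed:.1f} MB/s")


if __name__ == "__main__":
    if sys.argv[1] == "serve":
        serve(int(sys.argv[2]))
    else:
        send(sys.argv[2], int(sys.argv[3]), int(sys.argv[4]))

For example, "python throughput.py serve 5001" on the physical server and "python throughput.py send physbox01 5001 650" inside the VM measures a push of roughly the same size as the 650 MB ISO ("physbox01" and port 5001 are made-up values). Swapping which box runs "serve" gives the pull direction.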