VMware Cloud Community
oreeh
Immortal

Surprising performance differences between VM NIC types and transfer directions

Inspired by this thread http://www.vmware.com/community/thread.jspa?threadID=74329 I've done a few tests regarding the network performance of VMs using the different NIC types.

The results are a bit confusing, since the E1000 is consistently 10-20% faster than the vmxnet (which I thought was the optimized one).

Another interesting thing is that pushing files from VMs to physical servers is always faster than pulling them.

Can someone shed some light on this?
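
For reference, the push vs. pull comparison doesn't need SMB/FTP/NFS at all; a bare TCP stream shows it too. Below is a minimal Python sketch of that kind of directional throughput test (illustrative only, not the exact tool used for the numbers above; the address, port, and payload size are placeholders).

import socket
import sys
import time

HOST, PORT = "192.0.2.10", 5001      # placeholder receiver address and port
CHUNK = 64 * 1024                    # bytes handed to the socket per call
TOTAL = 512 * 1024 * 1024            # 512 MB test payload

def receiver():
    # Run on the target machine: accept one connection and drain it.
    with socket.socket() as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def sender():
    # Run on the source machine: push TOTAL bytes and report throughput.
    payload = b"\0" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((HOST, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"pushed {sent / 2**20:.0f} MB in {secs:.1f}s "
          f"({sent * 8 / secs / 1e6:.0f} Mbit/s)")

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()

Running the sender inside the VM gives the "push" number; running it on the physical box (with the receiver in the VM) gives the "pull" number, so the direction effect can be separated from the file-sharing protocol.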

oreeh
Immortal

Another thing that bothers me is the big difference in PKTTX/s between vmxnet and e1000.

This, AFAIK, means that vmxnet uses many small packets (which is bad) while the e1000 uses fewer, bigger packets.
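
To put a number on that: esxtop shows a transmit throughput counter (MbTX/s) alongside PKTTX/s, so the implied average packet size falls straight out of the two counters. The figures below are made-up illustrations of the pattern, not my measured values.

def avg_packet_bytes(mbit_tx_per_s: float, pkt_tx_per_s: float) -> float:
    # Average packet size in bytes from throughput (Mbit/s) and packet rate (packets/s).
    return (mbit_tx_per_s * 1_000_000 / 8) / pkt_tx_per_s

# Hypothetical counter pairs, roughly the same throughput at very different packet rates:
print(avg_packet_bytes(500, 45_000))   # e1000-like:  ~1389 bytes per packet
print(avg_packet_bytes(450, 80_000))   # vmxnet-like:  ~703 bytes per packet

If the packet rate roughly doubles for the same throughput, the average packet is roughly half the size, which fits the "many small packets" reading.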

bertdb
Virtuoso

That might be an explanation for why the e1000 is faster than the vmxnet in your tests. I still believe vmxnet has less overhead per packet, but if it forces the use of smaller packets, the net result might be that it's slower.

Logical, but unexpected. Can you escalate this as a support request? I'd like to get VMware's stance on this issue.

ErMaC1
Expert

The push/pull difference could be an OS-level issue. If you're using Windows, it's always better to push rather than pull over the network, because by default Explorer uses 64 KB chunks during a push but only 4 KB chunks during a pull.
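
A rough way to see the impact of the chunk size is to time a naive copy loop with different read sizes against a file on a network share. This is only an illustrative sketch (the file paths are placeholders); it is not meant to reproduce Explorer's exact behaviour.

import time

def copy_with_chunks(src_path: str, dst_path: str, chunk_size: int) -> float:
    # Copy src to dst using chunk_size reads/writes; return elapsed seconds.
    start = time.time()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(chunk_size)
            if not block:
                break
            dst.write(block)
    return time.time() - start

# Placeholder paths; point src_path at a file on a network share to see the effect.
for size in (4 * 1024, 64 * 1024):
    elapsed = copy_with_chunks("testfile.bin", "copy.bin", size)
    print(f"{size // 1024:>2} KB chunks: {elapsed:.1f}s")

With 4 KB chunks the same file takes sixteen times as many read/write round trips as with 64 KB chunks, which is where the pull penalty comes from.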

oreeh
Immortal

"I still believe vmxnet will have less overhead per packet"

Less overhead per packet is nice, but if you transfer nearly twice as many packets, that advantage probably gets eaten up.

"Logical, but unexpected"

Indeed.

"Can you escalate this as a support request? I'd like to get VMware's stance on this issue."

Maybe some of the VMware guys here in the forum already know the answer; if not, I'll file an SR.

oreeh
Immortal

"The push/pull difference could be an OS-level issue. If you're using Windows, ..."

This was my first assumption, but it was wrong.

I had this issue with all types of guests (Windows, Linux, BSD), with all types of servers (Windows, Linux, Solaris, BSD, NetWare), and with all kinds of protocols (SMB, FTP, NFS).

bertdb
Virtuoso

Was the MTU size equal in the vmxnet and e1000 tests?

oreeh
Immortal

Yes, in all tests the MTU size was set to 1500.

oreeh
Immortal

Any more opinions on this?

acr
Champion

I too would like to know VMware's opinion on this. We are seeing issues as well.

oreeh
Immortal

Follow-ups to the following thread, please:

http://www.vmware.com/community/thread.jspa?threadID=77227

juchestyle
Commander

Oreeh,

I was told by a VMware engineer that the reason there is a difference between push and pull is the OS. We were told that Windows and other OSes are more resilient when it comes to handling fluctuating transfer rates. ESX, however, does not appear to have the resiliency that other OSes, including Linux, have for handling fluctuations.

As I said in the other post, if your physical switches are set to auto, then ESX will negotiate with them and settle on the lowest possible transfer rate, because ESX DOES NOT HAVE THE ABILITY TO FLUCTUATE TRANSFER RATES like other OSes.

This still doesn't sit well with me, but this is essentially what we were told.

What do you think?

Respectfully,

Kaizen!
juchestyle
Commander

Hey Guys,

We actually have an SR open on this and we haven't really gotten anything on it from VMware.

They asked for our switch configurations, what kind of hardware we were using, and whether it was on the HCL. They helped us do some testing, but they didn't really help us fix our issue. At the end of the day we had to take a VM and push it back to a physical machine. The crazy thing is that it works better now on physical hardware. Same network, same physical switches, and it works better, and it is because of the network transfer rates.

I think half of the problem is the networking with the physical world, and the other half is for the network team to figure out. But I haven't really seen what I would call an adequate explanation or help with this issue.

Respectfully,

Kaizen!
Daryll
Expert

At the request of the original poster, I am locking this thread. It's not a behavior issue or anything like that; the OP wants all of the comments/conversation in one place.

If you would like to follow up on this topic or make comments, there is another thread that they would prefer you add to:

http://www.vmware.com/community/thread.jspa?threadID=77227

Thanks,

Daryll
