I've spent some time on a server with terrible network performance.
I've tried all manner of things*, and eventually found this thread:
I didn't believe simply creating a new portgroup on a new vswitch would make any difference, but the server had spare NICs, so I tried it.
I used the CLI to check the MTU of the old and new vSwitches - they are the same. I cannot for the life of me see why else performance would increase so drastically when moving from one vSwitch to an identically configured (as far as I can tell) vSwitch.
I would suspect the pNIC, or the cable, or similar, but I have two identical servers exhibiting the same behaviour (inherited, so I am unfamiliar with any past configuration). The physical switch is actually unmanaged - it's impossible for the switch to be configured differently for the two ports.
* Performance seemed limited to 30 Mbit/s. The switch is gigabit. We had reduced the load balancing to "route by port ID" with a single pNIC for simplicity - no change. On the new vSwitch we are now getting true gigabit throughput.
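With throughput stuck around 30 Mbit, one classic culprit worth ruling out at this layer is a speed/duplex mismatch on the pNIC. A quick sanity check, sketched here with the vicfg remote CLI (the host name is a placeholder; esxcfg-nics on the service console shows the same thing):

```shell
# List physical NICs with their negotiated link speed and duplex.
# A pNIC stuck at 100 Mbit half-duplex would explain throughput
# collapsing far below gigabit.
vicfg-nics --server esx01.example.com -l

# Or, on the ESX(i) console itself:
esxcfg-nics -l
```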
I'm a little bit confused. You say this is an unmanaged physical switch, yet also "we had reduced the load balancing down to 'route by port ID'" (which is actually the only policy applicable with an unmanaged switch). What was configured before? Please elaborate on your current network configuration so we can understand the issue.
I'm fully aware this is confusing. But I have two servers exhibiting this behaviour, and a forum thread where another user found that behaviour. The "before" image is very simple:
Virtual Machine Group
VM Management Group
In other words, literally the default configuration. pnic0 is connected to the same physical switch as pnic1. Now I added this:
Virtual Machine group 2
Machines connected to the original "Virtual Machine Group" get around 7 Mbit/s using iperf to an outside physical server. Moving a VM to "Virtual Machine Group 2", it gets practically gigabit wire speed on the same test.
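For anyone wanting to reproduce the test: the numbers above came from plain iperf runs between a VM and the outside physical box, along these lines (the address is a placeholder and the exact flags are from memory):

```shell
# On the outside physical server - start an iperf listener:
iperf -s

# Inside the VM under test - run a 30-second TCP test against it,
# reporting every 5 seconds (192.168.1.50 is a placeholder):
iperf -c 192.168.1.50 -t 30 -i 5
```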
I'm just bumping this for more information.
I've checked the cabling etc. and found no issues. Moving a machine from "Virtual Machine Group 2" back to "Virtual Machine Group" cripples its performance on both identical servers, and I cannot see why.
I used vicfg-vswitch to check the MTU - it's the default (1500) on both vSwitches.
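For the record, the MTU check was just the vSwitch listing, something like this (the host name is a placeholder):

```shell
# Lists all standard vSwitches with their ports, MTU, uplinks,
# and portgroups - both vSwitches here report the default 1500.
vicfg-vswitch --server esx01.example.com -l
```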
I'm sure something went on before I inherited this system but I just cannot see what.