VMware Cloud Community
terran00925
Contributor

Problem with NIC speeds

Has anyone encountered a problem similar to the following?

Server: HP DL360 G7

NIC: Onboard Broadcom NC382i Integrated Quad Port PCI Express Gigabit Server Adapter

NIC: Intel NC364T PCI Express Quad Port Gigabit Server Adapter

Host: ESXi 4.1 (Build: 260247)

All 8 ports of each ESXi server are connected to a Cisco 3750, which is part of a stack of 3 x 3750s in total.

Ports 1 to 4 of the onboard NIC are connected to ports 1 to 4 on Switch 2 (a 3750 in the stack), and the Intel ports are connected to ports 5 to 8.

What I'm noticing is that the ports are not all linking at 1 Gb.  I've tried setting auto-negotiate on the switch and the host, and also forcing one side to 1 Gb, but I would only get 1 or 2 ports connecting at 1 Gb; the others would show as disconnected.  What's strange is that the pattern is not consistent between the two ESXi servers, even though they have the same configuration (there are 2 x DL360 G7).  For instance:

ESXi01 vmnic link speeds (Mb/s):

vmnic0 - 1000
vmnic1 - 100
vmnic2 - 100
vmnic3 - 1000
vmnic4 - 100
vmnic5 - 1000
vmnic6 - 100
vmnic7 - 100

ESXi02 vmnic link speeds (Mb/s):

vmnic0 - 1000
vmnic1 - 100
vmnic2 - 1000
vmnic3 - 100
vmnic4 - 100
vmnic5 - 100
vmnic6 - 1000
vmnic7 - 100
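
Those speeds are what the vSphere Client reports, and the console agrees. I've been checking the link states on each host with:

    esxcfg-nics -l

which lists each vmnic with its driver, link state, speed, and duplex.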

I cannot find any consistency in what is going on, even after pulling cables left, right, and center.  I think I've tried every combination of auto-negotiate and hard-coded settings without being able to figure out a pattern as to why this is happening.
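
For what it's worth, this is roughly how I've been toggling the host side between the two modes (vmnic1 here is just an example port):

    # return a vmnic to auto-negotiation
    esxcfg-nics -a vmnic1

    # hard-code the same vmnic to 1 Gb full duplex
    esxcfg-nics -s 1000 -d full vmnic1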

I'm beginning to wonder if this is because all 8 ports are connected to a single 3750 stack.  I would prefer not to use port channeling in this environment (I tried that and couldn't get it working either).

Also, I've confirmed with the network engineer on the switch side that he has configured the trunk ports as per VMware's KB.
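
From what he described, each port is configured along these lines (the interface number and description are just examples, following the usual VST trunking guidance):

    interface GigabitEthernet2/0/1
     description ESXi01 vmnic0
     switchport trunk encapsulation dot1q
     switchport mode trunk
     spanning-tree portfast trunk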

Any suggestions are welcome, as I'm extremely curious why this is happening.

Thanks.

Josh26
Virtuoso

terran00925 wrote:

  I've tried setting auto-negotiate on the switch and the host, and also forcing one side to 1 Gb, but I would only get 1 or 2 ports connecting at 1 Gb.

...

I'm beginning to wonder if this is because all 8 ports are connected to a single 3750 stack.  I would prefer not to use port channeling in this environment (I tried that and couldn't get it working either).

That would be a first problem - it's auto-negotiate on both ends, or hard-coded on both ends. Negotiating on only one side is going to be prone to failure. The typical advice is to configure both ends to auto-negotiate.
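
In other words, match the two ends. As a minimal sketch (the interface number is just an example), on the switch:

    interface GigabitEthernet2/0/1
     speed auto
     duplex auto

and on the host:

    esxcfg-nics -a vmnic0

Then compare what "show interfaces status" reports on the 3750 against esxcfg-nics -l on the host.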

You are very unlikely to gain anything from attempting to run more than two cables to a single vSwitch.

Finally, I would look at your cabling. This is all Cat6, right?

terran00925
Contributor

I've tried auto-to-forced, forced-to-auto, auto-to-auto, and forced-to-forced, but they all exhibit the same problem. I only have 2 vmnics on each vSwitch, and the cabling is Cat5e or higher.
