I have a few Dell 2950s with dual-core 3 GHz Woodcrest processors running our ESX infrastructure, backed by a DataCore SANmelody SAN; each local host has 16 GB of FB-DIMM RAM. The NIC configuration is as follows:
1) 2 onboard dual BCM5700 GigE PHYs
2) 1 dual-port Intel PRO/1000PT PCI-Express GigE card
3) 1 quad-port Intel PRO/1000PT PCI-Express GigE card
We have several network teams that span all of the above-mentioned physical cards - we did this for load balancing as well as for failover reasons. An offhand comment to a VMware support engineer had him mention that NIC teams should only be created across NICs that use the same chipset. So for our main VM vSwitch we should use only the PRO/1000PT ports and not go across the onboard BCM5700 module for that team configuration.
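For what it's worth, here is how I'd carve out a chipset-homogeneous team from the service console if that advice holds. This is just my sketch - the vSwitch name, port group name, and vmnic numbers are my guesses for illustration, not our actual assignments:

```text
# create a vSwitch whose uplinks all share the Intel chipset
esxcfg-vswitch -a vSwitch1                 # add a new vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1          # link one Intel PRO/1000PT port
esxcfg-vswitch -L vmnic7 vSwitch1          # link a second Intel PRO/1000PT port
esxcfg-vswitch -A "VM Network 1" vSwitch1  # add a port group for the VMs
```

The BCM5700 ports could then back a separate vSwitch (service console / VMotion) so no single team mixes chipsets.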
Also, we have a Cat 65k that's running in hybrid mode (we're running IOS on the MSFC and CatOS on the Supervisor 2A module, which supports a GigE blade). The 65k Sup2A itself is running CatOS 8.5(7), in case you need to know.
We configured the physical ports assigned to our teams (say ports 1, 2, 3) on the 65k GigE blade to be on the same VLAN and simply created a vSwitch within ESX; we then physically attached these ports on the NICs to the corresponding assigned VLAN ports (vmnicX), expecting it to work, and for a while it seemed to be working properly.
A bit of digging tells me that more configuration is required on the Cisco 65k CatOS side, and perhaps on the vSwitch teaming side, for load balancing and proper failover. When our service console port went down, we could not contact the ESX server even though it was part of a four-port team, and to make matters worse NONE of the virtualized servers were accessible until that vmnic port was reconnected/re-established. (In our case, for whatever reason, it was vmnic6, which corresponded to the first port on the Intel dual PRO/1000PT; I had set the console port to the BCM5700 module when the ESX server was first installed.)
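From what I've gathered so far, the vSwitch load-balancing policy and the switch-side channel config have to be paired up, and mixing them is exactly what bites people. This is my own summary of the two valid pairings, not gospel - please correct me if I have it wrong:

```text
# Pairing 1: no channel on the Cat65k
#   vSwitch: Load Balancing = "Route based on the originating virtual port ID" (the default)
#   CatOS:   plain access ports, same VLAN, NO EtherChannel configured
#
# Pairing 2: static EtherChannel on the Cat65k
#   vSwitch: Load Balancing = "Route based on ip hash"
#   CatOS:   set port channel <mod/ports> mode on   (static - no PAgP/LACP negotiation)
```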
I read an article from VMworld 2006 on VI3 networking and teaming, but it only brushed over the matter I'm experiencing.
For purposes of thoroughness, our teaming configuration is basically the default:
Under the NIC Teaming tab I have:
Network Failover Detection - "Link Status only" (I have a hunch this is the culprit, since the physical port was indeed moved from one VLAN to another - but there were another three physical ports assigned to this vSwitch which weren't touched)
vmnic0 - 1000/full - Networks - all in the same net - this is bcm0
vmnic2 - 1000/full - Networks - all in the same net - this is the second port on the Intel PRO/1000PT
vmnic3 - 1000/full - Networks - all in the same net - this is the third port on the Intel quad PRO/1000PT
vmnic7 - 1000/full - Networks - all in the same net - this is the 2nd port on the Intel PRO/1000PT
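If it helps, these are the settings I'm now thinking of trying based on my reading of the docs (again, just my interpretation - corrections welcome):

```text
Load Balancing:              Route based on the originating virtual port ID
Network Failover Detection:  Beacon Probing   (instead of Link Status only, so a
                             failure upstream of the first switch port is also seen)
Notify Switches:             Yes
Rolling Failover:            No (fail back when the original uplink recovers)
All four vmnics:             Active (none in Standby)
```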
Hence my question - is going across all available NICs such a good idea, and what about the VMware technical rep's comment concerning not making vSwitches with multiple vendors' chipsets? I could have sworn that the vmkernel would have put in an abstraction/hypervisor layer for this to work seamlessly.
Can anyone chime in on this multi-staged problem? That is: (1) the multiple NICs/teams question, (2) the console failure even though it was part of a vSwitch team, and (3) the 65k config issue - since it's CatOS, would I have to enable LACP on the switch GigE module as a whole (as only IOS allows per-port EtherChannel/LACP) and forgo EtherChannel altogether for 802.3ad functionality?
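On the CatOS side, my understanding is that a static EtherChannel (mode on, so no PAgP/LACP negotiation at all) can be configured per port range rather than blade-wide, something like the following for ports 3/1-4 on the GigE blade. The module/port and VLAN numbers are made up for illustration, and the # comments are just my annotations, not CatOS syntax:

```text
set vlan 100 3/1-4                  # put the team ports in the same VLAN
set port channel 3/1-4 mode on      # static channel - no PAgP/LACP negotiation
set spantree portfast 3/1-4 enable  # host-facing ports, skip STP listening/learning
show port channel                   # verify the channel formed
```

As I understand it, this would only make sense together with the ip-hash load-balancing policy on the vSwitch; with the default virtual-port-ID policy I'd leave the ports un-channeled.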
Many thanks to those that can help.
gables engineering, inc
coral gables, fl
wannabe network/SAN/systems engineer
Could anyone chime in about the three grievous issues at hand?