VMware Cloud Community
i4004
Enthusiast

Teaming and ESX 3.0.1 - Issues and Questions - How Are You Doing It?

I have a few Dell 2950s with dual-core 3 GHz Woodcrest processors running our ESX infrastructure against a DataCore SANmelody SAN; each host has 16 GB of FB-DIMM RAM and the following NIC configuration:

1) 2 onboard dual-port BCM5700 GigE PHYs

2) 1 Dual Port Intel PRO/1000PT PCI-Express GigE Card

3) 1 Quad Port Intel PRO/1000PT PCI-Express GigE Card

We have several NIC teams that span all of the above-mentioned physical cards; we did this for load balancing as well as for failover. An offhand comment to a VMware support engineer prompted him to mention that NIC teams should only be created from NICs that use the same chipset. So for our main VM vSwitch we should use only the PRO/1000PT ports and not stretch the team across the onboard BCM5700 module.
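As an aside, if anyone wants to double-check which vmnic maps to which physical card (and therefore which driver/chipset), the ESX 3.x service console can show it; a minimal check:

```
# List physical NICs: name, PCI slot, driver (e.g. e1000 vs. bcm5700),
# link state, and speed/duplex
esxcfg-nics -l
```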

Can anyone chime in on this multi-stage problem? The pieces are: the multiple-NICs/teams question, the console failure even though the console NIC was part of a vSwitch team, and the 65k configuration issue. Since it's CatOS, I'd have to make the switch's GigE module do LACP as a whole (only IOS allows per-port EtherChannel/LACP) and forgo EtherChannel altogether for 802.3ad functionality.

We configured the physical ports assigned to our teams (say ports 1, 2, 3) on the 65k GigE blade to be on the same VLAN and simply created a vSwitch within ESX 3.0.1.42829; we then cabled the NIC ports (vmnicX) to the corresponding switch ports on that VLAN, expecting it to work, and for a while it seemed to work properly.
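For reference, the CatOS side of that was just per-port VLAN assignment, something like the following (module/port and VLAN numbers here are placeholders, not our exact config):

```
# CatOS: put the team's blade ports in one VLAN, no channel config
set vlan 100 3/1-4
set port enable 3/1-4
```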

A bit of digging tells me that more configuration is required on the Cisco 65k CatOS side, and perhaps on the vSwitch teaming side, for load balancing and proper failover. When our service console port went down we could not contact the ESX server even though it was part of a four-port team. To make matters worse, NONE of the virtualized servers were accessible until that vmnic was reconnected. (In our case, for whatever reason, it was vmnic6, which corresponded to the first port on the dual-port Intel PRO/1000PT; I had set the console port to the BCM5700 module when the ESX server was first installed.)

I read a VMworld 2006 PowerPoint slide presentation on VI3 networking that had sections on teaming, but it barely brushed the issue I'm experiencing.

For purposes of thoroughness, our teaming configuration is basically the default:

Under the NIC Teaming tab I have:

Load Balancing - "Route based on the originating port ID"

Network Failover Detection - "Link Status only" (I have a hunch this matters here, since the physical port was indeed moved from one VLAN to another - but there were another three physical ports assigned to this vSwitch which weren't touched)

Rolling Failover - "No"

Failover Order - Active Adapters

vmnic0 - 1000/full - Networks - all in the same net - this is bcm0

vmnic2 - 1000/full - Networks - all in the same net - this is the second port on the Intel PRO/1000PT

vmnic3 - 1000/full - Networks - all in the same net - this is third port on the Intel Quad PRO/1000PT

vmnic7 - 1000/full - Networks - all in the same net - this is the 2nd port on the Intel PRO/1000PT
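For anyone comparing notes, my understanding (please correct me) is that the vSwitch teaming policy and the switch side have to match, roughly like this (CatOS module/port numbers are hypothetical):

```
# Option A (our current setup): default vSwitch policy
#   vSwitch: "Route based on the originating port ID"
#   CatOS:   plain ports in one VLAN, channeling off
set port channel 3/1-4 mode off

# Option B: IP-hash teaming
#   vSwitch: "Route based on ip hash"
#   CatOS:   static EtherChannel (no PAgP/LACP negotiation)
set port channel 3/1-4 mode on
```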

Hence my question: is going across all available NICs such a good idea, and what about the VMware technical rep's comment about not building vSwitch teams from multiple vendors' chipsets? I could have sworn that the vmkernel put in an abstraction/hypervisor layer for this to work seamlessly.

Also, we have a Cat65k that's running in hybrid mode (we're running IOS on the MSFC and CatOS on the Supervisor 2A module, which supports a GigE blade). The 65k Sup2A itself is running CatOS 8.5(7), in case you need to know.

This morning I spoke to a rep concerning a Converter 1.0.1 problem: I'm getting a (sysimage.faile.fileopenerror) when converting a Windows 2000 SP4 physical machine directly into ESX.

Many thanks to those that can help.

Cordially,

gerson ricardo

gables engineering, inc

coral gables, fl

wannabe network/SAN/systems engineer

1 Reply
i4004
Enthusiast

The answer to the sysimage.faile.fileopenerror was that I didn't have a VMkernel/Service Console interface pointed at the DMZ. All I had was the VC 2.0.1 server with a network port connected to the DMZ, and I thought VC would send the converted machine's data to my ESX server. Wrong.

I needed the actual ESX server to have both a VMkernel and a Service Console interface into the DMZ as well - in other words, the machine being converted needs to actually be able to reach the ESX server's VMkernel/SC to migrate; being able to reach your VirtualCenter server is simply not enough.
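For the record, the fix amounted to adding DMZ-facing interfaces on the ESX host itself. Sketched from the ESX 3.x service console (the port group names and IPs below are made up, not our real ones):

```
# Add DMZ port groups to an existing vSwitch
esxcfg-vswitch -A "DMZ-VMkernel" vSwitch1
esxcfg-vswitch -A "DMZ-Console" vSwitch1

# VMkernel interface in the DMZ (Converter pushes disk data here)
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "DMZ-VMkernel"

# Second service console interface in the DMZ
esxcfg-vswif -a vswif1 -p "DMZ-Console" -i 192.168.50.11 -n 255.255.255.0
```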

/gjr
