VMware Cloud Community
mrudloff
Enthusiast

Best practice when using 3750G Gigabit switches?

I am by no means a network guru and only really know my way "around" Cisco devices.

It is getting better, but hey, sometimes you get thrown into cold water and are asked to swim. Anyway, enough gibber-jabber.

Here is my question really.

I just started looking into implementing the Nexus on a newly planned cluster, mainly for the ability to apply ACLs.

Here is my environment. The ESX servers are connected to a 3750G stack.

3 hosts running in a DRS / HA cluster. Each host has 6 NICs across three vSwitches:

1 vSwitch with 2 NICs is used for the Service Console

1 vSwitch with 2 NICs is used for the VMkernel interfaces connecting to an iSCSI SAN (we might use QLogic cards instead - that is undecided)

1 vSwitch with 2 NICs is used for the Virtual Machine Network

One NIC of each vSwitch is connected to each switch of the stack.

The switch ports themselves are trunked, so each port group currently requires its own VLAN.

I am now at the stage where I have to configure the uplinks and port profiles of the Nexus and migrate the existing port groups over.

Is it correct that for the above I create an uplink port profile with:

conf
port-profile uplinks
  capability uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all

But what channel-group mode should I use with the 3750G? Someone suggested

channel-group auto mode on sub-group cdp

Is this correct? I understand that when I intend to add multiple NICs at a later stage I need to configure port-channels - I suppose that is done on the physical switch?
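To make my question concrete: on the 3750G side I assume a static cross-stack EtherChannel would look something like the sketch below (the interface names and channel-group number just mirror the Po10 example from my other stack further down - they are not from this cluster):

conf t
interface range GigabitEthernet4/0/10 , GigabitEthernet5/0/10
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode on

Is that the matching config for channel-group mode on on the Nexus side?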

Also, is it correct that when I connect both uplinks to the same switch for testing purposes I need to use

channel-group auto mode on

instead?
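For reference, the complete uplink profile I have in mind would look roughly like this - the VLAN IDs are placeholders and the channel-group line is exactly the part I am unsure about:

conf
port-profile uplinks
  capability uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all
  channel-group auto mode on
  system vlan 10,20
  no shutdown
  state enabled

Please correct me if any of those lines are wrong or unnecessary.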

Here is an example of a port-channel summary from another 3750G stack:

Flags:  D - down        P - in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 48
Number of aggregators:           48

Group  Port-channel  Protocol    Ports
------+-------------+-----------+----------------------------------------------
10     Po10(SU)          -       Gi4/0/10(P) Gi5/0/10(P)

And the port-channel itself:

switchport trunk encapsulation dot1q
switchport mode trunk

And the interfaces themselves:

switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast

About the Service Console port-profile - would this be correct in this case?

conf
port-profile ServiceConsole
  vmware port-group
  switchport mode access
  switchport access vlan xyz


I would also configure the Virtual Machine port-profile the exact same way as the ServiceConsole - is this correct? Or would it have to be in the system VLAN?
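In other words, something like this for the VM network - the profile name and VLAN ID are just placeholders I made up:

conf
port-profile VMNetwork
  vmware port-group
  switchport mode access
  switchport access vlan abc
  no shutdown
  state enabled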

The Service Console and VMkernel VLANs are basically the same VLAN as the private network of each virtual machine (the VMs are connected to two networks), and each virtual machine should also have access to a second VLAN, which is then connected to the internet.

We could use a different VLAN for the VMkernel ports (as they will be connected directly to the SAN), but the Service Console VLAN has to be the same VLAN as one of the Virtual Machine Networks.
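Related to the system VLAN question: my understanding is that the Service Console (and iSCSI VMkernel) profiles usually need a system vlan statement so they come up before the VEM can talk to the VSM. Is that what applies here? Something like this, with xyz again a placeholder:

conf
port-profile ServiceConsole
  vmware port-group
  switchport mode access
  switchport access vlan xyz
  system vlan xyz
  no shutdown
  state enabled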

Does this make sense at all? Sorry to ask so many questions - hopefully someone has the time to answer, but I want to do it right the first time.

We are also considering leaving the iSCSI / VMkernel port groups on legacy vSwitches and just moving the Virtual Machine network onto the Nexus ... Thoughts?
