VMware Cloud Community
scottm_occ
Enthusiast

1000v Help

Hello,

We are in the process of trying to implement the 1000v. I would like to say in advance that I am not the network specialist on our team, so please excuse anything I do not state correctly. We have 16 blade hosts running ESXi 4.1; each blade has 4 pNICs - 2 @ 10 Gb and 2 @ 1 Gb.

My initial thought is that one of the 1 Gb NICs is used for the service console and one for vMotion. Then the two 10 Gb NICs will be used for VM guest traffic. I am confused as to the best practice for service console traffic - should it be kept on a separate local vSwitch or be migrated to the 1000v vDS?

8 Replies
vmroyale
Immortal

Hello.

Check out the dvSwitch articles at Duncan Epping's site for some good information.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
lwatta
Hot Shot

Best practice from the Cisco side is to put all connections on the N1KV DVS. We QA with that config and are comfortable saying it's a best practice to put all connections on the N1KV.

That being said, since you are new to the N1KV, there is nothing wrong with starting out using the N1KV for only VM data traffic and keeping control traffic separate. Once you are comfortable you can either stay in that config or migrate the connections to the N1KV. We support migration, and you can run a VMware vSwitch or VMware DVS at the same time you are running the N1KV. You can't share the uplink connections, but you can have all three running on the ESX host at the same time without any issues.

Should you decide to migrate all connections to the N1KV just make sure you understand a few concepts.

Make sure to read all the options for port-channels. The choice will mostly depend on what your upstream switches can support, but use either LACP or vPC MAC pinning. Keep in mind you can't mix 1G and 10G links in a port-channel; they have to be the same speed.
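For example, inside the uplink port-profile the two options look something like this (just a sketch, the profile name is a placeholder):

port-profile type ethernet system-uplink
  channel-group auto mode active

for LACP (the upstream 3120 ports would need a matching LACP port-channel, e.g. "channel-group 3 mode active"), or

port-profile type ethernet system-uplink
  channel-group auto mode on mac-pinning

for vPC host mode with MAC pinning, which needs no port-channel configuration on the upstream switch.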

Make sure you understand System VLANs. Essentially they allow network connectivity even if the VEM cannot communicate with the VSM. It's very important that the VLAN your SC (Service Console) is on be set as a system VLAN on both the uplink port-profile and the vEthernet port-profile.
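For example, if your SC sits on VLAN 100 (a placeholder here, use whatever VLAN it actually lives on; VLAN 200 stands in for your VM data VLAN), you would want something along these lines on both profiles:

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 100
  state enabled

port-profile type vethernet service-console
  vmware port-group
  switchport mode access
  switchport access vlan 100
  system vlan 100
  no shutdown
  state enabled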

louis

francesme2
Contributor

I am working with Scott to setup the Cisco 1000v.

The upstream switches support LACP. They are 3120G/X switches, and each 3120X has a 10 Gb connection back to a 4948.

4948 port-channel configuration to the 3120:

interface Port-channel3
 description Blade
 switchport
 switchport trunk native vlan 500
 switchport trunk allowed vlan 5,500,2004,2008,2012,2016,2020
 switchport mode trunk

3120 port-channel configuration to the 4948:

interface Port-channel3
 description BLADE
 switchport trunk native vlan 500
 switchport trunk allowed vlan 5,500,2004,2008,2012,2016,2020
 switchport mode trunk

Trunk configurations appear to be correct.

Both VSMs are installed; however, upon running VUM to install the VEM we get the following error: "VDS operation failed on host aluminum.owens.edu, got (vmodl.fault.SystemError) exception". Then we lose all communication with the ESX server.


After looking at the config of the 1000v VSM, it appears it may be a system-uplink issue.

sh port-profile usage - no system-uplink displayed.

sh int brief - no VEM displayed.

I believe I have an issue with system-uplink communication.

Below is the port-profile that I created for system-uplink:

port-profile system-uplink
  type: Ethernet
  description:
  status: enabled
  max-ports: 32
  inherit:
  config attributes:
    switchport mode trunk
    channel-group auto mode on sub-group cdp
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    channel-group auto mode on sub-group cdp
    no shutdown
  assigned interfaces:
  port-group: system-uplink
  system vlans: none
  capability l3control: no
  capability iscsi-multipath: no
  port-profile role: none
  port-binding: static

Cisco has conflicting documentation for the Cisco 1000v.

If possible, can you offer insight as to whether this configuration looks correct?

Thank you,


Frances
lwatta
Hot Shot

Which switch is connecting to the ESX host, the 3120 or the 4948? What do the ports from that switch to the ESX host look like?

Also can I get the show run from the N1KV?

louis

francesme2
Contributor

3120G (example of how the ports are configured)

interface GigabitEthernet3/0/1
switchport trunk allowed vlan 2004,2008,2012,2016,2020
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
end

4948 is the upstream switch connected to the 3120 (stacked).

Cisco 1000v

version 4.2(1)SV1(4)
no feature telnet

banner motd #Nexus 1000v Switch#

ssh key rsa 2048
ip domain-lookup
ip domain-lookup
hostname VSM1

vrf context management
  ip route 0.0.0.0/0 172.21.11.254
vlan 1
vlan 500
  name server
vlan 2004
  name vmotion
vlan 2008
  name mgmt
vlan 2012
  name packet/ctrl
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type ethernet control-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled
port-profile type ethernet packet-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled
port-profile type ethernet vmotion-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled
port-profile type ethernet VMguest-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled
port-profile type vethernet vVMguest-uplink
  vmware port-group
  switchport mode access
  switchport access vlan 500
  no shutdown
  state enabled
port-profile type vethernet vVmotion-uplink
  vmware port-group
  switchport mode access
  switchport access vlan 2004
  no shutdown
  state enabled
port-profile type ethernet mgmt-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled
port-profile type vethernet vmgmt-uplink
  vmware port-group
  switchport mode access
  switchport access vlan 2008
  no shutdown
  state enabled
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled

vdc VSM1 id 1
  limit-resource vlan minimum 16 maximum 2049
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 32 maximum 32
  limit-resource u6route-mem minimum 16 maximum 16
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8

interface mgmt0
  ip address 172.21.8.51/22

interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4.bin sup-1
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4.bin sup-2
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4.bin sup-2
svs-domain
  domain id 50
  control vlan 2012
  packet vlan 2012
  svs mode L2
svs connection vcenter
  protocol vmware-vim
  remote ip address 205.133.0.65 port 80
  vmware dvs uuid "85 56 12 50 02 ce 04 e1-6d 65 31 c4 aa b9 89 24" datacenter-name Toledo-Prod
  connect
vnm-policy-agent
  registration-ip 0.0.0.0
  shared-secret **********
  log-level


lwatta
Hot Shot

Ok we need to address a couple of things.

First "port-profile type eth" get assigned to physical nics. You have quite a few "eth" type port-profiles defined. Generally most people define one or maybe two "eth" port-profiles and trunk the vlans they need across the link. To start I would stick with just the "system-uplink" you defined.

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled

I would change this port-profile to look like

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 2004,2008,2012
  state enabled

I changed the channel-group from CDP sub-groups to mac-pinning. This is the preferred method of channel-group because it acts more like NIC teaming on a vSwitch.

I also added system vlan 2004,2008,2012. This command lets the VEM pass traffic on these VLANs even before it has been programmed by the VSM. You always want the control, packet, mgmt, and vmk interfaces to have a system VLAN set.

Start off by adding a free NIC from the ESX host to the N1KV. Don't migrate any connections (like the Service Console and vmk); just add the NIC and verify that the host shows up as a module on the VSM. Once that works, add some port-profiles for VMs, migrate them by editing the VM settings, and verify that you have network connectivity.
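Something like this is enough for a first VM test (VLAN 500 is the server VLAN from your running config; adjust as needed):

port-profile type vethernet VMguest-data
  vmware port-group
  switchport mode access
  switchport access vlan 500
  no shutdown
  state enabled

On the VSM, "show module" should list the ESX host as a new module (VEMs start at module 3) once the NIC is added, and "show interface brief" should show the corresponding Ethernet uplink.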

louis

scottm_occ
Enthusiast

We have the capability to add all the NICs now, as the 16 hosts in the blade cluster are still pre-production. Ideally I would like to have the vmk on pNIC3 with pNIC2 available as standby (failover), vMotion on pNIC2 with pNIC3 as the standby for that, and then pNIC0 and pNIC1 teamed for the guest traffic.

lwatta
Hot Shot

I understand. That's possible with the N1KV, just a little different from the way you do it with a vSwitch or the VMware DVS.

To start I would stick with a vSwitch and N1KV. Keep your vswif and vmk interfaces on the vSwitch. Get comfortable with the concepts and design of the N1KV and then migrate the vSwitch to the N1KV once you are ready.

When you do migrate everything you will need two uplink port-profiles. You can't mix different-speed NICs in a port-channel, so you will need one port-profile for the 1 Gb NICs and another for the 10 Gb NICs. If you use a vPC MAC pinning port-channel you can specify which traffic you want on a particular uplink. Both uplinks are active at the same time, but you can pin traffic to a particular link. If that link goes down, traffic automatically fails over to the other active link.

One big caveat is that you cannot duplicate VLANs across uplinks. So the uplink with the 1 Gb links cannot have the same VLAN list as the uplink with the 10 Gb links. Currently the N1KV has no way to know which uplink should be used when the same VLAN is passed on both uplinks. You'll end up seeing duplicate broadcasts and it will fill up the logs pretty quickly, so be aware of that limitation.
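As a rough sketch of the two uplinks with MAC pinning (the VLAN split here is an assumption based on your earlier plan: control/packet, mgmt, and vMotion on the 1 Gb pair, VM data on the 10 Gb pair):

port-profile type ethernet uplink-1g
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 2004,2008,2012
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 2004,2008,2012
  state enabled

port-profile type ethernet uplink-10g
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 500
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

Note the two allowed VLAN lists don't overlap. If I remember right, you can then use the pinning id command under the vEthernet port-profiles to prefer a particular NIC within a pair, which gives you the active/standby behavior you described.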

louis
