DyJohnnY
Enthusiast

how many distributed 1000V switches per vcenter?

Hi,

This may sound like an easy, silly question, but how many 1000V distributed switches can I create in a vCenter instance?

I heard about a limitation like "1 x 1000V per vCenter" but could not confirm it with any information I found on the Internet.

Thanks,

Ionut

IonutN

Accepted Solutions
p0wertje
Hot Shot

I found this on the Cisco site:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/qa_c67-556624.html

Q. The service console and virtual machines typically are connected to two different vSwitches. Can two Cisco Nexus 1000V instances be started?

A. The Cisco Nexus 1000V architecture, with a single VEM per host and advanced networking capabilities, allows proper segmentation of VMware ESX functions while still providing a consistent management entity. Only one VEM instance is required per physical host.

System Overview

The Cisco Nexus 1000V Series is a software-based switch that spans multiple hosts running VMware ESX or ESXi 4.0. It consists of two components: the Virtual Supervisor Module, or VSM, and the Virtual Ethernet Module, or VEM. The VSMs are deployed in pairs that act as the switch's supervisors. One or more VEMs are deployed; these act like line cards within the switch.

The VSM is a virtual appliance that can be installed independent of the VEM: that is, the VSM can run on a VMware ESX server that does not have the VEM installed. The VEM is installed on each VMware ESX server to provide packet-forwarding capability. The VSM pair and VEMs make up a single Cisco Nexus 1000V Series Switch, which appears as a single modular switch to the network administrator.

Each instance of the Cisco Nexus 1000V Series Switch is represented in VMware vCenter Server as a vNetwork Distributed Switch, or vDS. A vDS is a VMware concept that enables a single virtual switch to span multiple VMware ESX hosts. The Cisco Nexus 1000V Series is created in VMware vCenter Server by establishing a link between the VSM and VMware vCenter Server using the VMware VIM API.

VMware's management hierarchy is divided into two main elements: a data center and a cluster. A data center contains all the components of a VMware deployment, including hosts, virtual machines, and network switches such as the Cisco Nexus 1000V Series.

Note: A VMware ESX host can have only a single VEM installed.
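The architecture quoted above can be restated as a minimal sketch in code (hypothetical class and method names, not a Cisco or VMware API): a VSM pair plus its VEMs form one logical switch, and any ESX host can carry only a single VEM.

```python
class Nexus1000V:
    """One logical switch: a VSM pair as supervisors, VEMs as line cards."""

    _vem_owner = {}  # class-wide registry: ESX host -> switch name

    def __init__(self, name):
        self.name = name
        self.vsm_pair = ("VSM-primary", "VSM-secondary")  # deployed in pairs
        self.vem_hosts = []  # hosts carrying a VEM for this switch

    def install_vem(self, host):
        # "A VMware ESX host can have only a single VEM installed."
        owner = Nexus1000V._vem_owner.get(host)
        if owner is not None:
            raise ValueError(f"{host} already has a VEM (switch {owner})")
        Nexus1000V._vem_owner[host] = self.name
        self.vem_hosts.append(host)

switch = Nexus1000V("n1kv-dc1")
switch.install_vem("esx01")
switch.install_vem("esx02")
```

Attempting to install a second VEM on either host, even from a second switch instance, raises an error, which is the per-host constraint the Q&A describes.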

Cheers,
p0wertje | VCIX6-NV | JNCIS-ENT | vExpert
Please kudo helpful posts and mark the thread as solved if solved

6 Replies
vmroyale
Immortal

Hello.

This may sound like an easy, silly question, but how many 1000V distributed switches can I create in a vCenter instance?

16 in vSphere 4.0 and 32 in 4.1; you can find this information in the Configuration Maximums 4.0 or Configuration Maximums 4.1 documents.
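Those per-vCenter maximums can be sketched as a small lookup (illustrative only; the names are made up, and the values are the ones from the Configuration Maximums documents cited above):

```python
# vDS maximums per vCenter, per the VMware Configuration Maximums documents.
VDS_PER_VCENTER = {"4.0": 16, "4.1": 32}

def can_create_vds(version, existing):
    """True if another vDS still fits under the per-vCenter maximum."""
    return existing < VDS_PER_VCENTER[version]

print(can_create_vds("4.0", 15))  # True: the 16th switch still fits
print(can_create_vds("4.0", 16))  # False: 4.0 maximum reached
```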

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
lwatta
Hot Shot

Just to add: a Nexus 1000v instance is tied to a VMware vCenter Datacenter, not a cluster, and can support up to 64 ESX hosts. This is true in both 4.0 and 4.1.
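A quick sketch of that per-instance limit (illustrative constants and function, not an API):

```python
# One Nexus 1000v instance is tied to a vCenter Datacenter and can
# support up to 64 ESX hosts (vSphere 4.0 and 4.1).
MAX_HOSTS_PER_INSTANCE = 64

def can_add_host(current_host_count):
    """True while the instance is still below its 64-host ceiling."""
    return current_host_count < MAX_HOSTS_PER_INSTANCE

print(can_add_host(63))  # True
print(can_add_host(64))  # False
```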

louis

DyJohnnY
Enthusiast

Hi,

Thanks everyone for the fast feedback.

So essentially each host can have only one VEM attached to it, and that VEM can be attached to only one VSM at a time.

Also, I already knew there can be up to 16 vDSs; what I know now is that the 1000v counts just like any other vDS.

In this case it does make sense to have two 1000vs, one for each datacenter, right?

thanks for all the help,

ionut

IonutN
lwatta
Hot Shot

Your summary is correct. If you have two vCenter Datacenters, you would have two VSMs. Keep in mind that we license based on the number of VEMs, not the number of VSMs, so it won't cost you more to have multiple VSMs. The licenses do, however, get tied to the host ID of the VSM, so it's not easy to move licenses from one VSM to another.

louis

MichaelW007
Enthusiast

The referenced configuration maximums are for a VMware vDS, not the Nexus 1000v. Up to and including release 4.2(1) SV1(4a), the maximum number of Nexus 1000v vDS per vCenter was 12. The new version, 4.2(1) SV1(5.1), released on 31/01/2012, supports 12 Nexus 1000v vDS per vCenter when using vCloud Director and 32 Nexus 1000v vDS per vCenter when not using vCloud Director. Note that the maximum number of VMware vDS per vCenter in vSphere 5 is still 32. The maximum number of Nexus 1000v vDS per vCenter Datacentre is still 1, so it remains a 1:1 mapping.
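The Nexus 1000v-specific maximums above can be summarised in a small helper (illustrative names; the release strings and numbers are the ones stated in this reply):

```python
# Per-vCenter Nexus 1000v vDS maximums for the two releases discussed.
def max_n1kv_vds_per_vcenter(release, uses_vcloud_director=False):
    if release == "4.2(1) SV1(4a)":
        return 12
    if release == "4.2(1) SV1(5.1)":
        # 12 with vCloud Director, 32 without.
        return 12 if uses_vcloud_director else 32
    raise ValueError(f"release not covered here: {release}")

print(max_n1kv_vds_per_vcenter("4.2(1) SV1(4a)"))   # 12
print(max_n1kv_vds_per_vcenter("4.2(1) SV1(5.1)"))  # 32
```

Either way, the per-Datacenter mapping stays 1:1.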
