VMware Cloud Community
DSeaman
Enthusiast

Separate 1000v VSM cluster with no VEMs?

I was reading through some Cisco Nexus 1000v documentation, and it seems best practice recommends against running VSMs on ESX hosts with VEMs that they manage. For example, the environment could be a 4-node ESX cluster with two VSMs, where each node has a VEM. If this is bad ju-ju, it seems I have two primary alternatives:

1) Establish a 2-node ESX cluster, using only standard vSwitches, to host both VSMs. This cluster would have full network connectivity to the ESXi hosts that do have VEMs and fully utilize the 1000v features. All other production VMs would run on the 1000v-enabled cluster.

2) Buy a pair of Cisco Nexus 1010s, which would host the redundant VSMs.

If the VSM/VEMs have a tight dependency on vCenter to function, would it make sense to host vCenter/SQL and one AD server on the 2-node VSM cluster so that it's self-sufficient? Basically, the 2-node VSM cluster would be the control center for your networking and VMs, which in turn manages the primary cluster(s) of production VMs?

Derek Seaman
5 Replies
logiboy123
Expert

Can you copy and paste the section outlining the issues with putting a VSM on a cluster running VEMs?

DSeaman
Enthusiast

Nexus 1000v Getting Started Guide, section 5:

The VSM and VEM can run on the same host. In this case, the VSM communicates with the co-located VEM and other VEMs in the network using its own switch. The following are examples of networks where you could run a VSM on its own host:


  • Environments where the server administrator can guarantee that the VSM VM will not be mistakenly powered down or reconfigured.
  • Test and demonstration setups.


To avoid any possibility of losing communication with its VEMs, it is recommended that the VSM be installed on a separately-managed server.

The following are examples of networks where you are advised to run your VSM on a separate host from its VEMs:
  • Environments where the server administrator cannot guarantee the virtual machine for the VSM will be available and will not be modified.
  • Environments where server resources (CPU, memory, network bandwidth) cannot be guaranteed for the VSM.
  • Environments where network administrators have their own ESX server hosts to run network services.
  • Environments where network administrators need to quickly create, destroy, and move VSMs without server administrator interaction.

Derek Seaman
logiboy123
Expert

So the bigger concerns are that someone may power down the VSM and/or you run up against resource contention.

You could buy the physical VSM appliance from Cisco if this is a major concern, but from the sounds of your setup I wouldn't be worried about putting my VSM primary and secondary in a cluster that they manage.

RBurns-WIS
Enthusiast
Accepted Solution

There's absolutely no problem with running your VSM on your production cluster. Do you really want to burn two hosts just for hosting your VSMs? If your VMware infrastructure already dedicates a cluster to management VMs and devices, this might make sense; otherwise, let them run with the rest of your VMs. VSMs come pre-programmed with CPU/memory reservations, so performance shouldn't be an issue.
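If you ever want to sanity-check that those reservations haven't been stripped off the VSM VM, here is a minimal sketch. The minimum values are illustrative assumptions (not Cisco's published figures; check the release notes for your 1000v version), and the pyVmomi property paths shown in the comments are just one way you might pull the live numbers from vCenter:

```python
# Sketch: flag a VSM VM whose CPU/memory reservations have been removed.
# MIN_* values below are assumed placeholders, not official requirements.

MIN_CPU_MHZ = 1500   # assumed CPU reservation floor, in MHz
MIN_MEM_MB = 2048    # assumed memory reservation floor, in MB

def reservations_ok(cpu_reservation_mhz, mem_reservation_mb):
    """Return True only if both reservations meet the assumed minimums."""
    return (cpu_reservation_mhz >= MIN_CPU_MHZ
            and mem_reservation_mb >= MIN_MEM_MB)

# With pyVmomi you would feed in live values from vCenter, e.g.:
#   cpu = vsm_vm.config.cpuAllocation.reservation     # MHz
#   mem = vsm_vm.config.memoryAllocation.reservation  # MB
#   reservations_ok(cpu, mem)

print(reservations_ok(1500, 2048))  # reservations intact -> True
print(reservations_ok(0, 2048))     # CPU reservation stripped -> False
```

Drop something like this into a nightly script and you cover the documentation's "server admin can't guarantee the VM won't be modified" concern without dedicating hardware.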

I've seen hundreds of customers operating like this.  It's been a fully supported configuration since version SV1.2.

As for the Nexus 1010s, yes, these are great, but mainly because they offer additional features such as the NAM and other service blades on the horizon. I'd rather take the efficiency and mobility (VMotion) of a virtual sup than have to dedicate two more power-consuming servers.

Regards,

Robert

DSeaman
Enthusiast

Cool, thanks... I'll run them on the production clusters.

Derek Seaman