VMware Cloud Community
nzsteve

Nexus 1000v Design - COS and vCenter

Hello everyone. This is my first venture into the 1000v, so I'm hoping someone can give an opinion on whether my thoughts for the high-level design are along the right lines.

  • Host servers have 4 x 10Gb ports and 2 x 1Gb ports

  • ESXi 4.1, Enterprise Plus, and 1000v licensed for each host

  • We'll be using the Nexus 1010 appliances to host the VSMs, so no need to run VSM VMs on the ESXi hosts

  • The 10Gb ports will be allocated to the 1000v vDS

    • Port profiles created for vMotion and all required virtual machine VLANs

    • No IP storage requirement

  • The 1Gb ports on a standard vSwitch for VMkernel Management Port

    • The thinking is that if vCenter goes offline we can't manage the vDS (I assume this still applies to the 1000v as it does to the VMware vDS?), so it's preferable to have the hosts on a network that we can always access and change if required

    • The 1010 appliances will sit on this network (or be routable to/from it)

  • vCenter will be installed on a physical server

    • No vCenter Heartbeat, so only a single instance running

    • Due to the reliance on vCenter for 1000v port configuration it's probably a good idea not to have vCenter as a VM connected to a vDS port for its virtual machine traffic?

    • The physical vCenter will need connectivity to the VMkernel management port on each host and to the 1010 appliances

    • As an alternative, I guess we could put a VM port on the 1Gb connections for vCenter, but that would start complicating the design and management


      Does that all sound OK, or is it too conservative? Should we be looking again at adding the 1Gb ports into the 1000v uplinks, putting VMkernel management on a separate port profile, and running a virtualised vCenter (which is what I would normally use on new deployments)?

      Thanks in advance for any feedback,

      Steve

      Accepted Solution
      RBurns-WIS

      Greetings Steve,

      Disclaimer - I work for Cisco and I'm pro consolidated networking. :)

      There are heaps of similar questions and suggestions already posted on the Cisco 1000v Community: https://www.myciscocommunity.com/community/products/nexus1000v?view=discussions

      Here's a recent post similar to yours: https://www.myciscocommunity.com/thread/17624?tstart=0

      In regards to some of your design considerations, many of them come down to your comfort & expertise levels.  Having vCenter on a physical host does let you keep it outside your virtual environment, but you lose the advantages of better host resource utilization and of VMotion, which helps limit downtime during host maintenance.  With vCenter as a VM running on the DVS, you'd likely make the port profile the VC is assigned to a "system vlan".  This ensures your VC's network connectivity is ALWAYS forwarded, even if the VSMs are offline and the VEM host running the VC is rebooted.  That added protection removes much of the risk of running the VC on the DVS it manages.
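
      As a rough illustration (the profile name and VLAN ID below are placeholders, and exact syntax can vary by 1000v release), the vethernet port profile backing the vCenter VM's port group might look like this, with "system vlan" being the piece that keeps the port forwarding even when no VSM is reachable:

        port-profile type vethernet vcenter-vm
          vmware port-group
          switchport mode access
          switchport access vlan 100
          ! placeholder VLAN for the vCenter network; forwarded even if the VSM is unavailable
          system vlan 100
          no shutdown
          state enabled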

      In regards to your VSM availability, if both 1010s are down then yes, you won't be able to make any configuration changes on the VSM.  Once active & configured, though, there shouldn't be many situations where both VSMs are offline and you're in dire need of an immediate configuration change.  The VEM hosts can survive fine in headless mode (no VSM present), assuming they're not rebooted before the VSM is restored.  Added protection for important virtual connections such as management, IP storage and control traffic includes the use of "system vlans" as detailed above.
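
      On the uplink side, the same protection comes from listing those critical VLANs as system VLANs on the ethernet (uplink) port profile.  A minimal sketch, with placeholder VLAN IDs (10 for management, 20 for 1000v control/packet, 30-40 for VM traffic) and mac-pinning chosen purely for illustration:

        port-profile type ethernet dvs-uplink
          vmware port-group
          switchport mode trunk
          switchport trunk allowed vlan 10,20,30-40
          ! management and control/packet VLANs keep forwarding in headless mode
          system vlan 10,20
          channel-group auto mode on mac-pinning
          no shutdown
          state enabled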

      Every adapter in use means additional management and additional upstream switch ports.  With 4 x 10G adapters in each host I'd find it hard to justify also utilizing the 1G connections, unless you opt for "out of band" connections for your management interfaces on a vSwitch.  If you're comfortable running all your virtual interfaces, including Management & VMotion, on the DVS, you can better consolidate your resources by using only your 10G adapters for uplinks.  By running everything on the 1000v you can easily apply QoS & limit bandwidth usage by port profile - new features just released in 1000v version 1.4 make this incredibly easy to configure.  Definitely worth checking out.  See my post here for some of the new features of the 1000v: https://www.myciscocommunity.com/thread/17120?tstart=0
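
      For what that consolidation could look like, here's a rough sketch of vethernet port profiles for the Management and VMotion vmkernel interfaces on the 1000v (profile names and VLAN IDs are again placeholders); marking the management VLAN as a system VLAN is what makes it reasonable to move host management off the vSwitch:

        port-profile type vethernet vmk-management
          vmware port-group
          switchport mode access
          switchport access vlan 10
          ! host management stays reachable even with no VSM present
          system vlan 10
          no shutdown
          state enabled

        port-profile type vethernet vmk-vmotion
          vmware port-group
          switchport mode access
          switchport access vlan 30
          no shutdown
          state enabled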

      Hope this helps.

      Regards,

      Robert

      nzsteve

      Thanks Rob, that's really useful - lots to read and understand!

      The final link you've posted has some info on QoS queuing (WFQ). I presume you would recommend we implement this and not NIOC on the vSphere side? Or is there an advantage to having both of them configured?

      Thanks,

      Steve

      RBurns-WIS

      It's limited to one or the other.  VMware's NIOC is applied to their vDS, while QoS/WFQ is applied to the Nexus 1000v DVS.  The 1000v QoS features actually tie into the VMware APIs for NIOC.  We do add some enhancements above & beyond what the VMware DVS can offer in I/O control, including utilizing up to 16 traffic classes for customized queuing.  Just like NIOC, the 1000v can auto-classify Service Console/Management, VMotion, FT, NFS and iSCSI traffic, and additionally 1000v Control traffic.  Don't get me wrong, VMware has done a great job adding more options for traffic control compared to the Traffic Shaping options of previous versions.  Where the 1000v also excels is being able to set CoS markings on your traffic so the priorities can be honored on upstream network devices.  The 1000v will also track your queuing & class stats so you can monitor, optimize & adjust them as required.
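
      As a rough example of the CoS marking piece (policy and profile names are placeholders, and the exact service-policy syntax can differ between 1000v releases), you could mark everything entering a VMotion port profile with CoS 4 and let the upstream switches honor that priority:

        policy-map type qos mark-vmotion
          class class-default
            ! tag all traffic from this port profile with CoS 4
            set cos 4

        port-profile type vethernet vmk-vmotion
          service-policy input mark-vmotion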

      One last trick up our sleeve is the DFQ (dynamic fair queuing) algorithm we've implemented.  It ensures that each traffic class within a port channel gets its fair share of bandwidth and meets the bandwidth guarantees that are configured.  The bandwidth reservation is done on each uplink, even when the uplinks are part of an aggregate port channel.  Take a single VMotion, for example, which uses only a single uplink even when the uplinks are port channeled.

      If the VMotion traffic class asks for 30% of the aggregate bandwidth and only a single VMotion is happening, it gets 30% of a single uplink, not of the aggregate.  So, if that uplink gets congested, we give preferred treatment to VMotion by increasing the bandwidth share for the VMotion traffic class - but only if the other classes on the same uplink are getting their due bandwidth from other uplinks.
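
      For reference, the 30% guarantee in that example would correspond roughly to a queuing policy like the sketch below, applied to the uplink port profile (class, policy and profile names are placeholders, and I'm assuming the 1.4-style "match protocol" classifier for VMotion - check the QoS configuration guide for your release):

        class-map type queuing match-any cq-vmotion
          ! assumed auto-classification of vmkernel VMotion traffic
          match protocol vmw_vmotion

        policy-map type queuing uplink-queuing
          class type queuing cq-vmotion
            ! guarantee 30% of each uplink's bandwidth under congestion
            bandwidth percent 30

        port-profile type ethernet dvs-uplink
          service-policy type queuing output uplink-queuing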

      If you have any other questions, don't hesitate to post them.

      Regards,

      Robert
