Couple questions on this:
1) Is there a downside to using the DVS for management? I seem to remember, at least in the earlier days of the DVS, that there was a dependency on vCenter, such that if vCenter was down the hosts lost connectivity. I'm pretty sure that is no longer the case? But are there any actual downsides to using the DVS for management?
2) Is there any reason NOT to use a 1000v vSwitch for the management port groups?
1) There is still a dependency on vCenter for management, but if vCenter is down the vDS will continue to function; you just will not be able to manage the switch.
2) The 1000v is a Cisco-developed vDS, so the same applies, and there should be no issues using it for the management port groups.
While it has gotten much better over time, there is still a risk in putting mgmt. on the DVS. The most common issue you could face is a VM powering on after a loss of the VC or VCDB being unable to get a dvPort, for example. With that said, 99% of my 1000v deployments have everything (mgmt, vMotion, VM traffic, etc.) all on the same 10Gb bundle. It's a tradeoff, but it makes financial sense not to have additional ports, cabling, etc. If it's an all-copper deployment, it's a lot more tempting to separate mgmt. because you have so many cheap ports available. On 10Gb, though, it's usually all in and a roll of the dice.
The one place I see this explicitly separated is in dedicated management farms (i.e. 2 to 3 ESXi hosts whose sole purpose is managing the virtual infrastructure: vCenters, syslog, monitoring, and perhaps email/alerting). Creating a mgmt. farm is becoming more and more popular with companies that can afford it and value the granularity and separation it provides. Other than that, the advice is really just to go all-in, but learn how to steal a NIC from your DVS and bring it up on a VSS in case you get a VM that's not able to obtain a dvPort (sketched below). This scenario is especially common with a virtualized VC and VCDB, so it's worth studying well.
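For what it's worth, here's a rough PowerCLI sketch of that rescue. Everything here is a placeholder (esx01, vmnic1, VLAN 20, the VM name); you connect straight to the host since vCenter is down, and you should confirm which vmnic you can safely pull from the DVS first:

Connect-VIServer -Server esx01 -User root          # straight to the host; vCenter is down
$vmhost = Get-VMHost
$vmnic  = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1

# Temporary standard switch plus a port group on the management VLAN
$vss = New-VirtualSwitch -VMHost $vmhost -Name vSwitchRescue
New-VirtualPortGroup -VirtualSwitch $vss -Name PG-Rescue -VLanId 20

# Pull the uplink over from the DVS to the standard switch
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $vmnic -Confirm:$false

# Point the stuck VM (e.g. a virtualized VC) at the rescue port group
Get-VM vcenter01 | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName PG-Rescue -Confirm:$false

If PowerCLI can't steal the uplink from the proxy switch, esxcfg-vswitch from the ESXi shell (or the DCUI's network restore) gets you to the same place.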
I use the DVS for my management (in fact I have no standard switches at all). My primary reason is LLDP, which the vDS supports but the standard vSwitch does not.
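For anyone curious, turning it on is a one-liner in PowerCLI (the switch name is just an example):

# Enable LLDP in both advertise and listen mode on the vDS
Get-VDSwitch -Name dvSwitch01 | Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both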
No issues putting MGMT on the Nexus 1000v now. You just want to make sure that you use the system vlan command on your MGMT, vMotion, storage, and n1kv layer 3 control port profiles. You also need to include those system VLANs in the ethernet uplink port profile.
So if VLAN 10 is vMotion, 11 is NFS, and 20 is management (and n1kv L3 control), it would look like this, in addition to all of the other port-profile configuration:
port-profile type ethernet uplink
  system vlan 10,11,20
port-profile type vethernet vmotion
  system vlan 10
port-profile type vethernet NFS
  system vlan 11
port-profile type vethernet MGMT
  system vlan 20
port-profile type vethernet n1kv-l3
  system vlan 20
Hi Heath,
Nice! I just found your blog. You have some great 1000v content there!
Thanks, but I'm the worst blogger in the world. I mostly use it to record the solutions to problems I couldn't find with Google, so hopefully it will save someone else some time.
Let me know if you have any 1kv questions, though; I've been running it since Jan 2012 with about 4k VMs on the 1000v. We just bought NSX for one of our environments, so that will probably start our transition off of the 1000v. I'm going to miss it; I'm having to learn PowerShell / PowerCLI to be able to do anything fast with the VDS.
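For example, the kind of task I'm picking it up for, stamping out a batch of port groups (switch name and VLAN range are made up):

# Create one port group per VLAN on the vDS
$vds = Get-VDSwitch -Name dvSwitch01
foreach ($vlan in 100..110) {
    New-VDPortgroup -VDSwitch $vds -Name "VLAN$vlan" -VlanId $vlan -NumberOfPorts 32
}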
HeathReynolds wrote:
We just bought NSX for one of our environments, so that will probably start our transition off of the 1000v.
Bummer! Well that should be exciting though. Best of luck and I hope to see some content with your thoughts once you get it going.