I am currently involved in a project where the Nexus 1000V will be used. Since this is my first customer case with this particular technology I am trying to establish some basic best practice guidelines for myself.
My primary concern at this point is whether or not to rely entirely on the Nexus dvSwitch for all traffic, including management. This hinges largely on whether it is possible to fix any problems using the ESX command line exclusively, should an invalid configuration of the VSM or VEM destroy communications between these layers. This can quickly turn into a chicken-and-egg problem: if the VSM is unable to configure the VEM, the VEM does not work, and until the VEM works the VSM cannot talk to it.
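For reference, a few VEM-level commands are available directly from the ESX service console, independent of the VSM. This is a hedged sketch of the kind of local inspection that is possible; the exact output and option set may vary by Nexus 1000V release:

```shell
# Check whether the VEM module is loaded and running on this host
vem status

# Show this VEM's card details (module number, VSM connectivity state)
vemcmd show card

# List the ports the VEM currently knows about and their state
vemcmd show port
```

These commands only inspect and manage the local VEM; repairing a broken VSM-side configuration still requires reaching the VSM itself, which is the crux of the chicken-and-egg concern above.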
We will be using a cluster of two physical VSMs, so I do not expect the VSM to become entirely unavailable; but misconfigurations may happen, that is just a fact of life.
Since this is my first real-world implementation, I am curious what other people have seen, and whether or not it may be a good idea to build a regular vSwitch into each ESX server with a service console connection on it that will always work regardless of the state of the VEM.
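For illustration, such a fallback standard vSwitch could be set up from the ESX service console along these lines. This is a sketch only; the vSwitch name, uplink NIC, port group name, and IP addressing below are placeholder assumptions to adapt to the actual environment:

```shell
# Create a standard (non-distributed) vSwitch for management fallback
esxcfg-vswitch -a vSwitch0

# Attach a physical uplink not claimed by the Nexus 1000V VEM
esxcfg-vswitch -L vmnic0 vSwitch0

# Add a Service Console port group on that vSwitch
esxcfg-vswitch -A "Service Console" vSwitch0

# Create the service console interface with a placeholder address
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
```

Because this vSwitch is configured locally on the host, it remains reachable even if the VEM or VSM configuration is broken.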
Once your Nexus 1000V is configured, the VEMs will continue to forward traffic even if the VSM is down. The way I describe it is similar to vCenter and HA: HA will still function even if vCenter is down. With that being said, I have still implemented Nexus 1000V while maintaining some non-distributed switching.