vSphere vNetwork

  • 1.  VM port state shows blocked

    Posted Nov 18, 2013 07:42 PM

    Hi

I have a host connected to a Nexus 1000V switch. It's a newly configured host; when I migrate a VM to this host I am not able to ping the VM via IP or hostname. When I checked networking, the VLAN port status shows blocked. But I can see other VMs running on the same VLAN. What might be the cause, and how do I fix it?



  • 2.  RE: VM port state shows blocked

    Posted Nov 22, 2013 04:28 PM

I have experienced this before.  Keep in mind that there are a couple of ‘port disabled’ messages that could be observed.  For example, if a VM is powered off intentionally by a vSphere admin (i.e. for decom, etc.), the 1000v logs this as ‘port disabled’, which will freak out some network admins who see it via SNMP alerts.  However, there is also a ‘real’ issue in which a powered-on VM stops pinging (i.e. after vMotion to a bad VMHost) and displays a “port disabled by admin” message.  The latter issue is what we will discuss below.


With the 1000v, typically the first thing to try when faced with VM connectivity quirks is the toggle fix:

    Quick Toggle Fix:

a) Take note of the VM’s current vNIC port group (from Edit Settings on the VM);

b) Set the VM vNIC to the “Unused / Quarantined” port group (from Edit Settings on the VM);

c) Click OK;

d) Set the VM vNIC back to its original port group noted in step (a) above.

    Note:  By using the above fix, often you can get back in business and your VM starts pinging again. However, this may only be a temporary fix.  The problem could come back when the VM vMotions to a bad host again.

    Port Channel Review:

If all the hosts in the cluster are configured to use the 1000v and you observe this “port disabled” scenario after vMotion to other hosts, then this is typically indicative of an upstream network configuration problem; specifically, I have seen it in scenarios where the port channel configurations were inconsistent across hosts.  If this is the case, generate a CDP report to show the network guys exactly which ports are in scope for review.  Then have them confirm the health of the interfaces and port channel configs.

    CDP Report (best served up with PowerCLI):
    http://www.virtu-al.net/2008/12/12/detailed-vmware-host-network-information/

Note:  More info about CDP is available in VMware KB1007069

    Check VSM / VEM Health:

    Review the output of the following commands on each host and compare them for consistency (performed via ssh to the ESXi hosts):

    vemcmd show version

    vemcmd show card
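To eyeball consistency without flipping between SSH sessions, you can capture each host's output to a file and diff them. A minimal sketch, assuming you've already saved the `vemcmd show version` output from two hosts (the version strings below are made-up sample data, not from a real deployment):

```shell
#!/bin/sh
# Compare saved 'vemcmd show version' output from two hosts.
# The heredoc contents are illustrative sample data only.
host1_out=$(cat <<'EOF'
VEM Version: 4.2.1.2.2.1a.0-3.1.1
VSM Version: 4.2(1)SV2(2.1a)
EOF
)
host2_out=$(cat <<'EOF'
VEM Version: 4.2.1.2.2.1a.0-3.1.1
VSM Version: 4.2(1)SV2(2.1a)
EOF
)
printf '%s\n' "$host1_out" > /tmp/vem_host1.txt
printf '%s\n' "$host2_out" > /tmp/vem_host2.txt
# diff is silent and exits 0 when the hosts match
if diff -u /tmp/vem_host1.txt /tmp/vem_host2.txt; then
  echo "hosts consistent"
else
  echo "hosts differ - fix the mismatched VEM before the next vMotion"
fi
```

Any line that differs (VEM module version, VSM version) is a candidate cause for the blocked-port behavior on that host.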

    Optional - Gather Logs:

    vm-support

    vem-support -t /var/tmp/ all

    Note:  Use WinSCP in SCP mode to grab the above support bundles from /var/tmp on each host.
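If you're gathering bundles from several hosts, the WinSCP step can also be scripted from any Linux/Mac admin box. A rough sketch, where the hostnames are placeholders and SSH key auth as root to the ESXi hosts is assumed:

```shell
#!/bin/sh
# Pull the support bundles from each host's /var/tmp into a per-host folder.
# Hostnames below are placeholders; substitute your own ESXi hosts.
for h in esxi01 esxi02 esxi03; do
  mkdir -p "bundles/$h"
  scp -o ConnectTimeout=5 "root@$h:/var/tmp/*.tgz" "bundles/$h/" \
    || echo "could not fetch bundles from $h"
done
```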

Tip:  It’s often beneficial to have ’n’ test VMs available, where ’n’ is the total number of physical uplinks participating in the VM Network port group in question.  For example, if your VM port group uses 4 physical uplinks, create 4 test VMs and configure them with available IP addresses on the VLAN in question.  Next, vMotion all 4 VMs to the host in question.  If only one of the 4 uplinks is misconfigured, this makes it very likely to be identified (i.e. one of the VMs stops pinging).
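The tip above is easy to script once the test VMs are up. A quick sweep, where the addresses are placeholders for your test VMs' IPs:

```shell
#!/bin/sh
# Ping each test VM once and report which ones dropped off the network.
check_vm() {
  if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
    echo "$1 OK"
  else
    echo "$1 FAILED"
  fi
}

# Placeholder addresses - substitute the test VMs on the VLAN in question.
for ip in 10.0.10.101 10.0.10.102 10.0.10.103 10.0.10.104; do
  check_vm "$ip"
done
```

A FAILED entry after vMotioning all the test VMs to the suspect host points you at the uplink carrying that VM's traffic.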