I had to vacate a blade chassis for repair this weekend; the pre-deployment tasks included evacuating all hosts in that chassis so it could be powered down. To do this, I vMotioned all VMs off of those hosts onto others. All hosts are in a DRS/HA cluster with distributed network switches. After the vMotion, five VMs had no network connectivity. I had to go into each VM's settings, disconnect the virtual network adapter, and then reconnect it before the VM would ping and pass traffic again. Has anyone else seen this, and any ideas as to what might have caused it? I am performing an RCA and wanted to see whether other companies had experienced this issue and what the fix might be. We are running ESX 4.0 on HP blade systems. Thanks a bunch,
This turns out to be an issue between the Nexus 1000V SV version and VMware. Here is the posted solution from support:
This issue is mostly a result of multiple failovers between VSMs in an HA configuration.
To restore connectivity, perform the following:
1. Log in to the ESX host.
2. Restart the DPA with the following command.
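The support excerpt above does not include the actual restart command. As a hedged sketch only: Cisco's VEM troubleshooting commands on the ESX host include `vem status` and `vem restart`, and restarting the VEM agents restarts the DPA (Data Path Agent) with them. Verify against the exact command your support case supplied, since the invocation may differ by VEM version.

```shell
# On the affected ESX host (Nexus 1000V VEM installed).
# Check the state of the VEM agents, including the DPA:
vem status

# Restart the VEM agents; this restarts the DPA as well.
# (Sketch based on Cisco's documented VEM commands, not the
# exact command from the support case, which was omitted.)
vem restart

# Confirm the VM-facing ports came back up on the DVS:
vemcmd show port
```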
This issue has been fixed in the following version:
* Cisco Nexus 1000v 4.0(4)SV1(3)
So after issuing the above commands, we also need to upgrade our Nexus 1000V platform to 4.0(4)SV1(3).