Greetings,
In my testing of various setups with a distributed virtual switch, I removed the dvSwitch from one of my hosts using the esxcfg-vswitch and esxcfg-vmknic commands.
When I check Host > Configuration > Distributed Virtual Switch there is no dvSwitch left, and esxcfg-vswitch -l shows none either. esxcfg-vmknic -l also doesn't show any vmknics connected to a dvSwitch of any kind.
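For reference, the kind of sequence I used to detach the host's networking from the DVS looks roughly like the sketch below. The names (dvSwitch, vmnic0, port ID 102) are from my setup and are only placeholders; yours will differ.

```shell
# List vmknics and (d)vSwitch configuration to find the DVS port IDs in use.
esxcfg-vmknic -l
esxcfg-vswitch -l

# Unlink the physical uplink vmnic0 from the DVS:
# -Q unlinks a pNIC, -V names the DVS port ID that pNIC occupies.
esxcfg-vswitch -Q vmnic0 -V 102 dvSwitch
```

After this, esxcfg-vswitch -l no longer shows the uplink on the host, even though vCenter may still think the host is attached.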
However, every 3-5 minutes the host from which the dvSwitch was removed logs these two failed tasks (both initiated by vpxuser, requested and started at 23/06/2009 10:45:25, with no completion):

Task: Update distributed virtual port
Status: The object or item referred to could not be found.

Task: Create vNetwork Distributed Switch
Status: Error during the configuration of the host: SysinfoException: Node (VSI_NODE_net_pNics_link) ; Status(bad0004)= Busy; Message= Instance(1): vmnic0 Input(3) 9e 49 32 50 0f 35 82 90-bd e3 25 c2 7b 3f 94 52 130 1705460805
When I check on vCenter Home > Inventory > Networking > dvSwitch > Hosts I still see the host with DVS Status "Down" connected to the dvSwitch. When I then try to remove the dvSwitch I get the following error:
The resource vim.dvs.DistributedVirtualPort 101 is in use.
DVS dvSwitch port 101 is reserved by entity host esx4i.domain.root vnic vmk0, type: hostVmkVnic
When I try to delete vmknic vmk0 from port 101 on dvSwitch with the esxcfg-vmknic command, I get an error that dvSwitch doesn't exist, so nothing can be deleted.
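In case it helps anyone hitting the same thing: for a vmknic bound to a DVS port, the delete has to name both the switch and the port ID; the plain port-group form of the command only addresses standard vSwitch port groups, which may be why the delete appears to find nothing. Something like the following (dvSwitch, 101 and vmk0 being the names from my case, so adjust to your environment):

```shell
# Show vmknics together with the DVS port each one is bound to.
esxcfg-vmknic -l

# Delete the vmknic by naming its DVS (-s) and DVS port ID (-v).
# "esxcfg-vmknic -d <portgroup>" without -s/-v only works for
# standard vSwitch port groups, not DVS ports.
esxcfg-vmknic -d -s dvSwitch -v 101
```

In my situation this still fails because the host no longer believes the dvSwitch exists, while vCenter believes the port is reserved, so the two sides disagree.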
Something that might be related:
When trying to migrate a virtual machine from host1 (the host with the dvSwitch problem) to host2, validation succeeds, but at around 78% I get this error: "A general system error occurred: Source detected that destination failed to resume." No other errors or notifications are shown when this happens.
Thanks in advance for any help you might be able to give.
Bram
I ended up resetting my ESXi system configuration; it was obviously something to do with HA, but I couldn't locate the exact problem. I could afford the reset because it's just a test system, but I imagine this isn't a solution for a lot of people, even though I could easily import the VMs back into my inventory from my datastore.
We're on NFS for my test environment; once the real hardware arrives we'll use SAN.
I had the same issue with NFS and the failure at 78%. It turned out to be a misconfigured path that differed between the two ESX hosts, which is why it was failing.
Maish
Virtualization Architect & Systems Administrator