Say the management network on my ESXi host fails due to a bad network card, in which case I can no longer connect directly to that host.
I understand that the VMs on that host will keep running just fine even though the host shows as disconnected.
Now that the host is disconnected and I need to power it off to replace the network card, how do I vMotion all the machines running on it so that I can shut down the host and replace the card?
Hi,
If you have access to the ESXi shell via KVM, iLO, or similar, I think you should move your management network to the virtual machine vSwitch and enable vMotion on it via shell commands (esxcli or localcli).
Then migrate all your VMs to another host.
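The suggestion above can be sketched with esxcli. All names here are illustrative assumptions (vSwitch1 as the VM-traffic vSwitch with a healthy uplink, vmk2 as the new interface, and an example IP); adjust to your environment before running anything:

```shell
# List current vmkernel interfaces and vSwitches to confirm the layout
esxcli network ip interface list
esxcli network vswitch standard list

# Create a temporary management port group on the healthy vSwitch
# (vSwitch1 and the names below are examples, not your real config)
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt-Temp --vswitch-name=vSwitch1

# Add a new vmkernel interface on that port group and assign an address
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Mgmt-Temp
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.1.50 --netmask=255.255.255.0 --type=static

# Tag the new interface for Management (and for vMotion, if desired)
esxcli network ip interface tag add --interface-name=vmk2 --tagname=Management
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion
```

Once the host is reachable again on the new address, you can reconnect it in vCenter and migrate the VMs normally.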
I have a separate vMotion vSwitch, distinct from my management network, and my vMotion network is up and running. I also have access to the host via iLO.
Now that my vMotion network is up and I have iLO access, how do I initiate a vMotion of all the machines to another host?
You can add a vmkernel port for management on the vMotion vSwitch, then add the host to vCenter under its new address and migrate the VMs (this requires the uplink configured for vMotion to be able to carry traffic on the new network).
http://kb.vmware.com/kb/1006989
The VMs, if configured to run on a different network, will stay up while you work on fixing the management network. Is there any reason you want to migrate the VMs to another host?
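Before creating the new management port group, it is worth confirming which physical uplinks are actually up and which of them back the vMotion vSwitch. A quick check (vSwitch1 is an assumed name for the vMotion vSwitch):

```shell
# Show link state of all physical NICs on the host
esxcli network nic list

# Show the configuration (including uplinks) of the vMotion vSwitch
esxcli network vswitch standard list --vswitch-name=vSwitch1
```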
Sorry for misunderstanding.
Fine, so you can release the IP address of the VMkernel port group (the management port group the host uses for vCenter communication), add a new management port group on the other vSwitch, and assign your management IP address to it (of course, assuming there is no conflict or other issue on your physical switch and network); the host will then reconnect to vCenter.
Another option: power off the virtual machines, unregister them from the host, and register them on other hosts. This is possible from the command line with vim-cmd.
You can use the "soft" option of the "esxcli vm process kill" command to shut down VMs.
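For instance, assuming the VMs sit on a shared datastore, the shutdown/unregister/register approach looks roughly like this (VM ID 12, world ID 123456, and the .vmx path are purely illustrative; look up your own with the list commands first):

```shell
# List registered VMs and note the VM ID in the first column
vim-cmd vmsvc/getallvms

# Graceful guest shutdown (requires VMware Tools in the guest)
vim-cmd vmsvc/power.shutdown 12

# Alternatively, soft-kill the VM process by world ID
esxcli vm process list
esxcli vm process kill --type=soft --world-id=123456

# Unregister the VM from this host
vim-cmd vmsvc/unregister 12

# On the destination host, register it from the shared datastore
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
```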
Is the host already in this state or we are trying to decide on designing the infrastructure for redundancy?
I don't want to power off the VMs; that's why I need them vMotioned.
So what are the commands to release the existing VMkernel port group on the management vSwitch and create a new management VMkernel port group on an existing virtual machine vSwitch from the ESXi 5.5 console?
I'm just considering a scenario. Normally everyone keeps redundant connections to different switches for the management network; it's just that sometimes both server ports used for management sit on the same network card, since you don't want to waste 10G ports on the management network.
The way to avoid wasting 10G cards is to set both cards active/active on a vSwitch while setting up VMkernel ports for each service: Management, IP Storage, and vMotion. If you have the licensing, you can use Network I/O Control (NIOC) to guarantee resources for ingress and egress traffic. If you do not have NIOC, you can set outbound (egress) traffic-shaping policies to limit the flow of I/O through the cards.
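As a sketch, an egress traffic-shaping policy on a standard vSwitch can be set with esxcli; the vSwitch name and all bandwidth values below are illustrative assumptions, not recommendations:

```shell
# Enable egress shaping on vSwitch0 (example values:
# average/peak bandwidth in Kbps, burst size in KB)
esxcli network vswitch standard policy shaping set \
    --vswitch-name=vSwitch0 \
    --enabled=true \
    --avg-bandwidth=500000 \
    --peak-bandwidth=1000000 \
    --burst-size=102400
```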
We can enable management traffic on the vMotion network as well: when creating the vmkernel port, enable both vMotion and management traffic, so that if the primary management interface goes down we can still connect to the host through the port with both services enabled.
This only helps if vMotion uses a different uplink than the one that failed.
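If the vMotion vmkernel interface already exists, adding the Management tag to it is a one-liner (vmk1 is an assumed interface name; confirm yours first):

```shell
# Tag the existing vMotion interface for Management as well
esxcli network ip interface tag add --interface-name=vmk1 --tagname=Management

# Verify the tags on the interface
esxcli network ip interface tag get --interface-name=vmk1
```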
Regards
Sharath