Hi,
My scenario is as follows:
Two hosts in a DRS/HA cluster
My guest has one NIC connected to the physical network and another connected to a virtual switch with no outgoing adapter.
I have created the virtual intranet identically on both hosts.
When I try to VMotion this VM to the other host I get the error:
Unable to migrate from host1 to host2: Currently connected to network interface 'Network Adapter 2' uses network 'FW Dev OUT', which is a virtual intranet.
Is this normal behaviour? And would I have the same issue if a host fails, or would HA be able to boot the VM on the other host?
To trigger a VMotion between two hosts, the affected network configuration (every virtual switch the host is connected to) must be identical, including vSwitch names, address ranges, etc. Did you check that it is absolutely identical? When cold migrating the VM you may have to reconfigure the vNIC and connect it to a vSwitch if the names etc. differ.
Regards,
Gerrit
Seems to be identical.
The virtual switch labels for the virtual intranet and the network are identical, as are the vSwitch numbers.
I have no issue migrating machines that don't have a second adapter connected to a virtual network.
When I do a cold migration I get the same error, but as a warning this time, and the migration succeeds;
there is no need for me to reconfigure adapters. The adapters get attached to the expected vSwitch.
This is normal behaviour, but it can be changed to allow a VMotion even when the VM is connected to an internal switch.
Edit the .vpx file of the VM that is connected to the internal switch and add the following at the end of the config (but before the final closing tag):
<migrate>
  <test>
    <CompatibleNetworks>
      <VMOnVirtualIntranet>false</VMOnVirtualIntranet>
    </CompatibleNetworks>
  </test>
</migrate>
It should VMotion just fine after that.
Sorry to be ignorant, but where can one find the .vpx file?
It does not seem to be stored with the other VM files.
cheers
My apologies, that should have read .vmx
Thanks Patrick
Maybe another silly question, but the .vmx is a plain text file.
Your example is more XML-like? How does that work?
Would it be the .vmxf file by any chance?
Not a silly question at all...as soon as I posted that I realized my error. My notes on this were WAY off, sorry about that.
What you need to do is edit the vpxd.cfg file on your VC Server, and add the lines I posted above. Sorry about the confusion
No worries, I appreciate your help.
VpxClient.exe.config by any chance?
Content looks like this:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.net>
    <connectionManagement>
      <clear/>
      <add address="*" maxconnection="8" />
    </connectionManagement>
  </system.net>
  <appSettings>
    <add key = "protocolports" value = "https:443"/>
  </appSettings>
</configuration>
Nope...it's definitely vpxd.cfg. You should be able to find it at C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg (default path).
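For what it's worth, vpxd.cfg is itself an XML file whose root element is <config>, so the block Patrick posted nests directly inside that root. A sketch of the result (the comment stands in for whatever settings your existing file already contains):

```xml
<config>
  <!-- ...your existing vpxd.cfg settings remain unchanged... -->
  <migrate>
    <test>
      <CompatibleNetworks>
        <VMOnVirtualIntranet>false</VMOnVirtualIntranet>
      </CompatibleNetworks>
    </test>
  </migrate>
</config>
```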
Yes, this is normal behavior. I did not read through all the posts, but I am sure someone has already indicated that you cannot VMotion a VM that is connected to an internal-only switch, i.e. a vSwitch that is not connected to any physical adapter. The reason is that if your VM is communicating with another VM only through this internal-only switch, that communication is broken during VMotion, potentially causing your VM to crash. So we err on the side of caution, and the validation fails with the error you state.
The reason you can boot on the other host is that all the vSwitches have the same names and the VM is powering on fresh, whereas during VMotion it is already running.
I will give Patrick's suggestion a go tomorrow. Is it actually a supported solution, or not recommended by VMware (as with CPU masking)?
Like most workarounds, it is not officially supported... but it seems to be the only solution.
Typically, the most elegant way to solve this (without hacking configs or adding new hardware) is to enable 802.1Q VLAN tagging on the network ports, add another unrouted VLAN on top of the existing ports, and have your ESX servers use that for a testing environment.
That way you still have an isolated test environment, but VMs in the test environment can actually communicate with each other while on different hosts. You also don't run into the whole issue of having to override the safety checks.
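As a rough sketch of that approach from the ESX service console (the vSwitch name vSwitch0, port group name "FW Dev Test", and VLAN ID 100 are all made-up examples; substitute your own):

```sh
# Create a port group on a vSwitch that has a physical uplink
esxcfg-vswitch --add-pg "FW Dev Test" vSwitch0
# Tag the port group with an unrouted VLAN ID so its traffic stays isolated
esxcfg-vswitch --vlan 100 --pg "FW Dev Test" vSwitch0
```

VMs attached to that port group on either host can then reach each other, while the unrouted VLAN keeps the test traffic off the production network.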
Do I need to restart anything after making the changes?
I still get the same error.