Hi,
I just wanted to post an issue I stumbled over while trying things out in a lab environment I set up in preparation for my VCP510 exam. Maybe someone can verify/falsify this issue. A statement from the VMware developers on why this is happening would also be great. Here is the scenario:
An ESXi 5 host (a real machine) serves as the lab environment.
On that bare metal, two virtual ESXi 5 hosts and a vCenter Server environment are running. The virtual hosts' NICs share a separate port group on the physical host.
The following picture shows the vSphere Client connected to the vCenter Server:
(Btw. the warnings arise from SSH being enabled on the hosts.)
You can see the management network and the vMotion network separated, each with its own IP. The properties are set accordingly (vMotion enabled for the vMotion network).
Before separating the VMkernel ports I tried vMotion and it worked (with the appropriate properties set).
I guess this scenario is very common, and it worked for me on real networks. In the described virtual-host scenario, however, vMotion will not work. Please see the vmkping failure from host2 (esx2):
(If you 'self'-ping 10.0.0.11 from host1, you do get a response.)
I know this may be an odd scenario, but for me it's the only way to play around with things. It seems as if the virtual NICs don't support the vMotion mechanism.
I just want to know whether this is intended behaviour, a bug, or whether I'm missing something.
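For anyone wanting to reproduce the check: run it from the ESXi shell of host2. Interface name and IP here are assumptions based on the description above (vmk1 as the vMotion VMkernel interface, 10.0.0.11 as host1's vMotion IP); the -I option to pin the outgoing interface is only available on newer ESXi builds.

```shell
# On esx2: ping host1's vMotion VMkernel IP (assumed 10.0.0.11)
vmkping 10.0.0.11

# On builds that support it, force the ping out of a specific
# VMkernel interface (vmk1 assumed to carry vMotion):
vmkping -I vmk1 10.0.0.11
```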
Welcome to the Community - I am not sure what the issue is. When you say you separate the VMkernel ports, do you mean that vMotion stopped working once you moved the VMkernel ports to separate vSwitches, or once you moved the virtual ESXi hosts to separate vSwitches? Are you able to vmkping after the reconfiguration?
How is the networking set up on the physical host? Do you have both virtual hosts connected to an internal-only switch?
But in any case, using virtual NICs should not be an issue.
Thank you for the quick reply.
- vMotion stopped working once I moved vMotion to a separate VMkernel port group (still on the same switch). I also tried a different switch, with the same result. (Btw. restarting the host makes no difference.)
- vmkping to the vMotion IP works from the host itself, but there is no response from the 'outside'. Thus vMotion is not working.
- Both virtual hosts are on an internal-only switch.
With your setup you should probably use separate subnets for Management and vMotion. I'd suggest you try 10.0.0.10 (Management) and 10.0.1.10 (vMotion) for ESXi host 1, and 10.0.0.20 (Management) and 10.0.1.20 (vMotion) for ESXi host 2 (assuming the subnet mask is 255.255.255.0).
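Assuming vmk1 is the vMotion VMkernel interface on each virtual host, the readdressing could be done from the ESXi shell roughly like this (interface names and addresses are assumptions taken from the suggestion above):

```shell
# On ESXi host 1: move the vMotion VMkernel interface (assumed vmk1)
# to its own subnet, leaving Management (vmk0) on 10.0.0.0/24
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.1.10 -N 255.255.255.0 -t static

# On ESXi host 2:
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.1.20 -N 255.255.255.0 -t static
```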
André
Thank you André,
after you mentioned it I tried the subnet approach a second time. Even on a separate switch, vmkping gets no echo on the vMotion IP.
Sorry, the idea was good.
I know all of you have a full schedule, but can someone verify or falsify this behaviour?
So I'm trying to understand this:
You have one physical NIC for the two virtual hosts, correct?
I usually set up a separate virtual switch for vMotion, but it can connect to the same physical NIC.
Are you doing any network trunking?
To see how we can help you, please provide some information (as detailed as possible) about your current setup, starting with the virtual network configuration of the host. (Btw. did you configure promiscuous mode for the vSwitches on the host?) How are the virtual ESXi hosts configured? Please provide screenshots of the virtual switches, including the IP configuration of the VMkernel ports, for both virtual hosts.
André
The problematic setup is all virtual.
The two virtual hosts are VMs on one physical host and are linked via a vSwitch (one port group, no physical uplinks) on this physical host.
I tried the vMotion separation on the physical host, and there it works as expected. This is where I came up with the theory that there has to be an issue with the virtual layer.
Since this is not a normal setup, maybe a vSwitch isn't capable of transferring vMotion traffic.
Thanks in advance for the help. Here are a load of screenshots:
1. The vSwitch of the physical Host:
2. The Properties for it:
3. Now diving into the virtual side. Here is the switch from virtual host1:
vmnic0 is a virtual E1000 adapter virtually attached to host1.
4. Here are the properties of the switch:
I had an issue like this some time ago and I think it was solved by either setting "Promiscuous Mode" for the vSwitch on the physical host to "Accept" and/or by attaching the vSwitch to a physical network!?
André
I think you have to have promiscuous mode turned ON, even with the physical network connection.
When the hosts are virtual, you need PM to help with the traffic. It's like snooping your own MAC/traffic.
Turn PM on and let us know what happens...
Setting "Promiscuous Mode" to "Accept" on the underlying vSwitch of the physical host did the trick.
Thank you all for getting involved.
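For anyone hitting the same problem: the setting lives in the security policy of the vSwitch on the physical host (vSphere Client: vSwitch Properties > Security > Promiscuous Mode > Accept). From the ESXi shell of the physical host it can also be set roughly like this (the vSwitch name is an assumption; check yours first):

```shell
# List the standard vSwitches to find the one carrying the nested ESXi VMs
esxcli network vswitch standard list

# Allow promiscuous mode on that vSwitch (name assumed to be vSwitch1)
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true
```

The reason this helps in a nested setup: the VMs running inside the virtual ESXi hosts use MAC addresses the outer vSwitch has never seen on its ports, so without promiscuous mode their frames are dropped.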
Cheers