I have 3 ESXi servers with similar configurations. The funny thing is this:
1. Server B can ping the vMotion IPs of Servers A and C, but Servers A and C cannot ping Server B's vMotion IP. Thus, vMotion between Server B and the rest is out.
2. While Servers A and C can ping one another's vMotion IPs, VMs can only vMotion one way, from Server C to Server A, and not the other way round.
We do not use any firewall or VLANs in the iSCSI network that our vMotion network sits in. Configuration:
Server A: 192.168.190.200 (see attached diagram)
vMotion: 192.168.5.40
iSCSI: 192.168.5.41 / .42
Server B: 192.168.190.201
vMotion: 192.168.5.50
iSCSI: 192.168.5.51 / .52
Server C: 192.168.190.202
vMotion: 192.168.5.60
iSCSI: 192.168.5.61 / .62
Hi,
Please check the following:
1) Are all datastores visible to all ESXi hosts? All datastores must be visible to all hosts.
2) Try migrating one VM: select the destination host and check whether validation succeeds. Try it in both directions and share a screenshot.
Hi,
Could you please post screenshots of the network configuration for all the servers?
When you are pinging between the servers, are you using the vmkping command?
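For reference, vmkping sends ICMP from a VMkernel interface rather than the Service Console, so it exercises the same path vMotion actually uses. A sketch using the addresses quoted in this thread:

```shell
# From Server A's console, test Server B's vMotion VMkernel IP.
vmkping 192.168.5.50

# Optionally send a near-MTU payload with the don't-fragment bit set,
# to catch MTU mismatches along the vMotion path.
vmkping -s 1472 -d 192.168.5.50
```

If plain `ping` succeeds but `vmkping` fails, the problem is on the VMkernel side (vmknic config, vSwitch uplink, or switchport) rather than general IP reachability.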
Hi,
Kindly check the following points before vMotion:
1) ESX servers must have consistent networks and network labels.
2) All LUNs should be shared to all ESX hosts.
3) ESX servers must be configured with VMkernel ports enabled for vMotion and on the same network segment.
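Points 1 and 3 above can be spot-checked from each host's console (command names as on ESX/ESXi 4.x; the vMotion checkbox itself lives in the port group's properties in the vSphere Client, not in this output):

```shell
# List all VMkernel NICs with their port group, IP, netmask and MTU.
esxcfg-vmknic -l

# List vSwitches, port groups and their uplinks, to confirm the
# vMotion port group label and segment match on every host.
esxcfg-vswitch -l
```

Comparing this output side by side across the three hosts is a quick way to spot an inconsistent label or subnet.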
Hi pavi,
1. Yes, all the iSCSI data stores are visible to all the 3 hosts.
2. vMotioning a VM from Server C to A works, but A to C doesn't. Nothing works with Server B.
Hi 4nd7,
Yes, I'm using vmkping. Attached are all the screenshots as requested.
Note that I have removed the vSwitch2 dedicated to vMotion and merged it into vSwitch1. All 3 hosts have the same setup, so it's really puzzling why it works on one but not the others.
Hi Umesh,
1. All network labels are the same. vMotion is enabled on all 3 hosts.
2. All LUNs are shared and connected to all 3 hosts.
3. vMotion is enabled on the vMotion VMkernel port, which is on the same network segment as the iSCSI network. Previously I used 192.168.6.x for vMotion and it couldn't be pinged at all. After moving the vMotion IPs to the .5 subnet, they are now pingable, except for Server B. I'm thinking of restarting the management network to refresh the new IP address; however, I'm concerned about the warning that this may cause production VMs to lose connectivity.
Below are some error messages from vCenter. The 2nd and 3rd messages are on the test VM, which I believe should not affect vMotion functionality:
Warning message on victor34: Insufficient video RAM. The maximum resolution of the virtual machine will be limited to 1176x885. To use the configured maximum resolution of 2560x1600, increase the amount of video RAM allocated to this virtual machine by setting svga.vramSize="16384000" in the virtual machine's configuration file.
warning | 4/5/2012 11:53:37 AM | victor34 | vpxuser
Message on victor34: The guest operating system is Windows XP and one or more virtual SCSI devices are installed in the virtual machine. Windows XP does not support the BusLogic SCSI adapter that VMware ESX currently uses for virtual SCSI devices. Install the VMware driver in the virtual machine. Download the driver from "http://vmware.com/info?id=43". Click OK to continue or Cancel to cancel.
info | 4/5/2012 11:53:37 AM | victor34 | vpxuser
Hi bhwong7,
Could you please post the output of esxcfg-route -l from all 3 servers?
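For context, `esxcfg-route -l` prints the VMkernel routing table. Output typically looks like the following (illustrative values based on the subnets in this thread, not actual output from these hosts):

```shell
esxcfg-route -l
# VMkernel Routes:
# Network        Netmask          Gateway
# 192.168.5.0    255.255.255.0    Local Subnet
```

A missing or wrong entry here would explain one-way vMotion, since the VMkernel stack routes independently of the management network.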
Here's the route:
From 192.168.190.200
vMotion between Servers A and C works both ways now. Previously it could only vMotion from Server C to A, not A to C. When I power up this VM on Server A, it is able to vMotion to Server C without any problem. The only thing I have done is upgrade its VMware Tools, as Server C is running a newer ESXi build (502767 vs 433742).
Are the CPUs in your servers all the same?
No, but they are in an HA cluster with EVC mode, which masks the features of the newer CPUs to match the older ones, so this shouldn't be a problem. There is no incompatibility warning between the hosts either. It's a vMotion network issue, and I have no idea what is wrong with it.
Also, the test VM can now vMotion successfully between Servers A and C repeatedly, so CPU should not be an issue at all.
Is there any way to refresh the vMotion IP address without restarting the management network on Server B? Is there a way to broadcast the new IP address on the network too?
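One approach that avoids touching the management interface, assuming the vMotion VMkernel port is a separate vmknic from management: delete and re-create just that vmknic from the console. A sketch using 4.x-style `esxcfg-vmknic` syntax; the port group name below is a placeholder, so substitute your actual one:

```shell
# Remove only the vMotion VMkernel NIC (the management vmk is untouched).
# "VMkernel-vMotion" is a hypothetical port group name.
esxcfg-vmknic -d "VMkernel-vMotion"

# Re-create it with the intended IP and netmask.
esxcfg-vmknic -a -i 192.168.5.50 -n 255.255.255.0 "VMkernel-vMotion"
```

After re-creating the vmknic, re-tick the vMotion checkbox on that port group in the vSphere Client, as the flag is not carried over.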
Could you please post all the esxcfg-route -l output from all the hosts?
Also, you could change the patch cord that connects the vmnic used for vmotion on server B, and/or use another switchport.
I have already posted the route details. They are the same for all 3 hosts:
VMkernel Routes:
The hosts are located in the data center. I will try re-patching next week; maybe add NIC teaming for vMotion?
What do you actually mean by teaming?
I would not let vMotion traffic share the same links as iSCSI. Storage traffic should have the lowest latency, and vMotion traffic can affect that if it ends up on the same vmnic due to a failover event.
I mean adding a 6th NIC so that 2 NICs are active for vMotion use only.
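If you do add a 6th NIC, linking it as an extra uplink can be done from the console; the per-port-group active/standby order is then set in the vSphere Client (NIC Teaming tab). A sketch, assuming the new NIC enumerates as vmnic5 and vMotion lives on vSwitch1 (both are assumptions based on this thread's setup):

```shell
# Link the new physical NIC as an uplink on the vMotion vSwitch.
esxcfg-vswitch -L vmnic5 vSwitch1

# Confirm the uplink now appears in the vSwitch listing.
esxcfg-vswitch -l
```

For the separation recommended above, you would then mark vmnic5 active and the iSCSI vmnics unused (or standby) on the vMotion port group, and the inverse on the iSCSI port groups, so a failover never lands vMotion traffic on a storage link.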
Here's the error message when I vMotion a VM from Server B to C:
Could you post the log file for the VM that you are trying to migrate and that fails?
Hi,
I switched the port on the switch and vMotion now works perfectly! The original port was configured as an uplink port. Makes me look really silly, right?
I just wanted to confirm whether your ESX hosts are all the same version, i.e. 5.0 or 4.x. The reason I ask is that, looking at the logs, I wanted to confirm whether you have enabled any 5.0 feature on a VM that is not supported on 4.x ESX; vMotion can fail if you are migrating from 5.0 to 4.x with a 5.0-only feature enabled on a VM.
HTH,
zXi
Hi zXi,
I do not have any 5.0 hosts yet, so this is really unlikely. But thanks for reminding me of this potential issue for when I upgrade some hosts to 5.0 while others are still on 4.1.
Boon Hong.
>Hi,
>I switched the port on the switch and vMotion now works perfectly! The original port was configured as an uplink port. Makes me look really silly, right?
So can I assume that your problem has been solved after the above step?
HTH,
zXi
Yes, it's a port issue. :smileysilly: