VMware Cloud Community
HarisB
Contributor

How to configure VMotion through crossover cable ?

Hi all,

I need to configure VMotion over crossover, here is the situation:

2 identical 1U servers, each with its own local disk array. I'm interested only in cold VMotion here:

- vmnic0 connected to vswitch0 and used for console and virtual machines. These are connected from both servers to a physical 100Mbps switch.

- vmnic1 connected to vswitch1, and vmkernel configured on this switch. These are connected by crossover cable, server to server.

- IPs given to all 4, same subnet (192.168.0.x/24)

- vmnic1s are shown in VC to be live and connected at 1Gbps.

From the above I would expect VMotion to take place over the vmnic1s; however, when I do a VMotion, the network traffic on vmnic1 is 0 for both TX and RX, and the traffic flows over the vmnic0s. I have also configured /etc/hosts to include the other server's VMkernel IP.

What I don't know is this: When VC performs VMotion, is it instructing Server X to contact Server Y and do transfer by any means possible, or is it specifying over which network / adapter VMotion is to be done?

What am I missing in this setup?

Thanks

1 Solution

Accepted Solutions
christianZ
Champion

I would say that when you do a cold migration, the vmdk files are moved over the normal console NICs - therefore you can't see any activity over vmnic1. Just a thought.

Verified and tested - that's correct.

12 Replies
Dave_Mishchenko
Immortal

Bring up the properties for vswitch1 and check to see if vmotion is enabled for the vmkernel.

You'll also want to use a separate subnet for vswitch1 on both servers, as well as specify a default gateway that vswitch1 can reach. It doesn't have to be a real gateway, but it should be the same IP on both servers.
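The subnet logic behind this advice can be sanity-checked outside ESX with a quick sketch (the addresses below are hypothetical, modeled on the setup described in this thread):

```python
import ipaddress

# Hypothetical addresses modeled on this thread's setup.
# Before: console NIC and VMkernel NIC both sit in 192.168.0.x/24.
console  = ipaddress.ip_interface("192.168.0.10/24")
vmkernel = ipaddress.ip_interface("192.168.0.210/24")

# Same network -> the host has two equally valid routes to any
# 192.168.0.x destination, so traffic may leave the "wrong" NIC.
print(console.network == vmkernel.network)  # True

# After: move the VMkernel port to its own subnet.
vmkernel = ipaddress.ip_interface("192.168.1.210/24")
print(console.network == vmkernel.network)  # False - routing is now unambiguous
```

When the two port groups live in different subnets, the routing table alone determines which NIC carries the VMkernel traffic.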

HarisB
Contributor

VMotion enabled on vmkernel on both servers

vmkernel moved to 192.168.1.x/24, with gateways being the other server

Same thing, no traffic on vmnic1

DFATAnt
Enthusiast

VMotion is only used when you have shared storage, so you shouldn't need VMotion to do a cold migration. If the VM guest is shut down, you should be able to migrate it from one ESX host to another (the process will copy the guest's files from the local storage of the first ESX host to the local storage of the second).

The cold migration can be done using existing network configurations (given that both ESX servers can see each other on the network).

Ant

dheerajms
Enthusiast

Can you post the output of 'esxcfg-vmknic -l'?

If you do a vmkping, does the other host reply?

Maybe you can try keeping the VMotion NIC IPs in a different subnet, like 10.10.10.1 and 10.10.10.2.

HarisB
Contributor

[root@ESX1 root]# esxcfg-vmknic -l
Port Group  IP Address     Netmask        Broadcast      MAC Address        MTU   Enabled
VMotion     192.168.1.210  255.255.255.0  192.168.1.255  00:50:56:61:58:d4  1514  true
[root@ESX1 root]#

[root@Server2 root]# esxcfg-vmknic -l
Port Group  IP Address     Netmask        Broadcast      MAC Address        MTU   Enabled
VMotion     192.168.1.211  255.255.255.0  192.168.1.255  00:50:56:67:c4:13  1514  true
[root@Server2 root]#

vmkping works both ways fine
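As a side note, the same-subnet condition that makes vmkping succeed can be checked mechanically from that output. A small sketch (not an ESX tool; it just parses the two VMkernel lines posted above, with the column layout assumed):

```python
import ipaddress

# The two VMkernel lines as posted above (column layout assumed).
lines = [
    "VMotion 192.168.1.210 255.255.255.0 192.168.1.255 00:50:56:61:58:d4 1514 true",
    "VMotion 192.168.1.211 255.255.255.0 192.168.1.255 00:50:56:67:c4:13 1514 true",
]

networks = []
for line in lines:
    _portgroup, ip, netmask, *_rest = line.split()
    networks.append(ipaddress.ip_interface(f"{ip}/{netmask}").network)

# vmkping can only succeed if both VMkernel ports share a subnet.
print(networks[0] == networks[1])  # True: both are in 192.168.1.0/24
```

So the VMkernel configuration itself looks consistent; the question is which network the migration traffic actually uses.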

dheerajms
Enthusiast

Sorry, I actually wanted to see the output of 'esxcfg-vswitch -l'.

I hope you have had a look at 'Migration with VMotion' in vi3_admin_guide.pdf.

The VMotion requirements state that it needs shared storage, as DFATAnt has previously pointed out in this thread. How are you managing it with local storage?

christianZ
Champion

I would say that when you do a cold migration, the vmdk files are moved over the normal console NICs - therefore you can't see any activity over vmnic1. Just a thought.

In addition, when you have a 1Gb connection you don't need a crossover cable - gigabit NICs auto-negotiate the crossover (Auto MDI-X), so a normal Cat 5e cable would be fine.

stvkpln
Virtuoso

- vmnic0 connected to vswitch0 and used for console and virtual machines. These are connected from both servers to a physical 100Mbps switch.

- vmnic1 connected to vswitch1, and vmkernel configured on this switch. These are connected by crossover cable, server to server.

- IPs given to all 4, same subnet (192.168.0.x/24)

- vmnic1s are shown in VC to be live and connected at 1Gbps.

I would highly recommend setting the IPs for the vmkernel interfaces to something other than 192.168.0.0/24. It's probably getting confused about which interface to use.

-Steve
brucecmc
Contributor

HarisB,

Have you found a resolution for this?

I'm having the exact same issue...

However, I have shared storage on both of my ESX servers (both can see each other's LUNs).

I have attempted a cold migration only at this point; I haven't tried a hot migration. I've also attempted the migration using the new VMware Converter 3.0 tool... it gets to about 97% completion, then dies...

I suspect the VC needs to be active on the GB network as well, but I haven't found anybody to confirm this yet.

any help would be appreciated...

thanks

Bruce

HarisB
Contributor

Hi,

I accepted the suggestion that cold migration - the kind where the vmdk files have to be moved - goes over the console NIC/switch, and therefore the crossover cable won't help in this case. Your case seems to be different, as you use shared storage...

Thanks all for your tips.

johnurra
Contributor

A crossover cable will work between servers on the VMotion NIC, but do not use this configuration at your workplace, as there is no failover. Discuss it with your network folks before rolling it out at work. It's good for testing VMotion, though, until your network infrastructure is put in for a real VMware production installation.
