That is a very nice change! Can you give us the
specifics? What subnet to what subnet? Where were the
netmasks involved? When you set up the default gateway
for each side, how was each one configured?
Host 1 vmotion subnet 192.168.110.0/24
Host 2 vmotion subnet 172.16.0.0/24
How far apart were the subnets?
About 3/4 of an inch.
To make things a little more interesting, my two IP routers are both VMs running on the ESX hosts. So the VMotion traffic was actually flowing through a VM running router software to forward the packets to the other ESX host.
Even so, I am not sure I would do this on anything
that was not a dedicated link. I can just see people
trying to vMotion over a non-dedicated link. Remember
vMotion is unencrypted, so the memory footprint is
exposed to a man-in-the-middle attack.
That makes two of us.
One of the goofy things I found out while changing the VMKernel IP, subnet mask (SM), and default gateway (DG): you have to do it in two steps. First change the IP and SM and apply those changes; only then will it allow you to change the DG. If you try to change the IP, SM, and DG all in one step, the DG config will fail because the new IP and SM haven't been applied yet, so it squawks about the DG not being on the same subnet as the VMKernel IP and won't allow the change to take place. Trademark VMware quirks. You gotta dig to find 'em, but they are there.
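The quirk above boils down to a same-subnet validation: the gateway is checked against the *currently applied* VMkernel IP and mask, not the pending ones. A minimal sketch of that check (the `gateway_valid` helper is hypothetical; ESX's actual validation code isn't public):

```python
import ipaddress

def gateway_valid(vmk_ip: str, netmask: str, gateway: str) -> bool:
    """Return True if the gateway falls inside the VMkernel interface's subnet."""
    network = ipaddress.ip_network(f"{vmk_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Step attempted all at once: the old VMkernel IP (192.168.110.x/24) is still
# applied, so a DG on the new 172.16.0.0/24 subnet fails the check.
print(gateway_valid("192.168.110.10", "255.255.255.0", "172.16.0.1"))  # False

# After the IP/SM change is applied, the same DG passes.
print(gateway_valid("172.16.0.10", "255.255.255.0", "172.16.0.1"))     # True
```

This is why applying the IP and SM first makes the second step succeed: by then the gateway is evaluated against the new subnet.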
Thanks for taking the time to test this out for me. I
really appreciate your efforts and your willingness
to share the results!
Thanks Ken. That's one of the harder 10 points I've worked for.
Word to the wise: if anyone tries changing their VMKernel subnet and they are using swiSCSI, you'll lose all access to your iSCSI storage immediately when you make the change, since you're breaking the VMware rule that says the iSCSI (VMKernel) port must be on the same subnet as the COS. It doesn't matter whether you're using CHAP authentication or not. During my tests, I had about 10 VMs lose their iSCSI storage, which killed the VMs. Luckily, after my testing was done and I put everything back into place, the VMs booted back up and I haven't seen any ill effects. These weren't PROD or DEV VMs at work. These were PROD and DEV VMs at home.
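Given that rule, it's worth sanity-checking a planned subnet change before applying it. Here's a rough pre-flight check under the constraint described above (the `iscsi_safe` helper and the sample addresses are illustrative, not part of any VMware tooling):

```python
import ipaddress

def iscsi_safe(cos_ip: str, cos_mask: str, new_vmk_ip: str, new_vmk_mask: str) -> bool:
    """With software iSCSI, the VMkernel port must stay on the COS subnet."""
    cos_net = ipaddress.ip_network(f"{cos_ip}/{cos_mask}", strict=False)
    vmk_net = ipaddress.ip_network(f"{new_vmk_ip}/{new_vmk_mask}", strict=False)
    return cos_net == vmk_net

# Moving the VMkernel from 192.168.110.0/24 to 172.16.0.0/24 while the COS
# stays on 192.168.110.0/24 violates the rule, so iSCSI paths drop immediately.
print(iscsi_safe("192.168.110.5", "255.255.255.0", "172.16.0.10", "255.255.255.0"))  # False
```

A check like this run before the change would have flagged the storage outage described above.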
Our company provides a network appliance that enables server and storage extension over any type of wide area network, including IP networks. The product creates a virtual wire over which any protocol can be extended across any network. We recently deployed a solution with a customer to enable live server migrations over a public IP WAN using VMotion. Our virtual wire also enables jumbo frame MTU transparency, bulk data encryption, lossless data compression (achieving 5x compression for VMs), and wide area packet loss immunity. Further information is available from our website.