I first want to say that I'm new to all of this, so sorry in advance for any dumb questions. We are going through the process of adding a colo site to our company. Right now we have a 50 Mb fiber connection between our main office and the colo. We eventually want to have ESX servers at both the main office and the colo site. I was under the impression that vMotion could only happen within the same subnet, so we configured the colo site to be in the same subnet as our main office. Then I came across this thread:
This changed my thoughts on vMotion. The problem is that vMotion was the only thing holding us back from creating a new subnet at the colo and routing traffic to it instead of just switching it. My question is: how does one normally set up an ESX cluster when there are nodes at a different site, and can they vMotion between each other if they are in different subnets?

If you can, I'm a little confused as to how it would work. It's not the storage I'm worried about; we already have SAN replication happening between the main office and the colo. I'm more concerned about how clients will contact the server after it has been vMotioned. Say I have two subnets: 192.168.1.0/24 (main office) and 192.168.2.0/24 (colo). I have a VM web server with an address of 192.168.1.80 running on an ESX server at the main office. When I vMotion it to an ESX server at the colo site, it still has the address 192.168.1.80 but is now living in the 192.168.2.0/24 subnet. How are clients still going to be able to connect to it? I hope there's some way to do this, because the routing-vs-switching benefits alone between the main office and colo are worth it.
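To make the reachability problem concrete, here's a quick sketch using Python's `ipaddress` module with the two subnets from my example. It just confirms that the VM's address belongs to the office prefix and not the colo prefix, which is why routed clients would keep sending its traffic toward the main office after the move:

```python
import ipaddress

# The two site subnets from the example above
office = ipaddress.ip_network("192.168.1.0/24")
colo = ipaddress.ip_network("192.168.2.0/24")

# The VM keeps this address when it is vMotioned to the colo
vm = ipaddress.ip_address("192.168.1.80")

print(vm in office)  # True  -> routers forward 192.168.1.80 toward the office
print(vm in colo)    # False -> no route delivers it to the colo segment
```

So with routing between the sites, packets for 192.168.1.80 always land at the main office; the VM at the colo only stays reachable if the layer-2 segment is stretched (bridged) across both sites.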
There are two different networks here. vMotion has its own network, and I think that part has to be on the same subnet for vMotion to work. I'm not sure 50 Mb is enough bandwidth, though. What does VMware recommend? I know they usually tell you not to vMotion outside the datacenter.
From the VM's LAN perspective, it works exactly the same way it would for a physical machine. If you picked up a physical machine in building A and brought it to building B, would it be able to communicate on the network? If not, then the VM won't work either.
Now, storage. Do the ESX hosts in both buildings see the same storage, or does the host in the other building only see the replica? To vMotion, they need to see the same storage. If both hosts see the storage in building A, vMotion will work, but that's not a solution if building A goes away. If they point to separate replicated storage, vMotion won't work, but if building A goes away, you are protected. There would be a lot of manual steps, but you could get things going again.
If you're interested, EqualLogic is having a web demo on this subject in about 30 minutes.
As far as I know, VMware says that 1 Gb Ethernet is required for vMotion, so I don't think 50 Mb over the WAN will be your answer.
Regarding the network subnets: if you vMotion a server between two subnets, you will need to change its IP address.
You could also consider this: if you have storage sync between the sites (something like NetApp SnapMirror) and you don't have to perform a live vMotion while the VM is running, you could shut down the VM, perform a sync between the two storage systems, and boot the machine at the target site.
I think 1 Gb is the recommendation, but I'm willing to bet 50 Mb might work to some degree. I would not base your whole system design on this post, though. Is there a way for you to test it? I've had my vMotion network going through a 100 Mb switch for a while, and it seemed to work fine.
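As a rough back-of-the-envelope, here's a short Python sketch of how long copying a VM's active memory would take at the link speeds we've been talking about. The 4 GB memory size and the 70% usable-link-efficiency figure are made-up numbers just for illustration, not anything VMware publishes:

```python
def transfer_seconds(mem_gb, link_mbps, efficiency=0.7):
    """Time to push mem_gb of memory over a link_mbps link.

    efficiency is an assumed fraction of the nominal link rate
    actually usable for the copy (protocol overhead, other traffic).
    """
    bits = mem_gb * 8 * 1e9          # gigabytes -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps

for mbps in (50, 100, 1000):
    print(f"{mbps:>5} Mb/s: {transfer_seconds(4, mbps) / 60:.1f} min")
# 4 GB works out to roughly 15 minutes at 50 Mb/s vs under a minute at 1 Gb/s
```

That gap is why a vMotion that feels instant on the LAN can hold a migration open for many minutes over the 50 Mb WAN; whether that's tolerable is exactly the kind of thing a test environment would tell you.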
Thanks guys! I guess the real problem isn't really vMotion but a networking issue. You said that if you can't pick the box up, move it to the new location, plug it in, and have it work, then it won't work period (with vMotion, or with VMs for that matter). That makes sense.

This is doable with some servers, because after moving them we can just change the IP and DNS info and be golden. Some applications (like our accounting system) won't work, though. The clients have a hard-coded IP address, and it's running on over 100 workstations around the country, so the admin overhead would be a nightmare. It wouldn't be impossible, but I think it would just be easier to keep our colo site bridged to our existing network instead of routing to it; then we could actually take down virtual machines here and bring them up at the colo and keep running without too much admin work.

Does anyone foresee any problems running like that? Does anyone else bridge to their colo site instead of routing to it?
Are you planning to vMotion over the 50 Mb WAN?
If so, I wouldn't count on it working well, and I wouldn't try it in a production environment.
As my friend ejward says, vMotion may also work over 100 Mb Ethernet, and it did for me too, but that was on a LAN, not over a WAN.
If you can build a test environment for this, that would be best.