gmitch64
Enthusiast

vMotion and Network

We currently have a dozen ESXi 4.1 hosts, with several guests able to vMotion freely between them within the cluster. One of these guests has a connection to the internet, as it provides some outward-facing services.

The way we currently have it set up is as follows...

Our internet connection comes in to our firewall.

The firewall splits off and pinholes the connections for this guest, and presents them on a physical interface on the firewall.

We have connected this physical interface to a dumb 24-port hub.

Each of the ESXi hosts has a dedicated physical interface which also connects to the dumb hub.

Everything works nicely: because the hub is dumb and floods every frame to every port, it doesn't matter that the internet guest has vMotioned and is now sitting behind a different hub port.

Now we're in the process of splitting and tidying our datacenter, and half of the hosts we have won't be able to connect directly to the hub - they'll be at the other end of a 10G fiber link (the other half will still connect locally).

We have a couple of options.

1> Keep it set up the way it is. Local hosts would still connect to the hub. Each of the remote hosts would have a dedicated VLAN from the remote center to the local one (getting rid of the dedicated internet NIC and using the new 10G connection). Each of those VLANs would terminate on a physical port on our local switch stack, and then connect via a crossover cable to the hub, and it should all work as before. I don't really like this option, as it seems messy and bletch.

2> Set up a dedicated VLAN for the internet connection on the switch stack (something like the switch-side sketch below). The firewall, local hosts and remote hosts would all have a vNIC connected to this VLAN. I suspect we'd have to set up a shared MAC address and spoof it, similar to what we have to do for NLB clusters, but I'm not 100% sure.
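For what it's worth, a minimal sketch of what option 2 might look like on a Cisco IOS switch stack - the VLAN ID (100), interface, and names are all made up for illustration:

    ! Define the internet VLAN on the stack (VLAN 100 is an assumed ID)
    vlan 100
     name INTERNET
    !
    ! Access port facing the firewall's pinhole interface (port is hypothetical)
    interface GigabitEthernet1/0/1
     description Firewall internet pinhole
     switchport mode access
     switchport access vlan 100
     spanning-tree portfast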

Are there any better and cleaner ways to achieve what we want? We went for the physical NIC/hub option to give us some measure of physical security, but we could probably achieve the same result with ACLs. We're a Cisco shop, in case that makes any difference to the suggestions.

Any thoughts, ideas or suggestions?

G

mittim12
Immortal

You could leave it like it is and use the DRS Groups Manager to isolate it to only the hosts with physical connections. This would work well unless you ever plan on taking that site completely down while needing to keep that machine up.

Ande11
Contributor

If you want to make it a bit cleaner, I would replace the hub with a switch and create a new VLAN for the internet. Then create a new vSwitch to handle that traffic on all ESXi hosts connected to it (in the future, if you want the VLAN to go across the remote link, you can do that, but it isn't necessary for this); rough commands are sketched below. This gives the VMs the ability to vMotion without any MAC spoofing. Connect any internet VMs to the new vSwitch and IP them properly, and you shouldn't have any problems during vMotions. The only caveat is that you will be warned each time you try to vMotion to a host that doesn't have that vSwitch; you can fix that by setting up rules in the DRS Groups Manager to disallow vMotions to the hosts without it.
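Something like this from the ESXi 4.1 console, assuming VLAN 100 and vmnic3 as the uplink (both made up - substitute your own):

    # Create a dedicated vSwitch for the internet traffic
    esxcfg-vswitch -a vSwitch2
    # Link the physical uplink that carries the internet VLAN (vmnic3 assumed)
    esxcfg-vswitch -L vmnic3 vSwitch2
    # Add a port group for the internet VMs
    esxcfg-vswitch -A Internet vSwitch2
    # Tag the port group with the internet VLAN ID (100 assumed)
    esxcfg-vswitch -v 100 -p Internet vSwitch2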

gmitch64
Enthusiast

> isolate it to only the hosts with physical connections

Ideally, I'd like to get away from the hosts having physical connections - the ones in the 2nd datacenter certainly won't be able to have a physical connection to the hub, as they're in a different location, with only some fiber between the two centers. We want the 2nd center to be able to run the guests for failover, load balancing and DR purposes, so I don't think restricting the guests would be a good option (though admittedly it's not one I'd really considered before, so I'll have a think about it).

G

gmitch64
Enthusiast

> create a new VLAN for the internet

This kinda crossed my mind too, but I wasn't sure it would work seamlessly when a guest vMotions. Thinking about it again, of course it would - the destination host sends a RARP frame on the guest's behalf so the switches relearn which port its MAC address is behind, otherwise all vMotions would break. I guess I had 'internet' in my head and was treating it as a special case.

> The only caveat is that you will be warned each time you try to vMotion to a host that doesn't have that vSwitch

I'd set it up on a dvSwitch and let all the hosts have access to it. We're going to be stretching our backbone (and cluster) across the 2 datacenters with several VLANs anyway, so an additional one won't make any difference - it's just one more VLAN allowed on the inter-site trunk (sketch below).
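On the Cisco side that should just be a matter of allowing the VLAN on the existing inter-site trunk, roughly like this (interface and VLAN ID made up):

    interface TenGigabitEthernet1/1
     description 10G trunk to 2nd datacenter
     switchport mode trunk
     ! 'add' assumes an allowed-VLAN list already exists on this trunk
     switchport trunk allowed vlan add 100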

How secure are VLANs, I wonder... We kept everything physically separate for some physical security before, but given that we're going to have to VLAN/trunk over from the 2nd datacenter anyway, would VLANing everything open us up to any other security issues? I guess we could mitigate some of that by applying ACLs on the VLAN and allowing only the firewall and the hosts to have access to it.

G

mittim12
Immortal

Looks like you’re going the VLAN route then ☺

Ande11
Contributor

ACLs should give you as much segregation as you need. To tighten things further, you could make that VLAN a DMZ and only allow certain networks/hosts to talk to certain networks/hosts - something like the sketch below.
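As a rough sketch (assuming the stack routes the internet VLAN via an SVI; the VLAN ID, ACL name and addresses are all made up), an extended ACL like this would let only one internal network reach the DMZ subnet:

    ! Allow only the management network (10.0.10.0/24, hypothetical) to reach
    ! the DMZ subnet (192.0.2.0/24, hypothetical); block and log the rest
    ip access-list extended DMZ-PROTECT
     permit ip 10.0.10.0 0.0.0.255 192.0.2.0 0.0.0.255
     deny ip any 192.0.2.0 0.0.0.255 log
    !
    ! Applied outbound on the DMZ SVI, i.e. to traffic routed into VLAN 100
    interface Vlan100
     ip access-group DMZ-PROTECT out

If the VLAN is purely layer 2 up to the firewall (no SVI on the stack), it's already isolated from the routed network, and a VACL would be the tool for filtering inside the VLAN itself.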
