Ghell
Contributor

Completely virtual network

Jump to solution

I currently have a single public-facing Linux guest on ESX 4. I would like to split it into several guests that share a network that is completely internal to the ESX server. For example, taking MySQL off the public-facing guest and giving it its own virtual machine that can be accessed by the existing guest over a virtual NIC.

I did this years ago on VMware Server 1.0 by using a "Custom" network interface (rather than NAT, Bridged, or Host-only), such as VMnet4. I can't work out how to do it on ESX 4, though, because as far as I can tell the interfaces are all similar to what "Bridged" was in Server 1.0.

How can I do this so that several virtual interfaces can talk to each other but under no circumstances to the outside world?

The reason I am so insistent on them not being able to talk to the outside world is that I once had my contract terminated by a data center because a virtual network like this had a DHCP server on it, and it was somehow affecting the rest of the data center, causing problems with physical machines that didn't belong to me. I never worked out how that happened, but I don't want to run that risk again (I'm not planning to use DHCP on this virtual network anyway, but better safe than sorry).

1 Solution

Accepted Solutions
jesse_gardner
Enthusiast

I hope I don't over-simplify this for you. Network traffic will stay internal if the vSwitch that the VMs are connected to doesn't have any physical (uplink) NICs connected to it.

Create a virtual switch with no pNICs attached, then create a second vNIC in each VM attached to that switch. Of course, you'll have to make sure the IP addressing scheme works out. This assumes you don't need routing (which you shouldn't; this should be a simple setup).
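On ESX 4 this can also be done from the service console with the esxcfg-vswitch tool; a minimal sketch, where the switch and port group names ("vSwitch1", "Internal") are example names, not anything your host already has:

```sh
# Create a new vSwitch. No physical uplink is ever attached
# (esxcfg-vswitch -L is never run), so its traffic cannot reach the wire.
esxcfg-vswitch -a vSwitch1

# Add a port group for the VMs' internal-network vNICs to connect to.
esxcfg-vswitch -A "Internal" vSwitch1

# Verify: the new switch should show nothing in its Uplinks column.
esxcfg-vswitch -l
```

After that, each VM's second vNIC is assigned to the "Internal" port group in its settings, and the guests are given static addresses in a private range.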


3 Replies
jbogardus
Hot Shot

If you also have your vSphere environment licensed to use the vNetwork Distributed Switch, you can set it up in a similar way to what Jesse described, but with the virtual switch shared between multiple hosts. That way, VMs running on separate hosts can communicate with each other over the distributed switch, but not with physical servers outside the VMware environment. If you use a standard vSwitch with no physical NIC connected, the VMs can only communicate with other VMs on the same host, not with VMs on other hosts.

Ghell
Contributor

Thanks, I stumbled on the vSwitches while I was waiting for replies (and also found an explanation that what used to be NAT is done the same way, but with a VM handling the routing instead of the host, which is what I was doing years ago).

It's all working now so I'm just waiting for confirmation from the data center that it's not messing anything up on their end.
