If you have been following my posts (which I would guess no one has), you might know that I'm trying to set up an isolated lab using ESX and VirtualCenter. Essentially, templates are cloned and the resulting VMs are placed on isolated vSwitches with no NIC uplink. That way we can clone 5 Windows VMs, install a virus on one, watch it propagate among all 5, and not worry about it getting out.
The problem is with IPs. We need to quickly set up and tear down these multi-machine configurations, including IP settings. Lab Manager is way out of our price range, so we're writing our own custom software using the VI SDK API. We also want to be able to use Linux templates such as BackTrack, Ubuntu, etc., which are not supported by template customization. We tried blowing away the pre-built Perl scripts for VirtualCenter and writing our own, but we're still not able to reliably customize Linux templates.
So, we have figured out we have two choices on how to proceed:
1. Have a DHCP server hand out IPs
2. Use legacy networks (e.g. NATed networks) on ESX
Now, these separate experiments need to be isolated. Those 5 Windows machines I described above shouldn't be able to talk to a couple of Ubuntu machines we're running separately. So, we thought we might use VLANs to separate them. Essentially each experiment would get a port group and unique VLAN on a vSwitch, and a DHCP server would sit on a port group with VLAN 4095 (VLAN ALL) and hand out IPs. I tried setting this up with 3 BackTrack machines: one on VLAN 1, one on VLAN 2, and one on VLAN 4095 (all on the same vSwitch). None of them could talk to one another. Do I need to install a separate network driver that can handle tagged traffic for a Linux box?
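For what it's worth, the port-group layout I'm describing can be built from the ESX service console with esxcfg-vswitch. The names here (vSwitchLab, exp1, exp2, dhcp-trunk) and the VLAN IDs 10/20 are just placeholders for illustration:

```shell
# Create an isolated vSwitch; no -L (uplink) means no physical NIC is attached
/sbin/esxcfg-vswitch -a vSwitchLab

# One port group per experiment, each with its own VLAN tag
/sbin/esxcfg-vswitch -A 'exp1' vSwitchLab
/sbin/esxcfg-vswitch -v 10 -p 'exp1' vSwitchLab
/sbin/esxcfg-vswitch -A 'exp2' vSwitchLab
/sbin/esxcfg-vswitch -v 20 -p 'exp2' vSwitchLab

# Port group for the DHCP server; VLAN 4095 passes all tagged traffic
# (the guest on this port group has to do its own 802.1q tagging)
/sbin/esxcfg-vswitch -A 'dhcp-trunk' vSwitchLab
/sbin/esxcfg-vswitch -v 4095 -p 'dhcp-trunk' vSwitchLab
```

Note that VLAN 4095 puts the guest in VGT (virtual guest tagging) mode, so the DHCP server VM needs a driver and OS configuration that handle 802.1q frames itself.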
Or, should I use Legacy networks? I noticed that when I added a network card to a VM, I had the choice to give it a standard port group backing, or a legacy network backing (although there were no legacy networks). Are they being phased out (hence the legacy term)? How do I create them? Is it possible to create them via the VI SDK?
If I can get either of these working (legacy networks sounds a little "cleaner"), then I can have a nice functioning lab. Can anyone help me?
I was running a VM under vmware-server with the guest VM doing the 802.1q VLAN tagging. In that environment I had to set the MTU on the host and the guest to 1518 for it to work correctly. I have now attempted to move this application to ESXi 3.5 using VLAN 4095. I have managed to change the MTU on the vSwitch using the esxcfg-vswitch CLI tool, but the guest OS (Linux) doesn't yet see that it is able to increase the MTU.
The cli commands that I used were:
/sbin/esxcfg-vswitch -m 1518 vSwitch1
/sbin/esxcfg-vswitch -A 'Net 1Q' vSwitch1
/sbin/esxcfg-vswitch -v 4095 -p 'Net 1Q' vSwitch1
/sbin/esxcfg-vswitch -m 1518 -p 'Net 1Q' vSwitch1
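In case it helps anyone, you can confirm what actually got applied at the host level; listing the vSwitches shows each one's MTU and each port group's VLAN ID:

```shell
/sbin/esxcfg-vswitch -l    # lists vSwitches with MTU, plus port groups and their VLAN IDs
```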
This gave me the increased MTU at the ESX host level.
The next problem I had was that on vmware-server I was using e1000 ethernet devices/drivers in the guests. It seems that vmware-converter removed all of my e1000 devices and replaced them with pcnet32 devices. To fix this I added the following line to my vm.vmx configuration file:
ethernet0.virtualDev = "e1000"
Now my Linux guest can set the MTU to 1518 and do its own 802.1q decoding/encoding.
If you're running (or can upgrade to) vCenter 4, some of that Lab Manager functionality is built in. Select a datacenter, then select the 'IP Pools' tab; you can use that like DHCP to assign IP addresses to a specific virtual network and its VMs.