Part 4: My vCloud Journey Journal - The ProLab Network

[Diagram: the three layers - vCloud Director, vSphere and the physical network]

As part of my root-and-branch overhaul of my lab in the colocation I decided to take a knife to my existing vSwitch configuration. Previously, I'd mainly used the Standard vSwitch for all my networking - both VMs and VMkernel. Generally, I just had one Distributed vSwitch for testing purposes. In the past I would often reconfigure the lab - destroying LUNs and vCenters without a moment's thought - and sometimes the Distributed vSwitch got in the way. However, I've noticed that during my last two book-writing projects (SRM 5.0 and View 5.1) I very often have the same configuration in place from one quarter to the next. Knowing that many of the best features of vCloud Director require the use of the Distributed vSwitch made me decide to jump in with both feet and clear out the use of Standard vSwitches altogether. Already I'm feeling the benefit.

In the end I opted for two Distributed vSwitches: one for all my "infrastructure" traffic, called "Infrastructure DvSwitch", that allows the lab to work (domain controllers, vCenter Server Appliance, remote access, vShield, vCloud Director Appliance and so on), and another that I've opted for now to call "Virtual DataCenter DvSwitch". I've taken the old-fashioned view that the internal "management" traffic that allows the vSphere platform to function should be separated, for security and performance, onto different physical VMnics - away from my ordinary VM traffic.


So I now no longer have any Standard vSwitches at all... except for "vmservice-vshield-pg", which is the internal vSwitch used by vShield Endpoint. I first got started with vShield Endpoint whilst writing the View 5.1 book, and teamed up with BitDefender to use their "Security Virtual Appliance" to offer anti-virus inside the guest operating system without the need for a resident AV agent.

Most of the portgroups on the "Infrastructure DvSwitch" are pretty familiar (FT-Logging, HA-Heartbeat, IP-Storage, Management and VMotion); the "ConsoleProxy" is a portgroup specifically required for vCloud Director. vCD has two network interfaces - one to gain access to the main management web pages, and a second that's used to broker "Remote Console" sessions to individual VMs (the ones that make up vCD-backed vApps). I imagine most people would carry on using Microsoft RDP and SSH/PuTTY to gain access to their VMs even if they reside in vCD - so this console access is really there for folks who need console access for whatever reason (such as troubleshooting the boot process of Windows, perhaps).
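Once vCD is up it's easy enough to sanity-check that both interfaces are actually answering. Something like the following does the trick from any box that can reach them - a rough sketch only, and the hostnames are just placeholders for whatever addressing your lab uses (the portal and the console proxy each listen on 443 of their own IP):

    # check the main portal/API interface is responding
    curl -k -I https://vcd-portal.lab.local/cloud/

    # check the console proxy is listening on its own IP
    openssl s_client -connect vcd-consoleproxy.lab.local:443 < /dev/null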

The other thing I need to consider is how to handle my physical network. To be honest it's in a bit of a shambles right now. My first lab had a single 1Gb 24-port NetGear switch that was "unmanaged". That is to say it was just a switch with no VLANs - it quickly evolved from being "unmanaged" to being "unmanageable". After a year or so I had a few dead ports and I was also running out of them. So I replaced it with a 1Gb 48-port NetGear switch with management. Rather stupidly I didn't VLAN the network. A little bit later I was loaned a Dell PowerConnect switch when the Equallogic team loaned me two Dell Equallogic arrays. In the end I just uplinked the two switches together (again, no VLANs) and went with that. A little bit later I did some "management" by moving the VMnics that service IP storage onto the Dell PowerConnect switch along with all my storage (two NetApps & two Equallogics) - essentially physically separating the IP-based storage traffic onto a dedicated physical switch.

All that gives me a configuration change to make. Firstly, I want to ensure the VMkernel traffic is on the Dell PowerConnect switch - and the VMnics associated with the "Virtual DataCenter DvSwitch" are on the NetGear. I would like to enable those NICs for VLAN trunking and create a series of VLANs across the NetGear. That's something I've never done before with it - but I'm sure I can learn. It can't be that hard. There are MANY different ways of segmenting the network with vCloud Director - one of them is by using pre-populated network pools of VLANs - and I want to be able to play with that configuration properly in the lab. There is an obvious workaround - using VXLAN to segment the network instead - but I want to keep my options open, so I can use ALL the features that vCD offers and not allow the lab to restrict my choices.
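Once the trunk ports on the NetGear are set up, the host side of things is easy enough to verify from the ESXi shell. A rough sketch of what I have in mind - the interface names, VLAN IDs and addresses below are just placeholders for my lab values:

    # list the distributed vSwitches this host participates in,
    # along with their uplinks and the VLANs each portgroup carries
    esxcli network vswitch dvs vmware list

    # map vmnicX back to the physical switch ports I've trunked
    esxcli network nic list

    # sanity check that a VMkernel port on a tagged portgroup can still
    # reach its target (on 5.1 the -I flag picks the vmkernel interface)
    vmkping -I vmk2 10.0.50.1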

I guess there's nothing mandatory about how you lay out your switch configuration. I did consider for a while keeping all the "infrastructure" components on Standard vSwitches on each host. I'm glad I didn't, because I'm already feeling the benefit of using the Distributed vSwitch for this layer of my network. For example, I initially forgot that vCloud Director would need two NICs for its configuration - so I was able to add a portgroup for the Console Proxy very quickly. And I think that's where the change will pay dividends whilst I learn more about vCloud Director. As something comes up in the network layer that I hadn't considered or thought of, it will be very quick and easy to change the configuration - wherever gaps in my knowledge made me choose the wrong option or setup.

Of course that means doing something I've been avoiding all along - visiting the colocation. Right now I'm trying to do everything I can remotely. Although I guess I could rig an "Update Manager" configuration together and do that remotely as well. The other physical job I have is to fix up a 5th server to join the "Gold" cluster. I need to purchase a physical NIC so I can bring its NIC count up to 4 pNICs, which is my minimum for any ESX host. So the tasks needing me to visit the colocation are beginning to stack up. One thing that I thought I would have to do is make a trip to the colocation to upgrade my ESX hosts. Half of my ESX hosts don't have ILO/DRAC access on them - and they are in need of being upgraded from ESX 5.0 to ESX 5.1. As I no longer have a VUM setup in my lab I thought doing it remotely would be out of the question. However, I hadn't counted on my colleague William Lam coming up with a method of upgrading ESX 5.0 to ESX 5.1. The way his method works is putting a host in maintenance mode, temporarily opening http/s on the ESX host, and then pulling the install bundle directly down from the web with ESXCLI. Worked for me!
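For the curious, his approach boils down to a handful of commands along these lines - the image profile name here is the 5.1 GA build as far as I know, so check the online depot for the exact string before running it:

    # put the host into maintenance mode
    esxcli system maintenanceMode set -e true

    # temporarily allow the host to make outbound http/https requests
    esxcli network firewall ruleset set -e true -r httpClient

    # pull the 5.1 image profile straight down from VMware's online depot
    esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.1.0-799733-standard

    # close the firewall rule again and reboot to complete the upgrade
    esxcli network firewall ruleset set -e false -r httpClient
    reboot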
