Hi all.
Just curious how people are dealing with DHCP leases for their linked clones (non-persistent pools).
If you destroy machines after use, each newly created clone takes a fresh IP address from DHCP, but the old address stays leased until the lease expires.
Eventually there will be no IP addresses left to hand out, as they will all be leased to machines that no longer exist.
Any suggestions on how people have their lease times set?
Any recommendations for non-persistent pools on preventing this?
Thanks!
A couple of things to consider, MrB:
1. How many addresses are in your address pool? This really comes down to how large your subnet is. For example, my subnet is 23 bits, or 255.255.254.0, giving me just short of 512 addresses to use. In my networking plan, I have set aside half of that for servers and services that need static IP addresses, and the other half, just under 256 addresses, is the DHCP address pool. You may have fewer or more addresses to play with. That said, compare the number of clones you have running at any one time to how many addresses you have, and then consider my next point. (There's a quick sketch of the arithmetic at the end of this post.)
2. How many non-persistent clones are you running, and how often are they being used (and then destroyed)? If your users are burning through clones at a prodigious rate, you'll have to set the lease duration lower; the heavier the churn, the shorter the lease needs to be.
Right now, I have a lot of addresses for just a few non-persistent clones (7) and just a few users (22). My lease duration is 8 days. So far, that's been sufficient. As my environment expands with more clones and more users, I expect to reduce that duration.
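To make that arithmetic concrete, here's a rough Python sketch of the steady-state math. The /23 pool split matches my setup above, but the churn rate is a made-up assumption - plug in your own numbers.

import ipaddress

# Rough sketch using my numbers above; the churn rate is a made-up
# assumption - substitute your own figures.
subnet = ipaddress.ip_network("10.0.0.0/23")
pool_size = subnet.num_addresses // 2      # half the /23 set aside for DHCP
lease_days = 8
live_clones = 7                            # clones running at any one time
destroyed_per_day = 20                     # hypothetical churn rate

# At steady state, addresses are tied up by live clones plus the stale
# leases of destroyed clones that haven't expired yet.
stale_leases = destroyed_per_day * lease_days
tied_up = live_clones + stale_leases
print(f"pool: {pool_size}, tied up: {tied_up}, headroom: {pool_size - tied_up}")

If the headroom goes negative, either the lease duration has to come down or the pool has to grow.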
Thanks for the reply!
We are evaluating for now, which means we have plenty of room to play; with all the machines we keep logging on and off (with destroy settings) during testing, we are using up leases a little. No problem in the test environment, as we will just drop the lease time.
In production, when we complete our first full rollout, we will be looking at 600 machines and upwards of 1000 users.
That works out to around 3-4 subnets (excluding the top ranges reserved for routers, printers, etc.) for VMs alone.
This will likely increase over time.
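As a sanity check on that subnet count, here's a quick sketch; the reserved top range and the headroom factor for stale leases are assumptions, not our real figures:

import math

vms = 600                  # concurrent desktops at full rollout
usable_per_24 = 254 - 10   # /24 minus an assumed top range for routers/printers
churn_headroom = 1.5       # assumed allowance for leases held by destroyed clones

print(math.ceil(vms * churn_headroom / usable_per_24))   # -> 4 subnets

So 3-4 subnets looks about right, provided the lease duration keeps the stale-lease overhead near that 1.5x allowance.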
Open access machines will be non-persistent, and physical access to terminals will be available during core hours (8:00-18:00). Users will roam and grab any terminal that is available. I guess we will have to tweak things on the fly as we watch how clients are used during the pilot stage, and, like you, reduce the lease duration as we expand.
Thanks for the feedback - anyone else out there have larger numbers and DHCP recommendations?
This is an interesting discussion...it makes me wonder:
- is there a facility in VI3 (vCenter, View Manager, etc.) to act as a NAT / internal DHCP server, so that you wouldn't have to worry about burning through "real" addresses?
- if VI3 can NAT back to cloned desktops, how does that affect resources like FlexLM licensing?
- are there any plans to support IPv6 addressing?
Thanks,
Phil
From a networking standpoint, you'd need a network boundary set up to do that. That's typically done with a router (or a Layer 3-capable switch). VMware Lab Manager has the capability to "fence" clones by creating tiny (Linux-based) router VMs and virtual switches, but you still have to provide the DHCP IP addresses that will be fenced. Oddly enough, VMware Server and Workstation can set up at least a single NAT/DHCP boundary, but VI3 cannot - I've often wondered about that.
Now that I'm thinking about it, there's probably no reason a person couldn't home-brew their own router/NAT/DHCP server with Windows RRAS (or a Linux equivalent), adding the appropriate number of virtual switches if necessary. Even though linked clones have to be set to use DHCP, a script run by VMware Tools could point the guest's default gateway at the RRAS server.
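Here's a minimal sketch of what that guest-side script might look like, assuming a Windows guest and a hypothetical RRAS box at 192.168.100.1 (untested, just to show the idea):

import subprocess

RRAS_GATEWAY = "192.168.100.1"   # hypothetical address of the RRAS/NAT box

# Drop the DHCP-assigned default route, then add a persistent default
# route through the RRAS box (Windows route.exe syntax).
subprocess.run(["route", "delete", "0.0.0.0"], check=False)
subprocess.run(["route", "-p", "add", "0.0.0.0", "mask", "0.0.0.0", RRAS_GATEWAY], check=True)

You'd schedule that to run at guest customization or first login so each fresh clone repoints itself.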
In my opinion, it would be easiest to make a separate VLAN for your desktops and create a DHCP range for them separate from your server network. This way you contain any problems to your virtual desktop VLAN, and you keep all of your routing on the routers/switches that already handle your routing and switching.
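As a rough illustration of that layout (all addresses here are made up, not anyone's real ranges):

import ipaddress

# Hypothetical layout: servers keep their existing range, desktops get
# their own VLAN with its own DHCP scope.
server_net = ipaddress.ip_network("10.0.0.0/23")     # existing server network
desktop_net = ipaddress.ip_network("10.0.10.0/23")   # new desktop VLAN

assert not server_net.overlaps(desktop_net)   # the two scopes must not collide

A lease-exhaustion problem in the desktop scope then never touches the server range.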
Thanks for the replies so far.
There are some interesting points... if anyone else wants to comment please do.
Currently we are still in our evaluation phase, so we haven't run into these issues as yet, but we will definitely have to think about them when we complete the eval.
Thanks!