six4rm
Enthusiast

Cloud Pod Architecture and Load Balancing - Single Point of Entry


Hi,

I'm looking at our DR setup for our View environment, and Cloud Pod Architecture seems to be the way to go. One question that has arisen, and that I've not been able to find a definitive answer for, is whether it's possible to use load balancing of some flavour to maintain a single point of entry into the View environment. Could I have a Connection Server in View Pod A and a Connection Server in View Pod B, then put a load balancer in front, so that users still connect to view.domain.com and are directed to either Connection Server? Would it be possible to use Windows NLB for this?
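To make the behaviour I'm after concrete, here's a rough sketch of what the load balancer would be doing: rotating across a Connection Server in each pod and skipping any that fail a health check. The hostnames are made up for illustration.

```python
# Sketch of round-robin brokering across two pods with health checks.
# Hostnames are hypothetical, for illustration only.
from itertools import cycle

CONNECTION_SERVERS = [
    "cs-poda.domain.com",  # Connection Server in View Pod A
    "cs-podb.domain.com",  # Connection Server in View Pod B
]

def pick_server(rotation, is_healthy):
    """Return the next healthy Connection Server, or None if all are down."""
    for _ in range(len(CONNECTION_SERVERS)):
        server = next(rotation)
        if is_healthy(server):
            return server
    return None

# One shared rotation gives alternating answers while both pods are up.
rotation = cycle(CONNECTION_SERVERS)
```

Whether the load balancer is Windows NLB, a hardware appliance, or something else, this is essentially the decision it would make on each new connection to view.domain.com.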

What I want to achieve is to build a second View Pod at a different geographic location to complement our existing setup at our Head Office. This new Pod will be live and actively used, rather than a traditional DR setup where everything sits dormant until the sh*t hits the fan. We'd maintain a single point of entry into the environment, and each user would be brokered to their correct floating linked-clone desktop, whether it's in Pod A or Pod B.

Hopefully that makes sense.

Any questions just let me know.

1 Solution

Accepted Solutions
VUTiger
Enthusiast

Yes, this is definitely possible.

We currently have a global namespace set up for both internal and external access. It is bound to a VIP on our load balancers (currently Radware), and that VIP points back to the real servers on both sides.

We then leverage Cloud Pod Architecture to set up dedicated or floating pools. Users can log in and be brokered to either side, or to both.

If you leverage a GTM (Global Traffic Manager, essentially smart DNS round robin), it can apply weights or restrictions so that users only hit one side of your data centers until it becomes unavailable, at which point traffic fails over to the other side.
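As a rough illustration of that failover logic (the site names, VIPs, and weights below are invented for the example, not real GTM configuration):

```python
# Sketch of weighted GSLB-style DNS failover: answer with the preferred
# site's VIP while it passes its monitor, otherwise the standby's.
# All names, addresses, and weights are hypothetical.

DATA_CENTERS = [
    {"name": "site-a", "vip": "10.0.1.10", "weight": 100},  # preferred side
    {"name": "site-b", "vip": "10.0.2.10", "weight": 0},    # standby until A fails
]

def resolve(healthy_sites):
    """Return the VIP that DNS should hand out, given the set of healthy sites."""
    candidates = [dc for dc in DATA_CENTERS if dc["name"] in healthy_sites]
    if not candidates:
        return None
    # Highest weight wins; a real GTM can also split traffic by ratio.
    return max(candidates, key=lambda dc: dc["weight"])["vip"]
```

The same shape works for an active/active split: give both sites non-zero weights and hand out answers in proportion instead of always taking the maximum.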

Please remember that App Volumes and RDS apps are not supported with Cloud Pod Architecture, just the virtual desktops.

We currently have two pods across two sites, leverage Cloud Pod, and have built the whole infrastructure in duplicate so that each site is isolated and independent.

We also leverage DFS-R on Windows file services to ensure data is replicated and in sync between both locations. Then, using DFS Namespace, we weight each side so that users at Site A hit the file servers at Site A, and vice versa.
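The site weighting works roughly like this (server and site names here are made up for illustration, not our actual namespace):

```python
# Sketch of DFS Namespace referral ordering: every site's target is a
# valid answer, but the client's own site is listed first so it is
# preferred, with the remote replica as fallback. Names are hypothetical.

TARGETS = {
    "site-a": r"\\fs-sitea\files",  # DFS-R replicated share at Site A
    "site-b": r"\\fs-siteb\files",  # DFS-R replicated share at Site B
}

def referral_order(client_site):
    """Return DFS targets ordered with the client's own site first."""
    local = [t for site, t in TARGETS.items() if site == client_site]
    remote = [t for site, t in TARGETS.items() if site != client_site]
    return local + remote
```

Because DFS-R keeps both shares in sync, falling back to the remote target during an outage still gives the user the same data.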

F5 also has a BIG-IP Virtual Edition that works well and can provide GTM, LTM (load balancing), and APM (Access Policy Manager) for both internal and external access.


3 Replies
six4rm
Enthusiast
Enthusiast

VUTiger, thanks for the rapid response.

That's great news that this configuration is possible. Where do you situate the load balancer, though, assuming it's hardware, which I'm guessing yours is?

My next thoughts were actually around user profile data, and you've pretty much just answered that one for me. Do you use Windows-based DFS?

VUTiger
Enthusiast

Currently we have physical Radware appliances with dedicated vADCs (virtual application delivery controllers) for LAN and DMZ traffic.

We are evaluating BIG-IP Virtual Edition with F5; with that, it would sit in our virtual environment but have multiple interfaces for LAN, DMZ, etc.

As we are looking at the 'Best' package, it would also include a firewall, which we could leverage to ensure the right traffic is getting to the right destinations.

You can get them in 1 Gbps, 3 Gbps, and 10 Gbps virtual models for bandwidth, and they fill a lot of roles. Tie all that together with GTM modules between data centers and it's a very clean solution.

We currently break out user data across a multitude of servers. We have home drives on one, with a DFS-R partner at the other site. This drive is also where we redirect the Desktop, My Documents, Pictures, Favorites, etc.

This way it's in one place that the user can access via VDI, or from a physical machine if needed. It also serves as a clean backup and restore target for users.

We then leverage another server for profile and redirect data. This holds the user-specific settings that are not accessed as routinely as the Desktop or My Documents, but that Persona Management redirects off the desktop. We also have a share on this server for users' roaming profiles, separate from the others, to ensure roaming data is synced from their sessions between data centers.

Finally, we have just started trying AppData\Local redirection to the Persona redirects drive, to see if we can persist some Lync and Outlook items that don't seem to want to transfer into the roaming profile.

The fear with AppData\Local is corruption of Exchange data and screwing up the end user's system. However, we are in online-only mode with Exchange, so we hope it will not get corrupted in this state.
