I'm hoping to get some design input on a new platform being built.
Dual datacentre design, roughly 15m / 20km apart, with dark fibre connectivity for backups, storage replication, and layer-2 stretched VLANs.
One datacentre is a brand new greenfield deployment, the other is an existing environment.
All systems being deployed are new, so standing up something separate and different from the existing vSphere platforms.
Utilizing high-density blade infrastructure for 95% of the environment, with some rack servers for specific purposes, including a management cluster.
vCenter solution deployed in both sites in an active/active configuration.
vCenter SRM incorporated to provide failover for solutions that are not clustered at the application layer and/or not making use of Microsoft clustering.
vCenter View deployment for 1000 seats of VDI, active in both sites.
SQL 2012 and 2008 clustered solutions; 2012 is virtual and utilizes a dedicated high-performance ESXi cluster, while 2008 uses physical blades. SQL 2008 is designed this way for licensing purposes, to avoid HyperThreading issues/support concerns, and to avoid RDMs, which my client has a strong preference not to use for support reasons.
Cisco Nexus 9Ks operating in 7K mode for Access, Distribution and Core networking all running at 40Gb.
Storage platform providing both block and file services (NAS front end for the SAN) in both sites, fully capable of replication for the SAN and NAS.
Management cluster in each site utilizing 3 x 2U rack servers with a ton of local disk, using VMware vSAN to create the shared storage they will run on. Not interested in presenting SAN or NAS storage to the management cluster, because a datacentre cold start would mean core management functionality cannot be restored until the SAN comes back online. Yes, we have had this issue a couple of times now; no, we shouldn't ever have to have that experience; nevertheless, I'm designing against it. SQL services for the management solutions will have a dedicated clustered SQL 2012 solution running on the management cluster, separate from the general-purpose application SQL 2008 and 2012 solutions.
Okay, enough background. My question relates to the design and implementation of a highly available PSC component for the environment. When I look at the following blog I feel like I'm missing some key points:
We would rather not have to use MSCS to make vCenter highly available at the application layer, not least because we don't know how we would provide RDMs from vSAN to get this working. Are there other alternatives or design considerations we are not seeing? Can I make vCenter work on top of MSCS, on top of a management cluster utilizing vSAN, without presenting storage from the SAN or NAS? Should I switch to a NAS device under the rack servers to present the shared storage? That would be hard to achieve, by the way, because of support issues (currently outsourced BAU support for storage
So, if we go with the assumption that we won't put vCenter on top of MSCS, how does the rest of the design stack up? Do we just use the "Multiple Site vCenter Server and PSC Basic Architecture", and if so, what happens if the single PSC in site A dies? Will the systems in site A automatically switch to using the PSC in site B, and if so, why would we need two PSCs behind a load balancer? What problem is VMware solving there? I can only assume at this point that if the local PSC goes down, then the vCenter environment and all its components go offline or become unusable as well.
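For what it's worth, in vSphere 6.0 a vCenter Server does not automatically fail over to a surviving PSC; without a load balancer in front, recovery from a PSC failure is a manual repoint using VMware's `cmsso-util` tool, which can point a vCenter at another external PSC in the same SSO domain. A hedged sketch of what recovering from a site-A PSC failure would look like (hostnames are placeholders, and you should verify the exact paths and syntax for your build against VMware's documentation):

```shell
# On the affected vCenter Server (appliance shell or Windows command
# prompt), repoint vCenter from the failed PSC in site A to the
# surviving PSC in site B. "psc-siteb.example.com" is a placeholder
# for your second PSC's FQDN.
cmsso-util repoint --repoint-psc psc-siteb.example.com

# Afterwards, confirm which PSC this vCenter is now using
# (path shown is the appliance location of vmafd-cli):
/usr/lib/vmware-vmafd/bin/vmafd-cli get-dc-name --server-name localhost
```

That manual step is the problem the load-balanced PSC pair solves: with a VIP in front of two PSCs, a PSC failure is hidden from vCenter and no repoint is needed.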
Any ideas, comments and/or input is most welcome.
Message was edited by: Paul Kelly (included note about the SQL instance inside the management cluster).
This is similar to a design I have put together, but now can't get to work. A vCenter can only be registered to one PSC, or to a load-balanced VIP fronting multiple PSCs. Therefore my design was to use GSLB to load balance between a PSC in each of our two datacentres. However, in the F5 article http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2...
it says the VIP must be in the same IP range as the nodes, which suggests to me they only support load balancing within a site and not across sites. I don't understand why this is, as technically it should all be possible. I'll speak to VMware support to find out, but there is no documentation on the setup I am trying to achieve.
On top of that, I can't actually get the setup to work. I am now half tempted to build two separate vCenter all-in-one instances in separate SSO domains. I'm struggling to see the downsides of this; it would effectively be set up like vCenter 5.0. The reality is that each SSO domain would need to be set up with an AD source, but once that is done users can log in to either using their AD accounts, and it removes all this complexity.
Thanks for the response. So, in order for vCenter1 in Datacentre1 to make use of the PSC in Datacentre2, the vCenter servers would need to talk to the PSCs via a load balancer where the VIP and PSC IP addresses are all in the same subnet/VLAN?
It's possible for us to do that with stretched VLANs between the datacentres.
Don't take that as gospel; it's just what I'm trying to do, and failing. Bear in mind you would also have to use GSLB on the load balancers to allow for a site failure taking your primary load balancer out too. I have a call logged with VMware but suspect they won't be able to advise.