Hi
I have a vSphere 5 ESXi environment where I host all my virtual servers, with a virtual vCenter Standard instance managing them. I am in the process of working out what I need for a VMware View setup, and I was hoping someone here could help.
I have 12 physical blades to host my VMware View solution (after calculating the number of desktops) and I have the VMware View bundle licensing.
I've been reading through the notes, and it states that this bundle comes with a vCenter edition.
A couple of questions:
Should my existing vCenter instance be used to manage the VMware View hosts, or should I install a separate vCenter instance for VMware View and use the license in the bundle?
I assume I should be installing the ESXi instances included in the bundle, which will allow me to provision desktops only?
If I have Connection and Composer servers, should they be hosted in the VMware View clusters or in the existing vSphere 5 server cluster?
If that is the case, I also have a question regarding the networking setup. When I install ESXi for VMware View, will I still need a management console, vmkernel, and "front end" (for user IP addresses) vNICs on the vSwitch?
One more question regarding round-robin DNS. I'm also looking to install two Connection Servers and two Composer servers, load balanced. Does anyone have any info on setting up round robin in DNS?
Many Thanks
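On the round-robin DNS question: the idea is simply multiple A records under the same name (e.g. two records for a single Connection Server alias), with the DNS server rotating the answer order on each query. The hostname and addresses below are made up for illustration; this Python sketch only simulates the rotation behavior, and note that plain round robin has no health checking, so a failed server still receives its share of lookups.

```python
from itertools import cycle

# Hypothetical A records for one load-balanced name. In the DNS zone
# this would be two A records sharing the same name:
#   view.example.local.  IN  A  10.0.0.11
#   view.example.local.  IN  A  10.0.0.12
connection_servers = ["10.0.0.11", "10.0.0.12"]

# A round-robin DNS server rotates the answer order per query, so
# successive clients land on alternating Connection Servers.
rotation = cycle(connection_servers)

def resolve(name):
    """Simulate one round-robin lookup of the balanced name."""
    return next(rotation)

first = resolve("view.example.local")   # 10.0.0.11
second = resolve("view.example.local")  # 10.0.0.12
third = resolve("view.example.local")   # back to 10.0.0.11
```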
We use a 6 physical NIC configuration with network based storage.
You'll have a default port group for management. vMotion is split off to its own IP, and these two vmkernel ports are on the same vSwitch, with the pNICs active for one and standby for the other.
iSCSI should be separated onto its own vSwitch and pair of NICs as well, backed by bound vmkernel ports for redundancy and load balancing.
That leaves at least another vSwitch and pair of pNICs for actual vm traffic.
-KjB
Message was edited by: kjb007 : Clarified vSwitch
We use a separate vCenter for desktops. This makes it much simpler when we need to perform maintenance. View uses the vCenter differently as well, so separating vCenter was an easy choice.
Thanks for the reply. I assume the install is pretty much the same as what's involved for vSphere 5?
It is typically best practice to separate the two out whenever possible. Along with what KJB said about performance monitoring I also felt it was simpler to assign permissions to our desktop management team without fear of them being able to access server infrastructure.
The install is the same. The license key is what limits you; all else remains the same.
In our "pod", we run our connection brokers as well. Pretty much what we need to support our desktops runs within that pod.
-KjB
Just an update on this.
I've had most of my questions answered, but I was just wondering about the vSwitch setup.
Is it a management port, a vmkernel port for vMotion, and a vmkernel port for iSCSI that are needed?
Thanks
Thanks for your help