I have just passed my first certification this week (VCA-DCV) and now want to try and put it to use.
I have 3 existing ESXi hosts, all with live running VMs. Two of the hosts are 5.5 and one is 5.0 (to be upgraded shortly).
My question is regarding clustering. We currently have Essentials licensing and I am looking to upgrade to Essentials Plus for the vMotion capabilities (now that 5.5 is released you can vMotion using DAS, I think).
I have created my cluster and enabled fully automated DRS and HA. My first question is about EVC mode. All my machines are HP ProLiant DL380s, 2 are G7s and 1 is a G6, and I am trying to get a clearer idea of what EVC is. I understand that it is a CPU feature level that the hardware presents to the VMs to maximize compatibility when vMotioning, but what I don't get is this: if you apply it to hosts, what's to stop you adding AMD and Intel hosts and setting the level to Intel Sandy Bridge, for example?
My understanding is thus:
When you enable EVC for AMD hosts, for example, ONLY AMD hosts will be permitted to join the cluster, and similarly if you enable EVC for Intel, only Intel hosts will be permitted. So if you enable EVC for Intel you would not be able to add an AMD host to the cluster, and vice versa?
So if this is correct, what is the EVC mode where you specify a family of CPUs, i.e. "Sandy Bridge", "Merom", etc., actually for?
Next question is about resource pools. I understand them on the whole, but what is people's advice when adding existing hosts running live VMs to newly created clusters: should I add the hosts' resources to the cluster root pool, or maintain and "graft" them?
Can someone explain the pros and cons of each, considering all my hosts are using DAS and I am not using a SAN?
Finally, my last question is about vMotion itself. If I am using DAS for datastores (3 hosts, 3 datastores), will I need to do anything different when setting up vMotion and DRS? All hosts are on the same network, they can all see and access each other fine, and they are all in the same vCenter and can be managed centrally. Will it be as simple as adding all 3 hosts to a DRS-enabled cluster and, hey presto, DRS works intelligently along with vMotion? Or will DRS work fine but I need to do something differently for vMotion to work? Or will both components need special config considerations? I vaguely remember something online talking about either VSAN or VSA.
Sorry for the essay, but I could do with these points clearing up before I do anything further, and I'm sure somewhere on here there will be a VMware wizard who will be able to answer this instantly!
Well done on passing your VCA-DCV!
When you enable EVC on your cluster you will need to power off all the VMs on the hosts or move them to a spare host outside your cluster.
Also, when you enable EVC it will check that all your hosts are the same CPU vendor, e.g. Intel or AMD, as mixing them isn't allowed.
You can then migrate your VMs back in (or power them back on).
If at a later date you try to add an AMD host to your Intel-based cluster, adding the host will fail. Simples.
Where do you change the setting?
Follow that link for how to set the compatibility.
A big complex topic, too big for an answer here.
Read this blog post by Chris Wahl
It has pancakes fighting in it too, which makes things easier to remember 🙂
Another big question: "will it just work?" 🙂
Well, probably not if you've never used it before. But if you can already vMotion a VM between all the hosts that will be clustered, then yes: create the cluster, enable DRS, and it will work.
If you've never used vMotion before, you will need to ensure you have a VMkernel port and a vMotion network configured on each host.
Again "there's a post for that"
VSAN or VSA are topics for a few months or years down your VCA/VCP/VCAP development and your environment doesn't sound like it needs them yet 😉
Hi Neal, thanks for that massive response! Really appreciate it.
On EVC: it appears to let me add a host to my cluster just by dragging it in in vCenter, and I also did this in a test lab and it let me move the host to the cluster without powering off the VMs? Additionally, how do you know which processor mode to set, i.e. Sandy Bridge, Nehalem, etc.? Do you just set it to that of your newest server, or the oldest?
Does vMotion require an independent IP/switch then? If it does, I will need to create a new vSwitch, remove a physical NIC from my current vSwitch, add it to the newly created vSwitch, and give it an independent IP?
Or can I use the existing virtual switch that all my VMs use, give that switch an IP address in the config wizard, and specify it for vMotion? If so, will this have a knock-on effect on any of my VMs when NOT vMotioning?
Lastly, my switches are configured using LACP to team the NICs on each host; will this cause a problem with vMotion?
Thanks for the help so far.
EDIT: Just found the option for 'Supported EVC Modes' in the host summary tab. Two of my hosts support 'Merom', 'Penryn', 'Nehalem', and 'Westmere', and one supports 'Merom', 'Penryn', and 'Nehalem', so I have set my cluster EVC mode to 'Nehalem', as this is the latest CPU type that all my hosts support.
You will need to look up the microarchitecture of your CPUs for each host. CPUs are backward compatible: for example, Ivy Bridge will have the instruction sets from Sandy Bridge, but Sandy Bridge will not have the instruction sets of Ivy Bridge. So if you have both of those CPUs, you will have to set EVC to Sandy Bridge. See the links below.
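To make the rule concrete, here is a minimal sketch of the "pick the newest mode every host supports" logic. This is an illustration only: the mode list is a simplified subset of Intel generations named in this thread (the actual EVC baseline names in vCenter differ slightly), and the host lists mirror the poster's two G7s and one G6.

```python
# Hypothetical sketch: choose a cluster EVC baseline as the newest
# CPU generation that every host in the cluster supports.

# Simplified Intel generation order, oldest to newest (illustrative subset)
EVC_ORDER = ["Merom", "Penryn", "Nehalem", "Westmere", "Sandy Bridge", "Ivy Bridge"]

def pick_evc_baseline(hosts_supported_modes):
    """Return the newest mode common to all hosts, or None if there is none."""
    common = set.intersection(*(set(modes) for modes in hosts_supported_modes))
    for mode in reversed(EVC_ORDER):  # walk newest-first
        if mode in common:
            return mode
    return None

hosts = [
    ["Merom", "Penryn", "Nehalem", "Westmere"],  # G7
    ["Merom", "Penryn", "Nehalem", "Westmere"],  # G7
    ["Merom", "Penryn", "Nehalem"],              # G6
]
print(pick_evc_baseline(hosts))  # Nehalem
```

Which matches the poster's own conclusion above: with two G7s and a G6, 'Nehalem' is the highest common baseline.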
There are lots of ways to set up vMotion, and a lot of wrong ways to do it. See the section "Recommended vMotion networking best practices". That is the simplified answer that will work 70-80% of the time in small- to medium-size deployments.
Creating a VMkernel port and enabling vMotion on an ESXi/ESX host (2054994) - http://kb.vmware.com/kb/2054994
The best practices section is also worth a read.
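If you prefer the command line, the same setup the KB walks through can be sketched with esxcli on each host. Treat this as a hedged example, not a recipe: 'vSwitch0', 'vmk1', and the IP addressing are placeholder values, so substitute your own (and you can do all of this through the vSphere client's Add Networking wizard instead).

```shell
# Sketch: create a vMotion port group and VMkernel interface on ESXi 5.x.
# 'vSwitch0', 'vmk1' and the IP details below are example values -- adjust.

# 1. Add a port group for vMotion traffic on an existing standard vSwitch
esxcli network vswitch standard portgroup add \
    --portgroup-name=vMotion --vswitch-name=vSwitch0

# 2. Create a VMkernel interface on that port group
esxcli network ip interface add \
    --interface-name=vmk1 --portgroup-name=vMotion

# 3. Give it a static IP on your vMotion subnet
esxcli network ip interface ipv4 set \
    --interface-name=vmk1 --ipv4=192.168.50.11 \
    --netmask=255.255.255.0 --type=static

# 4. Tag the interface for vMotion traffic
vim-cmd hostsvc/vmotion/vnic_set vmk1
```

Repeat on each host with a unique IP on the same vMotion subnet.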
Does anyone have any advice on the use of resource pools? When adding my existing hosts into a new cluster, should I add the resources to the cluster pool or maintain the pools as-is?
I guess it depends on what machines are on the host?
Does maintaining the RP mean the machines on that host continue to have access only to the resources they currently have on the individual host, as opposed to having access to the resources of all 3 hosts if I select 'Add this host's resources to the cluster pool'?
So I guess it depends what apps are on the servers: if they are all equally used file servers, then I would be better adding them all to the same pool and letting them balance out, but if they are DB servers or things that need a set amount of dedicated resources, I would be better grafting the RP from the host.
Is that about right?
Hi again Ted,
It's really not an easy one to comment on without detail; it depends on what your resources vs. requirements are and what you want to pool between what.
There is no simple formula for resource pools. You would need to analyse the CPU/memory requirements of your VMs and pool them appropriately based on the cores and memory available in your cluster.
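As a rough illustration of that sizing exercise, here is a tiny sketch that totals up per-VM requirements and checks them against cluster capacity. Every figure below is invented for the example; the point is only the shape of the analysis, not the numbers.

```python
# Hypothetical sizing sketch: do the summed VM CPU/memory requirements
# fit within the cluster's aggregate capacity? All figures are made up.

# 3 hosts, each 2 sockets x 6 cores @ 2400 MHz, 96 GB RAM
cluster = {"cpu_mhz": 3 * 2 * 6 * 2400, "mem_mb": 3 * 96 * 1024}

vms = [
    {"name": "file01", "cpu_mhz": 2000, "mem_mb": 4096},
    {"name": "db01",   "cpu_mhz": 6000, "mem_mb": 16384},
    {"name": "db02",   "cpu_mhz": 6000, "mem_mb": 16384},
]

def fits(cluster, vms):
    """True if the summed VM requirements fit within cluster capacity."""
    need_cpu = sum(v["cpu_mhz"] for v in vms)
    need_mem = sum(v["mem_mb"] for v in vms)
    return need_cpu <= cluster["cpu_mhz"] and need_mem <= cluster["mem_mb"]

print(fits(cluster, vms))  # True
```

In practice you'd do this per pool, not per cluster, and factor in reservations, limits, and shares, but the same "requirements vs. available" comparison is the starting point.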