Just reading info on host sizing for the VCAP-DTD, and it seems 8 VMs per core is the number.
So if I had two hosts with 2 x quad-core procs in them, would that mean 64 VMs per host? And if I wanted the host to be only 80% utilized, would that make it 32 VMs per host, or is my math wrong?
I'm not sure if the 8 VMs per core already includes the host being 80% utilized, or do you have to take 80% of 64?
Any ideas?
The Architecture Planning guide says 8 to 10 VMs per core. So 8 is fine. However I think your math is a bit off...
64 VMs * 0.8 utilization = 51.2, so figure 51 or 52 VMs per host for 80% utilization. Or you can calculate it as 8 cores * 0.8 = 6.4, then 6.4 * 8 VMs per core = 51.2.
Sorry, that was a typo. 50% utilization gets you 32 VMs.
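The arithmetic above can be sketched as a small helper (the function name and default values are illustrative; the 8-VMs-per-core guideline comes from the Architecture Planning guide):

```python
def vms_per_host(cores_per_host: int, vms_per_core: int = 8,
                 target_utilization: float = 0.8) -> int:
    """Desktop VMs to plan per host at a target CPU utilization.

    Rounds down, since you can't run a fraction of a VM.
    """
    return int(cores_per_host * vms_per_core * target_utilization)

# Two quad-core processors = 8 cores per host:
print(vms_per_host(8))                          # 51 at 80% utilization
print(vms_per_host(8, target_utilization=0.5))  # 32 at 50% utilization
```

So the utilization factor is applied on top of the 8-per-core guideline, not baked into it.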
I have used the simulator on the VMware site, and the scenario is very, very basic. I have heard there are much more complex scenarios in the exam, e.g. storage tiering.
How would you place the connectors between a host cluster / pool / storage?
Yes, the test scenarios are definitely more complex. I failed my first attempt; I have designed multiple vSphere and View environments, and I didn't take this nearly as seriously as I should have. But to your question: I would connect a pool to a cluster of hosts, and the hosts to storage. A pool can't get to storage without getting to a cluster first. You will need to know how many Connection Servers, Security Servers, vCenters, Composers, etc. are required based on the scenario given. Remember that this exam is based on View 5.1, not 5.2, which has some different sizing criteria for View Pods and Blocks.
At least I hear we get a calculator now. Having to do long division really ate up my time. I think I might have passed if I hadn't been so rushed.
Thanks for the help
What is really throwing me at the moment is that the blueprint's links are all to 5.1 except for storage sizing, which links to the 5.2 document. That tells me I can size for 32-host VMFS clusters for linked clones. Would you agree?
Also, with regard to the connectors we spoke about: if it were a shared-storage environment where all hosts were connected to all storage, and we only had connectors between the pools and the host clusters and then from the clusters to the storage, it wouldn't be clear which storage each pool sat on. Does that make sense? Or should it be drawn so that the host cluster is connected only to the storage its pool uses, and not to all of it?
You can have 32 hosts in a cluster for View 5.1. However, you cannot put your replicas on a separate datastore when doing that. Tiered storage for linked clones (separating the replica from the deltas) is limited to 8 hosts in a cluster.
I see what you are saying about storage. I recall seeing something like this on the exam. Let me think that through and get back to you.
I need to correct a previous entry I made here. I said:
you can have 32 hosts in a cluster for View 5.1. However you cannot put your replicas on a separate datastore when doing that. Tiered storage for linked clones (separating the replica from the deltas) is limited to 8 hosts in a cluster.
This is not correct. To use 32 hosts in a View cluster that uses View Composer to create linked clones, you must put your replicas on an NFS datastore. Putting your replicas on any other type of datastore limits you to only 8 hosts in the View cluster.
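The corrected rule can be captured as a one-liner, which is handy to memorize for the exam (the function is purely illustrative; the 32-vs-8 limits are the View 5.1 figures stated above):

```python
def max_hosts_in_view_cluster(replica_on_nfs: bool) -> int:
    """View 5.1 with View Composer linked clones:
    32 hosts per cluster only if the replica is on NFS, otherwise 8."""
    return 32 if replica_on_nfs else 8

print(max_hosts_in_view_cluster(True))   # 32
print(max_hosts_in_view_cluster(False))  # 8
```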
It makes it all the more confusing that the blueprint's architecture document is based on 5.1 while the storage document in the blueprint is based on 5.2, which allows for 32 hosts on VMFS.
Agreed, but I know how busy the certification and testing guys are. They've been putting out new exams (the VCAs) and have to get the VCAP5-DTA and VCDX-DT ready for release. I'd rather they focused on that.