2 Replies Latest reply on Aug 20, 2014 7:59 AM by JPM300

VCAP-DCD Study Resources

I am looking for some study resources that will help me practice calculating compute resources (CPU, memory & NIC) for a design scenario.

I had some questions on the exam that I don't think I answered correctly because I did not calculate the resources correctly.

Thanks

Marc

• 1. Re: VCAP-DCD Study Resources

My experience on the exam was similar. Additional material on the topic would be appreciated.

• 2. Re: VCAP-DCD Study Resources

Hey guys,

I haven't attempted the DCD yet, and I know you can't really give out examples, but is this the kind of thing you are looking for:

Example:

Company X Transport wants to run 400 VMs with 15% growth expected each year for the next 3 years

400 VMs

426 GHz CPU capacity, 615 GB memory capacity

Each of the current VMs will generate an average of 60 IOPS at a 65/35 read/write split and will require 38 GB of storage capacity. Average network utilization for each workload is around 60 Mbps.

Question: Using a server with 16 cores (dual 8-core CPUs) at 2.2 GHz/core and 96 GB of RAM, how many servers are required to meet the capacity and growth expectations?

So at 2.2 GHz per core with 16 cores per host, each host has a total of 35.2 GHz.

426 GHz / 35.2 GHz = 12.1, round up to 13 hosts to meet the current CPU capacity of the 400 VMs.

Growth is 15% per year, which is 60 new VMs per year (400 * 0.15). Total VM count after 3 years of growth is 400 + 180 = 580 VMs, which means the total CPU capacity in 3 years will be 191.7 + 426 = 617.7 GHz. This assumes each VM's average CPU consumption is still only 1.065 GHz (426/400).

617.7 GHz / 35.2 GHz = 17.55 hosts, round up to 18 to meet current CPU capacity plus growth; however, this is at 100% utilization.

Assuming we want each host to run at 80%, let's recalculate:

35.2 GHz * 0.80 = 28.16 GHz per host at 80%.

At 80% utilization: 617.7 GHz / 28.16 GHz = 21.94 hosts, round up to 22.
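FWIW, the CPU math above can be sketched in a few lines of Python. This is just the example's made-up numbers plugged into a script, using the same simple (non-compounding) growth assumption:

```python
import math

# Example inputs from the scenario above (made-up numbers, not real data)
ghz_per_core = 2.2
cores_per_host = 16
current_capacity_ghz = 426.0
current_vms = 400
growth_rate = 0.15      # simple, non-compounding growth, as assumed above
years = 3
target_util = 0.80      # don't run hosts past 80% utilization

host_ghz = ghz_per_core * cores_per_host              # 35.2 GHz per host
ghz_per_vm = current_capacity_ghz / current_vms       # ~1.065 GHz per VM
new_vms = round(current_vms * growth_rate) * years    # 60 VMs/year * 3 = 180
future_ghz = (current_vms + new_vms) * ghz_per_vm     # ~617.7 GHz for 580 VMs

hosts_at_100 = math.ceil(future_ghz / host_ghz)                 # 18 hosts
hosts_at_80 = math.ceil(future_ghz / (host_ghz * target_util))  # 22 hosts
print(hosts_at_100, hosts_at_80)
```

The `math.ceil` calls are just the "round up to the next whole host" step, since you can't buy a fraction of a server.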

Now with 22 hosts for this client, the memory calculation is easy: 22 * 96 GB = 2112 GB.

Since we only need 615 GB of memory capacity (around 892 GB after the same 3 years of growth), we easily absorb this, but let's calculate running the hosts at 80% usage for memory as well: 96 * 0.80 = 76.8.

76.8 GB per host * 22 hosts = 1689.6 GB

Seeing this, we are exceeding the memory requirement by roughly 1 TB, so in this design we could probably scale the memory per host back to something much smaller, like 64 GB, which would still keep us within 80% usage and leave room for expansion.
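The memory check can be sketched the same way (again, just the example's numbers; the 64 GB figure is the hypothetical scaled-back config, not anything from the scenario):

```python
# Example inputs from the scenario above (made-up numbers)
hosts = 22                  # host count from the CPU sizing
target_util = 0.80          # cap each host at 80% memory usage
current_mem_gb = 615.0
vms_now, vms_future = 400, 580

def usable_gb(ram_per_host_gb):
    # Cluster-wide usable memory with each host capped at 80%
    return hosts * ram_per_host_gb * target_util

# Scale the memory requirement by the same VM growth as the CPU calc
future_mem_gb = current_mem_gb * vms_future / vms_now   # 891.75 GB

# 96 GB hosts give 1689.6 GB usable; even scaled-back 64 GB hosts
# give 1126.4 GB, which still covers the grown workload with headroom
print(usable_gb(96), usable_gb(64), future_mem_gb)
```

So the script agrees with the eyeball check: 64 GB per host is still comfortably above the grown memory requirement at 80% usage.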

Is stuff like this what you were referring to, or am I way off beat?

I also apologise if any of the math is wrong; I did this really quickly.