I was recently involved in a situation where a development group at an SMB was tasked with consolidating two SQL servers into one. These SQL servers were physical servers running old versions of Windows, with equally old versions of SQL Server, on some pretty old hardware. The situation became interesting when the development group put in the specifications request for the new virtual machine. The request was for a 64-bit Windows 2008 server with 8 GB of RAM, 4 vCPUs and over half a TB of FC SAN disk for storage.
This seemed like a bit of a tall order, so the first thing I did was to compare the specifications in the request with the specifications of the current physical servers. Server 1 had two Pentium III 1.2 GHz processors with 512 MB of RAM and 90 GB of used disk space. Server 2 had two Pentium III 1.0 GHz processors with 2 GB of RAM and 16 GB of used disk space. Even after ignoring the massive storage difference, the numbers didn't match up with the 4 vCPUs and 8 GB of RAM specified in the request. Next I went to the system baselines, thinking that the systems might be overburdened. The baselines actually revealed that the systems weren't doing much work - 3% average CPU utilization, low disk IOPS, and very low network utilization. Using the perfmon SQLServer:Memory Manager -> Total Server Memory counter did reveal that the SQL servers were actually using the memory they were allocated. The numbers in the request still didn't add up though, and now, with data in hand, it was time to go talk to the requestors.
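As an illustration of that baselining step, here is a minimal Python sketch that averages counter values from a perfmon/relog CSV export. The server name, counter list, and sample values below are hypothetical stand-ins, not the actual data from this environment:

```python
import csv
import io
from statistics import mean

# Hypothetical perfmon CSV export (relog-style): first column is the
# timestamp, remaining columns are counter values. Raw string keeps the
# PDH-style backslashes intact.
SAMPLE_CSV = r'''"(PDH-CSV 4.0)","\\SQL1\Processor(_Total)\% Processor Time","\\SQL1\SQLServer:Memory Manager\Total Server Memory (KB)"
"03/01/2010 09:00:00","2.1","498321"
"03/01/2010 09:05:00","3.4","499001"
"03/01/2010 09:10:00","3.6","499512"
'''

def baseline_averages(csv_text):
    """Return {counter_name: average_value} across all sampled rows."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    # Collect each counter column's samples, skipping the timestamp column.
    samples = {name: [] for name in header[1:]}
    for row in reader:
        for name, value in zip(header[1:], row[1:]):
            samples[name].append(float(value))
    return {name: mean(values) for name, values in samples.items()}

averages = baseline_averages(SAMPLE_CSV)
for counter, avg in averages.items():
    print(f"{counter}: {avg:.1f}")
```

A long-enough collection window (covering busy periods like month-end processing, not just quiet hours) is what makes averages like these trustworthy for sizing decisions.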
Ultimately it was discovered that this request was submitted this way to "allow for future growth." Many years ago this may have been standard practice with physical hardware, but in today's virtual environments it no longer makes any sense. Based on the baseline data, the requested virtual machine could be built with 1 vCPU, 3 GB of memory and less than 100 GB of SATA disk space. If it turns out that the server actually needs more resources in the future, those resources can be added very quickly, with minimal or even no downtime. Gone are the days of provisioning everything up front to allow room for future growth, hoping the server makes it to the next refresh cycle, and then repeating the same process all over again. To get full value from a virtual infrastructure, there must be an awareness that this technology fundamentally changes how systems are provisioned.
Thanks for reading!