
Performance of certain applications, such as databases, running in a vSphere-based virtual infrastructure can suffer when the demand for memory grows beyond what is available on the host. vSphere uses sophisticated memory reclamation techniques to acquire and reallocate memory to the VMs that need it. Swapping a virtual machine's memory pages to a swap file located on physical storage media is one such technique. Swapping is known to hurt the performance of the application in the VM, and the degree of impact depends on the I/O performance of the storage media used to host the swap file. Recently, I ran a few experiments to study the possibility of using a solid-state device (SSD) as swap storage in virtualized SQL environments.
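To see why the swap device's read performance matters so much, consider a rough back-of-envelope estimate of the time needed to page a VM's swapped memory back in. The sketch below is illustrative only; the throughput figures and the amount of swapped memory are assumptions, not measurements from these experiments.

```python
# Back-of-envelope estimate of swap-in time for different swap devices.
# All figures below are illustrative assumptions, not measured values.

swapped_mb = 1024  # assume ~1GB of the VM's memory was swapped out

# Assumed sustained random-read throughput (MB/s) per device type.
read_throughput_mb_s = {
    "local SSD": 200,
    "FC 15K RPM (SAN)": 40,
    "local SATA": 15,
}

for device, mb_per_s in read_throughput_mb_s.items():
    seconds = swapped_mb / mb_per_s
    print(f"{device:<18} ~{seconds:6.1f}s to swap in {swapped_mb}MB")
```

Even with generous assumptions, the slower the random reads from the swap device, the longer the workload runs starved of its memory.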

 

Experiments showed that when the workload became active in a VM with a portion of its memory swapped to disk, within 2 minutes the SQL workload reached:

 

  • ~88% of its baseline performance with the VM's swap file on an SSD local to the host


  • ~31% of its baseline performance with the VM's swap file on Fibre Channel disks in a SAN


  • ~17% of its baseline performance with the VM's swap file on SATA disks local to the host


 

Methodology

 

A vSphere host with two VMs was used for the experiments. The host was in a memory-overcommitted state; that is, the total memory allocated to the VMs exceeded what was available for them on the host. SQL Server was allowed to use its own memory management techniques. The workload in each VM was configured to become active and run at random intervals. As the workload became active after the VM was powered on, the SQL data buffer grew, which in turn increased the demand for memory in the guest OS. vSphere attempted to allocate all the memory assigned to the active VM but could not because the host was memory overcommitted. vSphere eventually resorted to memory reclamation techniques to reclaim memory from the other, idle VM and allocate it to the active VM. Thus, a portion of the memory allocated to the idle VM was either ballooned or swapped to disk.
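The overcommitted state follows directly from the configuration listed in the Experimental Setup below: two 4GB VMs on an 8GB host. Here is a minimal sanity-check sketch; the hypervisor and per-VM overhead figure is an assumption.

```python
# Check that the test bed is memory overcommitted: two 4GB VMs on an
# 8GB host. The hypervisor/per-VM overhead figure is an assumption.

host_memory_mb = 8 * 1024                 # Dell PowerEdge 2950, 8GB RAM
overhead_mb = 800                         # assumed ESX + per-VM overhead
vm_allocations_mb = [4 * 1024, 4 * 1024]  # two VMs with 4GB each

available_mb = host_memory_mb - overhead_mb
allocated_mb = sum(vm_allocations_mb)

print(f"allocated to VMs : {allocated_mb}MB")
print(f"usable on host   : {available_mb}MB")
print(f"overcommitted    : {allocated_mb > available_mb}")  # -> True
```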

 

Subsequently, when the idle VM became active, vSphere had to swap in the memory pages that had been swapped to disk earlier; the memory that had been reclaimed through ballooning had to be reallocated as well. To show the impact of host-level swapping alone on application performance, I collected application metrics for the first 2 minutes after the SQL workload became active in the formerly idle VM. The results are presented in the next section.
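The metric reported in the next section is throughput relative to the baseline run. A minimal sketch of that normalization, assuming hypothetical orders-per-second samples (the raw numbers behind Figure 1 are not published here):

```python
# Normalize throughput samples from the first 2 minutes against the
# baseline run (no swapped memory). All sample values are hypothetical.

baseline_ops = 250.0                    # assumed baseline orders/sec
samples_ops = [17.5, 40.0, 62.5, 77.5]  # assumed samples, 30s apart

for t, ops in zip((30, 60, 90, 120), samples_ops):
    print(f"t={t:3d}s: {100.0 * ops / baseline_ops:5.1f}% of baseline")
```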

 

Results

 

The graph in Figure 1 compares the performance of the virtualized SQL workload (DVD Store version 2.0) when different storage media are used for the host swap file: SSD, Fibre Channel (FC), and SATA disks. In each case, before the workload became active, the VM was idle and a portion of its memory had been swapped to the swap storage. The performance of the SQL workload running in a VM whose memory was not swapped serves as the baseline for comparison.

 

Figure 1: Application throughput under memory pressure

 


 

From the graph we can conclude the following:

 

  • With the SSD as the swap destination, vSphere swapped the VM's memory pages back in from the swap file very quickly. In this case, the latency of swapping the pages back in had minimal impact on the performance of the SQL workload: performance remained at ~88% of the baseline throughout the period.


  • With the swap file on Fibre Channel disks in a SAN, reading the swapped memory pages took longer. This affected SQL performance significantly, with performance starting at 7% and ending at 31% of the baseline over the 2-minute period.


  • When a local SATA drive was used as the swap destination, reading the swapped memory pages took the longest of all. SQL performance started at 6% and rose to only 17% of the baseline over the 2-minute period.


 

Conclusion

 

Spikes in memory consumption in a memory-overcommitted scenario can cause vSphere to resort to host-level swapping to reclaim memory from VMs. Host-level swapping is undesirable because of its negative effect on performance. To avoid it, customers have to keep more memory free, which means less or no memory overcommitment. The experiments conducted in our labs show that the performance of virtualized SQL databases is affected far less when vSphere swaps memory pages in from an SSD than from rotational drives. Thus, SQL databases running in a vSphere-based virtual infrastructure with high memory overcommitment can see a significant performance improvement when SSDs are used as swap storage.

 

Experimental Setup

 

Hardware Configuration:

 

  • Dell PowerEdge 2950


  • Intel Xeon CPU 5160 @ 3.00GHz, dual socket, dual core


  • 8GB Memory


 

ESX Configuration:

 

  • VMware ESXi 4.0.0 build-164009


  • Maximum memory that can be reclaimed through ballooning: 1536MB (configurable per VM; see the sketch after this list)
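The balloon ceiling above corresponds to the per-VM sched.mem.maxmemctl option (value in MB). As a rough illustration of how such a VMX option can be set programmatically, here is a minimal pyVmomi sketch; the host name, credentials, and VM name are placeholders, and pyVmomi postdates the ESXi 4.0 setup used in these experiments.

```python
# Minimal pyVmomi sketch: cap balloon reclamation for one VM by setting
# the sched.mem.maxmemctl VMX option (value in MB). Host, credentials,
# and the VM name "idle-vm" are placeholders, not from the experiment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    # Find the VM by name anywhere under the inventory root.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "idle-vm")
    view.DestroyView()

    # 1536MB matches the balloon limit used in these experiments.
    opt = vim.option.OptionValue(key="sched.mem.maxmemctl", value="1536")
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=[opt]))
finally:
    Disconnect(si)
```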


 

VM Configuration:

 

  • Windows Server 2008 (64-bit)


  • Virtual Hardware Version 7


  • 2 vCPUs


  • 4GB Memory


  • SQL Server 2008 (64-bit) with SP2


  • DVD Store Database (20GB version): 45GB (data) + 10GB (log)


 

Swap Drives:

 

  • SSD: 32GB (local)


  • FC: 4 × 146GB, 15K RPM FC drives configured as RAID-0 (SAN)


  • SATA: 2 × 146GB SATA drives configured as RAID-0 (local)


If you attended VMworld 2009 and listened to Steve Herrod's keynote, you would have heard him discuss how to make an ESX cluster (he referred to it as a giant computer) as efficient as possible. The key element in achieving this is the Distributed Resource Scheduler (DRS). DRS uses one of the most powerful features of ESX, VMotion, to balance workloads across datacenters. Steve called DRS a 'matchmaker': it matches the resource requirements of virtual machines with the available hardware resources.

 

If you are interested in knowing more about the experiments and performance results, refer to my blog article on VMware VROOM: Application Performance Improvement with DRS