In my recent whitepaper on Java performance, I used tc Server 2.0 as the deployment platform. The underlying web container for tc Server 2.0 is Tomcat 6.0.24. In the course of my tuning experiments, I gathered some interesting results about the performance benefit of switching to Tomcat’s NIO HTTP connector. These results are not specific to the virtualized environment, and I thought that they would be worth sharing.
The Tomcat HTTP connector provides a connection from clients using the HTTP protocol to the web applications. Tomcat has three different implementations of the HTTP connector: the default blocking Java HTTP connector, the higher-performance Java NIO HTTP connector, and the native APR HTTP connector.
I compared the peak throughput of my application (Olio) for native and virtual configurations with a single CPU using the default HTTP connector and the NIO HTTP connector. The results are summarized in the chart below. Switching from the default HTTP connector to the NIO HTTP connector resulted in a 150% to 200% increase in peak throughput. No other tuning was done to achieve this increase.
Note that Olio is a very network-intensive workload. At peak throughput, the 1-vCPU VM was, on average, receiving 85Mbps and transmitting 161Mbps. The benefit of switching to the NIO HTTP connector would likely be smaller for less network-intensive workloads.
I learned two important lessons from these results. The most obvious was that if you are using Tomcat, you should not be using the default HTTP connector. The NIO HTTP connector can give an impressive performance boost, and the configuration changes needed to enable it are trivial. I did not try the native APR HTTP connector.
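For reference, the switch amounts to changing the `protocol` attribute of the `Connector` element in Tomcat's `conf/server.xml`. A minimal sketch (the port and timeout values here are illustrative defaults, not the ones used in my experiments):

```xml
<!-- Default (blocking) HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" />

<!-- NIO HTTP connector: replace the protocol attribute -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000" />
```

After editing `server.xml`, restart Tomcat for the change to take effect.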
The second lesson was that the performance overheads introduced by virtualization are trivial compared to the potential impact of misconfiguration and poor tuning. The same lesson could be drawn from my experiments on the performance impact of maximum heap size. I also saw improvements from database and file-server tuning that, while less dramatic, also overshadowed the virtualization overhead. While this isn't really a surprise, it is useful to keep in mind when trying to improve the performance of virtualized applications.
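For a plain Tomcat install, heap-size tuning of the kind mentioned above is typically done by setting JVM options in `bin/setenv.sh` (the specific sizes below are placeholders for illustration, not recommendations):

```shell
# bin/setenv.sh -- picked up automatically by catalina.sh at startup.
# Fix the initial and maximum heap to the same (workload-appropriate) size
# to avoid heap resizing during the run; values here are illustrative only.
CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m"
export CATALINA_OPTS
```

tc Server manages JVM options through its own configuration files rather than `setenv.sh`, so consult its documentation for the equivalent setting.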