
The question was raised regarding the previously published benchmarks as to why the NIO HTTP connector for Tomcat was used, the suggestion being that the APR connector is faster since it's native, compiled code. Fair enough--good job making me sweat, Tomcat experts! Now to set up a new test environment, this time on physical boxes (instead of virtual machines like last time).

BTW, a special thanks to Pearson eCollege for providing the hardware, time and support for these tests, and for supporting the RestExpress project--RestExpress is in production at eCollege for many of their high-usage, high-availability RESTful services.

Server

  • 4-Proc Xeon E5620 2.40GHz, 24GB RAM (4-core with 4 hyper-threads per core)
  • Tomcat 6.0.35, configured with the APR connector. Configuration directions here.
  • Java JDK 1.6.0_33
  • RestExpress 0.8.0 (with 64 front-end NIO worker threads--4 per hyper-thread core) and no Executor (back-end) threads.

Client

  • 4-Proc Xeon E5620 2.40GHz, 24GB RAM
  • HttPerf 0.9.0, again configured and compiled according to these directions.

This script was used on the client to run HttPerf repeatably to generate the output files:

#! /bin/bash
#
# Usage: <script> <server-host> <port> <uri> <output-file>

# Truncate the output file and raise the open-file limit for this shell.
cat /dev/null > $4
ulimit -n 65535

# Step the httperf connection rate from 1,000 to 10,000 in increments of 100.
for i in {1000..10000..100}
do
	httperf --timeout=5 --client=0/1 --server=$1 --port=$2 --uri=$3 --rate=$i --num-conns=5000 --num-calls=100 >> $4

	# Count the sockets (mostly in TIME_WAIT) still tied to the server port.
	portCount=`netstat -tna | grep $2 | wc -l`
	echo "Current Port Count:" $portCount

	# If too many ports are in use, wait for connections to bleed off.
	if [ "$portCount" -ge "20000" ]; then
		echo "Pausing 3 minutes for connections to bleed off..."
		sleep 3m
	fi
done
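
A run of that script looks something like the following; the script name, host name and output file name here are purely illustrative, not the exact values used in these runs:

# Illustrative invocation only: <server-host> <port> <uri> <output-file>
./run-httperf.sh test-server 8081 /echo restexpress-results.txt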

Note the values in the 'for' loop (1,000 - 10,000). Since these machines are considerably beefier than the previous test environment, we had to generate considerably more load; 40,000 req/s didn't show any deterioration in performance, response times, etc. Additionally, because each run of HttPerf consumes 5,000 connections/ports, and for whatever reason we didn't seem to get more than 30,000 ports or so available at a time, the script waits three minutes for connections to bleed off once more than 20,000 connections are in use (most likely in the TIME_WAIT state) for the server port under test. I know there's a way to fix this with OS configuration, but figured 30,000 was decent enough for these tests.
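
For the record, the usual way to relieve that TIME_WAIT build-up on Linux looks roughly like the sketch below. These settings were not applied for these tests, and the values shown are illustrative only:

# Illustrative only -- NOT applied during these test runs.
# Widen the ephemeral port range and let the kernel reuse TIME_WAIT sockets sooner.
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=30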

On an additional note, eCollege has done a significant amount of research on which JVM garbage collector to use for some of their high-availability Java applications, and historically the 'ant run' target uses most of those settings for the RestExpress benchmarks. However, since we didn't take the time to determine appropriate memory settings for the server machine (as noted above, these machines have a LOT of memory) and to make sure Tomcat had the same settings, the RestExpress benchmarks were executed with the default JVM settings as follows:

java -jar dist/echo-0.1.jar

So you'll notice some interesting spiking going on--presumably from GC activity.
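
For reference, the kind of explicit heap and collector settings the 'ant run' target applies looks roughly like the sketch below; the exact flags and sizes are illustrative only, and again, the runs charted here used the plain default invocation above:

# Illustrative only -- the benchmark runs above used the default JVM settings.
# Pinning the heap size and using the concurrent collector would smooth out GC pauses.
java -server -Xms2g -Xmx2g \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -jar dist/echo-0.1.jar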

Results

As mentioned above, HttPerf was run repeatedly, making 500,000 requests per test run (5,000 connections x 100 calls per connection), forcing a fair amount of connection reuse. The only variable was the HttPerf --rate parameter, which was incremented by 100 for each run. The results are charted below.

It is interesting to note, if only as an anecdotal sidebar, that running 'top' on the server during test execution showed the RestExpress benchmark utilizing approximately 8-10% of memory while Tomcat only used 0.5% to 1.5%. Tomcat's CPU usage, as reported by top, peaked at 400% (roughly 330% nominally), while RestExpress hit a high of approximately 1200%. At least anecdotally, that matches my mental model of the agility behind RestExpress--and one of the main reasons RestExpress exists. I'm sure there's a way to configure Tomcat to get better utilization though--more threads, perhaps?

Okay. Here are the charts...

Request Rate (req/s)

The rate at which HTTP requests were issued to the server.

Connection Time (ms)

The average time it took to establish a TCP connection.

Reply Time (ms)

How long (in milliseconds) it took for the server to respond to the request.

Connection Rate (conn/s)

The number of connections per second issued by the test (and conversely, handled by the server).

Network I/O (KB/s)

The average amount of network traffic (in kilobytes per second) during the test.
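
The values for each chart were pulled out of the HttPerf output files generated by the script above. A minimal sketch of that extraction, assuming the standard httperf 0.9 summary lines and an output file passed as the first argument, might look like this:

#! /bin/bash
# Hedged sketch, not part of the original tooling: extract the per-run numbers
# charted here from an httperf output file ($1) so they can be graphed.
OUT=$1
grep 'Request rate:'               "$OUT" | awk '{print $3}' > request_rate.dat     # req/s
grep 'Connection time \[ms\]: min' "$OUT" | awk '{print $7}' > connection_time.dat  # avg ms
grep 'Reply time'                  "$OUT" | awk '{print $5}' > reply_time.dat       # response ms
grep 'Connection rate:'            "$OUT" | awk '{print $3}' > connection_rate.dat  # conn/s
grep 'Net I/O:'                    "$OUT" | awk '{print $3}' > net_io.dat           # KB/s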

First Conclusion

After seeing these results, we decided that the APR connector is "so 2005" and decided to re-run these same tests with Tomcat configured with the NIO connector.

The results are much better for Tomcat. It's interesting to note, however, that Tomcat's NIO connector was configured with 200 threads to achieve that throughput, while RestExpress was configured with 64. Given that it takes an hour to run the tests, and that RestExpress is clearly holding up, I decided to forgo further RestExpress optimization...

Request Rate (req/s)

The rate at which HTTP requests were issued to the server.

Re-Conclusion

These tests relied heavily on connection reuse, and RestExpress is the clear winner in requests per second given this configuration, handling significantly more load. In the previous test runs, when connection reuse wasn't a factor (many requests required a new connection), Tomcat fared more comparably, but that use case is less important in the real world since clients should embrace connection reuse whenever possible.

RestExpress proved its claim of performance during these straightforward 'echo' tests, holding its own against the standards-based Tomcat with either the APR or NIO connector. If you're looking for a lightweight framework with ease of scalability, flexibility, simplicity (e.g. no container), increased development speed (for CRUD RESTful services), clarity of code, automatic marshaling (of JSON and XML from your domain objects/DTOs), and easier multi-node rolling deployments in an always-up environment, RestExpress is definitely an option. And it's still my choice--duh!

BTW, you Netty experts may be able to help me tune RestExpress to be even "more faster." Drop me a line.