2018-11-21
Over the years I've refrained from publishing Kore benchmarks.
A framework should be evaluated on other merits, but it seems a lot of people love numbers, and I do understand why.
Don't get me wrong: performance is important, and I have always done performance testing myself without publishing anything about it.
But times change, and I figured I could start sharing some of the numbers I am seeing with Kore. This way others can gain insight into whether or not Kore is going to stand in their way when building performance-critical applications.
So here are some numbers, produced on an older test machine that my $WORK provides for testing. The benchmarks were run with the excellent wrk tool.
The machine used has 18 physical CPU cores. Both Kore and wrk ran on the same host over the loopback interface, each using 8 workers/threads.
The Kore application simply responds with a 200 OK to each request. I did it this way because I wanted to measure Kore's sheer throughput without any application logic in play.
No HTTP pipelining or other tricks were used.
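For reference, such an application is little more than Kore's hello world example with the body removed. A minimal sketch of the handler (the function name page is my choice, not necessarily the exact code that ran here):

#include <kore/kore.h>
#include <kore/http.h>

int	page(struct http_request *);

/*
 * Answer every request with a bare 200 OK and no body, so the
 * benchmark measures Kore's request handling and nothing else.
 */
int
page(struct http_request *req)
{
	http_response(req, 200, NULL, 0);
	return (KORE_RESULT_OK);
}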
Hardware:
os: Linux 3.19.0-generic (Ubuntu 14.04.3 LTS)
cpu: Intel Xeon CPU E5-2699 v3 @ 2.30GHz
Kore configuration:
master commit 2d8874dd2a6322296b274e2fb0d9e38611c9e7d6
compiled with NOTLS=1
workers 8
worker_set_affinity 0
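For completeness, the full configuration would look roughly like the sketch below. The bind address, module name and the domain/static route lines are my assumptions to make the example self-contained; the directive syntax matches the Kore 3.x era and can differ between releases.

# Sketch of a matching kore.conf (assumed, not the exact file used).
bind		127.0.0.1 8888
load		./bench.so

workers			8
worker_set_affinity	0

domain * {
	static	/	page
}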
wrk configuration:
-t 8
-c 400
-d 60s
--latency
Translation: 8 threads, 400 concurrent connections, a 60-second duration, and show the latency table. These settings completely maxed out CPU time for wrk.
The numbers are an average of 10 runs, each 60 seconds in duration. The script below was used to produce the 10 run-<id>.log output files.
#!/bin/sh
for i in `seq 0 9`; do
wrk -c 400 -d 60s -t 8 http://127.0.0.1:8888 --latency > run-${i}.log
done
The average requests per second was calculated using awk:
grep "Request" run-* | awk '{ total += $2; cnt++ } END { printf "%d", total/cnt }'
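Since grep prefixes every match with its filename, the awk program sees the requests-per-second value as its second field. Its input looks like this:

run-0.log:Requests/sec: 1177973.09
run-1.log:Requests/sec: 1209841.41
...
run-9.log:Requests/sec: 1214644.62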
Requests/sec on average: 1,211,391
Below you will find the full output of each run, including the latency distribution.
I am happy with these numbers. There are most certainly frameworks out there that benchmark better.
Hopefully these numbers give some insight into whether or not Kore is going to be the bottleneck of your application. (Hint: very unlikely.)
run-0.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   421.10us    3.01ms  204.07ms   99.89%
    Req/Sec   148.41k    15.97k   187.32k    76.60%
Latency Distribution
50% 291.00us
75% 446.00us
90% 651.00us
99% 1.08ms
70796292 requests in 1.00m, 11.60GB read
Requests/sec: 1177973.09
Transfer/sec: 197.72MB
run-1.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.10us    1.15ms  200.49ms   99.95%
    Req/Sec   152.66k    13.04k   182.92k    81.42%
Latency Distribution
50% 292.00us
75% 427.00us
90% 548.00us
99% 0.88ms
72711125 requests in 1.00m, 11.92GB read
Requests/sec: 1209841.41
Transfer/sec: 203.07MB
run-2.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   335.46us  763.36us  203.27ms   99.95%
    Req/Sec   148.97k    12.61k   174.78k    73.54%
Latency Distribution
50% 307.00us
75% 432.00us
90% 548.00us
99% 848.00us
71116026 requests in 1.00m, 11.66GB read
Requests/sec: 1185038.70
Transfer/sec: 198.90MB
run-3.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   390.46us    2.68ms  218.26ms   99.92%
    Req/Sec   154.10k    20.12k   189.43k    75.23%
Latency Distribution
50% 282.00us
75% 426.00us
90% 586.00us
99% 1.10ms
73330117 requests in 1.00m, 12.02GB read
Requests/sec: 1220133.80
Transfer/sec: 204.80MB
run-4.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   377.51us    2.33ms  221.44ms   99.91%
    Req/Sec   152.89k    15.82k   180.83k    83.62%
Latency Distribution
50% 291.00us
75% 432.00us
90% 598.00us
99% 1.00ms
72668374 requests in 1.00m, 11.91GB read
Requests/sec: 1209140.80
Transfer/sec: 202.95MB
run-5.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   324.66us  844.24us  208.15ms   99.95%
    Req/Sec   157.33k    12.72k   177.70k    88.93%
Latency Distribution
50% 285.00us
75% 410.00us
90% 540.00us
99% 0.91ms
74802213 requests in 1.00m, 12.26GB read
Requests/sec: 1244625.27
Transfer/sec: 208.91MB
run-6.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   384.45us    2.31ms  203.10ms   99.92%
    Req/Sec   151.01k    12.44k   183.82k    76.78%
Latency Distribution
50% 288.00us
75% 432.00us
90% 628.00us
99% 1.08ms
71952093 requests in 1.00m, 11.79GB read
Requests/sec: 1197204.05
Transfer/sec: 200.95MB
run-7.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   339.94us    1.55ms  203.66ms   99.94%
    Req/Sec   153.25k     9.85k   175.35k    87.69%
Latency Distribution
50% 300.00us
75% 421.00us
90% 536.00us
99% 731.00us
73004402 requests in 1.00m, 11.97GB read
Requests/sec: 1214720.03
Transfer/sec: 203.89MB
run-8.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   327.40us    1.02ms  199.96ms   99.95%
    Req/Sec   154.39k    12.45k   183.59k    90.30%
Latency Distribution
50% 289.00us
75% 421.00us
90% 553.00us
99% 794.00us
73217826 requests in 1.00m, 12.00GB read
Requests/sec: 1218267.61
Transfer/sec: 204.48MB
run-9.log
Running 1m test @ http://127.0.0.1:8888
8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   371.84us    2.35ms  218.94ms   99.92%
    Req/Sec   152.79k    11.58k   184.06k    79.78%
Latency Distribution
50% 293.00us
75% 427.00us
90% 577.00us
99% 0.92ms
72999492 requests in 1.00m, 11.97GB read
Requests/sec: 1214644.62
Transfer/sec: 203.87MB