#663 wisp sluggish via apachebench

liamstask Fri 10 Jul 2009

In my continued just-getting-started explorations, I tried firing up the hello.fan web example. It comes up with no problem and serves pages to the browser seemingly quickly. Then I thought I'd fire it up in apachebench (ab) just for fun to see how it performs.

ab -n 1000 127.0.0.1:8080/

results in a super sluggish wisp that takes something like 10-11 seconds per request. Note that the above command does not issue any concurrent requests. This is not a huge issue for me, as I was just playing with ab out of curiosity, but it indicates that something is probably not happy in wisp.
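
For reference, the hello example is just a trivial web module, roughly along the lines of the minimal sketch below. This is my own version, not the actual examples/web/hello.fan, and the WispService field names assume a recent web/wisp API, so they may differ from the 1.0.4x one:

using concurrent
using web
using wisp

// Minimal "hello" web module sketch (not the real hello.fan)
const class HelloMod : WebMod
{
  override Void onGet()
  {
    res.headers["Content-Type"] = "text/plain; charset=utf-8"
    res.out.print("hello world")
  }
}

class Main
{
  static Void main()
  {
    // assumed field names; they may differ between wisp versions
    WispService { it.httpPort = 8080; it.root = HelloMod() }.start
    while (true) { Actor.sleep(1min) }  // block main thread so the service keeps running
  }
}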

brian Sat 11 Jul 2009

Promoted to ticket #663 and assigned to brian

brian Sat 11 Jul 2009

Ticket resolved in 1.0.45

There was a bug in my logic for checking for persistent connections: the check that ensures an HTTP/1.0 connection is not treated as persistent was wrong, so those connections weren't being closed when they should have been. Fixing that check solves this problem:

C:\dev\fan\src>ab -n 1000 http://localhost:8080/

Server Software:        Wisp/1.0.44
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        619 bytes

Concurrency Level:      1
Time taken for tests:   5.252 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      781000 bytes
HTML transferred:       619000 bytes
Requests per second:    190.39 [#/sec] (mean)
Time per request:       5.252 [ms] (mean)
Time per request:       5.252 [ms] (mean, across all concurrent requests)
Transfer rate:          145.21 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   6.4      0     140
Processing:     0    4   6.5      0      16
Waiting:        0    2   4.6      0      16
Total:          0    5   8.4      0     140

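For the record, the rule the check needs to follow is just the standard HTTP persistence default. A simplified sketch of that logic (not wisp's actual code):

class ConnCheck
{
  ** Simplified sketch of the persistent-connection rule (not wisp's actual code).
  ** HTTP/1.1 defaults to persistent unless the client sends "Connection: close";
  ** HTTP/1.0 is persistent only if the client explicitly sends "Connection: keep-alive".
  static Bool isPersistent(Str httpVersion, Str? connHeader)
  {
    c := connHeader?.lower
    if (httpVersion == "HTTP/1.1") return c != "close"
    if (httpVersion == "HTTP/1.0") return c == "keep-alive"
    return false
  }
}
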
Then I tried the more interesting tests with concurrency and keep-alives. The way apache bench works seems fishy to me: it sends HTTP/1.0 requests but with a Connection header, which doesn't seem correct, but I tweaked wisp to make it work. It also doesn't appear to support chunked transfer encoding, so your test handler has to ensure that Content-Length is set on the response (see the sketch after the numbers below). That test on my machine was:

C:\dev\fan\src>ab -n 50000 -c 50 -k http://localhost:8080/

Server Software:        Wisp/1.0.44
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        14 bytes

Concurrency Level:      50
Time taken for tests:   70.745 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    50000
Total transferred:      8801584 bytes
HTML transferred:       700126 bytes
Requests per second:    706.77 [#/sec] (mean)
Time per request:       70.745 [ms] (mean)
Time per request:       1.415 [ms] (mean, across all concurrent requests)
Transfer rate:          121.50 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0     109
Processing:    16   71  14.0     62     218
Waiting:        0   69  13.7     62     218
Total:         16   71  14.1     62     249

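The Content-Length requirement just means buffering the body and setting the header yourself rather than letting the response be chunked. A rough sketch of that kind of handler (illustrative only, not the actual test code):

using web

** Rough sketch of a handler that sets Content-Length explicitly,
** since ab's keep-alive mode can't handle chunked responses.
** Illustrative only, not the actual test code.
const class FixedLenMod : WebMod
{
  override Void onGet()
  {
    // write the body to an in-memory buffer so its byte size is known up front
    body := Buf()
    body.out.print("hello world\n")
    res.headers["Content-Type"]   = "text/plain; charset=utf-8"
    res.headers["Content-Length"] = body.size.toStr
    res.out.writeBuf(body.flip)
  }
}
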
I don't know how that stacks up against other web servers, but it seems pretty decent to me (wisp runs this site).

liamstask Sat 11 Jul 2009

I'd call this fixed for sure. I updated to the latest and greatest and ran the hello.fan example again. Results after a warm-up run on my MacBook Pro:

Server Software:        Wisp/1.0.44
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        14 bytes

Concurrency Level:      50
Time taken for tests:   3.555 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    50000
Total transferred:      8800352 bytes
HTML transferred:       700028 bytes
Requests per second:    14062.75 [#/sec] (mean)
Time per request:       3.555 [ms] (mean)
Time per request:       0.071 [ms] (mean, across all concurrent requests)
Transfer rate:          2417.13 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:     0    4   3.4      3     143
Waiting:        0    3   3.4      3     143
Total:          0    4   3.5      3     145

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      4
  75%      4
  80%      4
  90%      5
  95%      6
  98%      9
  99%     11
 100%    145 (longest request)

It's a simple page, of course, but I think those are pretty damn good numbers. Thanks for the fix!