
HTTP benchmarking tool (Apache – ab tool)


This post covers the following about ab:
- #1 Overview (examples)
- #2 Output (errors) … how to interpret the results/errors
- #3 gnuplot output (-g option) … how to interpret the plot data

#1 Overview

ab utility http://httpd.apache.org/docs/trunk/programs/ab.html

According to the ab documentation: "ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving."

So ab is useful for getting an impression of how an HTTP server behaves. There are several other tools, some arguably better than ab, but ab is widely available and is a good tool for a first overview of a server.

Some use cases (ab is also useful in order to) :

  • tuning / try to hit the limits of an HTTP server : for example, after an unsatisfying test with max parallel connections set to 500, we may discover an issue related to file descriptors, with a low limit set for the user that runs the HTTP server (FD_SETSIZE); see the sketch right after this list.
  • discover leaks : a huge number of requests can make a memory leak on the server evident … or wrong TCP handling (several TCP connections in a wrong state when you run netstat …)
  • try to measure the "ideal box" : check the performance of the server without network entropy (just run ab on localhost)
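
A quick way to check the file descriptor limit (and raise it for the current shell) on the machine that runs the HTTP server, and also on the client where you run ab; a minimal sketch, the value 65535 is only an illustrative target:

# show the current soft limit on open file descriptors for this user
ulimit -n

# raise it for the current shell session only (illustrative value;
# permanent changes usually go in /etc/security/limits.conf on Linux)
ulimit -n 65535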

#1.1 Example to use (TEST SOAP)

ab  -g result.plot -n 10000 -c 400 -p post-soap.data -H "SOAPAction: \"\"" http://<host>:<port>/axis/services/UserProvisiong

A total of 10000 POSTs (with the post data taken from the file post-soap.data), issued with a parallelism of 400 requests at a time.

where :

  • -n : number of requests to perform for the benchmarking session. The default is to perform just a single request, which usually leads to non-representative benchmarking results.
  • -p : file containing the data to POST.
  • -c : number of multiple requests to perform at a time. Default is one request at a time (a minimal GET-only invocation is sketched just below).
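
For a quick first smoke test you don't need POST data at all; a minimal GET-only invocation (the URL here is just a placeholder):

ab -n 1000 -c 50 http://<host>:<port>/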

Notes :

  • in order to check (display) the request/response transactions you can use the -v 4 option (ab -v 4 -g result.plot -n 10000 …)
  • it must be n >= c : ab runs at most c connections at a time. For example, if n == c, ab opens c parallel connections immediately at start and does not open new ones when a connection is closed.
  • the content of the POST file (-p post-soap.data) can be obtained from a TCP dump of a working request (or with the -v 4 option); see the tcpdump sketch after these notes.
  • the header that you need to add (-H "SOAPAction: \"\"") can also be obtained from a TCP dump of a working request.
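
A minimal capture sketch for that, assuming the server of this example listens on port 18080 on the loopback interface (adjust interface and port to your setup):

# print whole packets in ASCII so that the HTTP headers and the SOAP body are readable
tcpdump -i lo -A -s 0 'tcp port 18080'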

#1.2 POST data file ( -p post-soap.data)

Example (in my case) :

<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><ns1:getUserState soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns1="urn:mmprov"><ns1:arg0 xsi:type="soapenc:string" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">giovanni</ns1:arg0></ns1:getUserState></soapenv:Body></soapenv:Envelope>
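
Before benchmarking it is worth verifying that the POST body and the header really produce a valid response; a quick sanity check with curl (the Content-Type value is an assumption, typical for this kind of SOAP 1.1 call):

curl -v -H 'SOAPAction: ""' -H 'Content-Type: text/xml; charset=utf-8' --data-binary @post-soap.data http://<host>:<port>/axis/services/UserProvisiong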

#2 Output

ab  -g result.plot -n 10000 -c 400 -p post-soap.data -H "SOAPAction: \"\"" http://<host>:<port>/axis/services/UserProvisiong
Server Software: Apache-Coyote/1.1
Server Hostname: 127.0.0.1
Server Port: 18080

Document Path: /axis/services/UserProvisiong
Document Length: 387 bytes

Concurrency Level: 400
Time taken for tests: 76.298081 seconds
Complete requests: 10000
Failed requests: 52 <===
(Connect: 0, Length: 52, Exceptions: 0) <===
Write errors: 0
Total transferred: 6008592 bytes
Total POSTed: 7237704
HTML transferred: 3849876 bytes
Requests per second: 131.06 [#/sec] (mean) <===
Time per request: 3051.923 [ms] (mean) <===
Time per request: 7.630 [ms] (mean, across all concurrent requests) <===
Transfer rate: 76.90 [Kbytes/sec] received
                      92.64 kb/s sent
                      169.54 kb/s total

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 131 954.3 0 20998
Processing: 49 1228 3468.8 754 69911
Waiting: -1366713713649 -2147483648 98303465119.9 753 69910
Total: 49 1359 3913.2 755 74423

Percentage of the requests served within a certain time (ms)
50% 755
66% 775
75% 805
80% 821
90% 905 <=== 90% of requests were served within 905 ms
95% 3829
98% 9135
99% 14277
100% 74423 (longest request) <=== all requests served within 74423 ms

What I figured out about Errors :

- Connect : requests failed due to a connection drop. This counter is increased if an error happens (for example) while ab tries to open the socket to the server. It could be related to network issues or to wrong server/client tuning (for example, check ulimit). You have to consider limits on the server as well as on the client where you run ab.

- Receive : requests failed due to a broken read. These errors happen when ab tries to read data from the socket. They could be related to network issues or maybe to application load issues: on the other side of the socket (your server) the connection is suddenly dropped in the middle of the session (connection reset by peer).

- Exception : requests failed due to an exception. It isn't very clear to me: it seems to happen when the socket is in a specific connection state and a read from the server returns an unexpected result… but really it isn't very clear to me.

- Length : requests failed due to an unexpected response length. The Length error counter is increased if a response has a different length from the first response received at the start of the test (which is used as the length reference). For dynamically generated pages, if the document length isn't constant between responses, all differing server answers are counted as errors. So this isn't necessarily an error condition you have to worry about. In other cases, though, it can be evidence of real errors (if your SOAP response, for example, always has the same length, then changes in length can highlight exceptions on the server side that change the response length). Network issues are generally not involved here (beware of the presence of a proxy).

- Non-2xx errors : requests whose response has an invalid or too large header, or a non "HTTP/1.1 2xx OK" status (the HTTP result code doesn't start with 2, like HTTP/1.1 404). This can be evidence of issues on the server due to load (such as 500 Internal Server Error). Network issues are not involved here (beware of the presence of a proxy).

- Write errors : number of broken pipe writes, i.e. errors while ab sends data over the socket (sending the request failed).

Note : some error counters are shown in the output only if their occurrences are > 0.

Useful values :

  • Failed requests: 52 <=== (see the description of errors above)
  • Requests per second: 131.06 [#/sec] (mean) <=== (how the mean values relate to each other is sketched after this list)
  • Time per request: 3051.923 [ms] (mean) <===
  • Time per request: 7.630 [ms] (mean, across all concurrent requests) <===
  • 90% 905 <=== 90% of requests were served within 905 ms
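
A back-of-the-envelope check of how these summary numbers are derived from each other (as far as I can tell, ab computes them from the completed requests, the total test time and the concurrency level):

# Requests per second                      = Complete requests / Time taken
#                                            10000 / 76.298        ≈ 131.06 [#/sec]
# Time per request (across all concurrent) = Time taken * 1000 / Complete requests
#                                            76.298 * 1000 / 10000 ≈ 7.63 ms
# Time per request (mean)                  = the value above * Concurrency Level
#                                            7.63 * 400            ≈ 3052 ms
echo "scale=3; 10000/76.298; 76.298*1000/10000; 76.298*1000/10000*400" | bc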

#3 gnuplot (-g option)

For gnuplot on OSX you need to install gnuplot :-) and XQuartz if you want to have X11 graphical output.
http://www.miscdebris.net/upload/gnuplot-4.2.5-i386.dmg
http://xquartz.macosforge.org/landing/

in order to use graphical windows you need to export :

export GNUTERM=x11

if you don't export GNUTERM you will get this error :

gnuplot: unable to open display
gnuplot: X11 aborted.

Btw, ab generates the file result.plot (-g result.plot) with the following content :

starttime seconds ctime dtime ttime wait
Tue Apr 23 11:33:59 2013 1366709639692461 1 58 59 57
  • starttime : the time at which this request started
  • seconds : starttime expressed as a unix timestamp
  • ctime : connect time -> the time until the client-to-server connection is established and we can send/write our request to the server
  • dtime : processing time -> the time to write the request + the server elaboration time + the time to read the answer. ctime is not included here (dtime = ttime – ctime); dtime is basically network + application time
  • ttime : total time spent on the request, connection plus elaboration (ttime = ctime + dtime)
  • wait : waiting time -> the time ab waits, after the request has been completely sent, before the server sends a response (i.e. before ab starts to read). This is essentially the elaboration time on the server side. (See the awk sketch after this list for quick averages over these columns.)
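
Because the date in the first column takes 5 whitespace-separated fields, wait is field 10, ttime is field 9, dtime is field 8 and ctime is field 7 (which matches the gnuplot commands below). A quick way to get averages out of the file, using the path from this post:

# skip the header line, then average the wait (col 10) and ttime (col 9) columns
awk 'NR > 1 { wait += $10; ttime += $9; n++ } END { print "avg wait:", wait/n, "ms"; print "avg ttime:", ttime/n, "ms" }' /tmp/result.plot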

[Screenshot: gnuplot window plotting the result.plot values]

Generally I prefer to plot the wait and ttime values.

Example : use file with gnuplot

giovanni$ gnuplot

G N U P L O T
Version 4.2 patchlevel 5
last modified Mar 2009
System: Darwin 12.3.0 

gnuplot>

You can use these commands at the gnuplot prompt :

# If you want to save on disk png … or set "export GNUTERM=x11" for X11 screen window
gnuplot> set terminal png
gnuplot> set output "result.plo.png"
gnuplot> set title "Benchmark1"
gnuplot> set size 1,0.7
gnuplot> set xlabel 'requests'
gnuplot> set ylabel 'ms'

# if you want to use autoscale, use "set autoscale xy"
gnuplot> set xrange [0:10000]
gnuplot> set yrange [-100:100]

#example plot all file values
gnuplot> plot "/tmp/result.plot" using 10 smooth sbezier with lines title "wait", "/tmp/result.plot" using 9 smooth sbezier with lines title "ttime", "/tmp/result.plot" using 8 smooth sbezier with lines title "dtime", "/tmp/result.plot" using 7 smooth sbezier with lines title "ctime"

#example plot col related to wait
gnuplot> plot "/tmp/result.plot" using 10 smooth sbezier with lines title "wait"

#example plot two benchmarks results
gnuplot> plot "/tmp/result.plot" using 10 smooth sbezier with lines title "wait.1",  "/tmp/result.plot.2" using 10 smooth sbezier with lines title "wait.2"
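
The same plot can also be produced non-interactively, which is handy when you re-run benchmarks; a small sketch that feeds the commands above to gnuplot through a shell heredoc (file names as used in this post):

gnuplot <<'EOF'
set terminal png
set output "result.plot.png"
set title "Benchmark1"
set xlabel 'requests'
set ylabel 'ms'
set autoscale xy
plot "/tmp/result.plot" using 10 smooth sbezier with lines title "wait", \
     "/tmp/result.plot" using 9 smooth sbezier with lines title "ttime"
EOF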

 



