Canadian Web Hosting Blog and News
17 Oct 2011

How to Tweak Apache for Maximum Load

Apache tuning should be tailored to each server rather than applying the exact same tweaks everywhere, because the content, number of visitors, hosted websites and server resources all differ.

Let's take a look at a recent example: a “hypothetical” server whose configuration was set incorrectly, and the steps a technician took to improve its performance.

1. Testing/pinging the server: The first step is to test the server and get a baseline for speed. When testing a customer’s website, don’t rely on ping alone — use Pingdom (or any similar tool) to run a full page test.

2. Resources - Xeon Quad Core Processor and 2GB RAM: Initially, the server was tweaked such that its performance fell below the client’s requirements. Apache settings were set so high that when the customer ran a stress test, it crashed the server.

This article assumes the reader has a working knowledge of Apache’s configuration directives, so here’s how you can tweak Apache.

1. I’d start by checking the customer’s website to see what kind of content it serves. Is it static (HTML/text/images) or dynamic (PHP/JavaScript)? In this example most of the content is static, so it’s better to set KeepAlive to On. This lets multiple requests from the same visitor be served over a single connection; since static content doesn’t change from request to request, reusing the connection speeds things up. The server was previously set to Off, and we changed it to On with a low KeepAliveTimeout of 3 seconds. So if no further request arrives within 3 seconds, that connection is closed — leaving it open too long would tie up too much memory.
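In httpd.conf, the change described above would look something like this (the values shown are the ones chosen for this example):

```apache
# Reuse one connection for multiple requests from the same visitor
KeepAlive On
# Close an idle keep-alive connection after 3 seconds
KeepAliveTimeout 3
```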

2. Then, using “nice top -d 2 -u nobody -c”, I see the following (nobody being the user Apache runs as):
-----------------------------------------------------
22511 nobody 15 0 83128 6916 3060 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22512 nobody 15 0 82996 5400 1720 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22513 nobody 15 0 83128 6916 3060 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22514 nobody 17 0 82996 5448 1756 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
-----------------------------------------------------------------------------------------

Check the RES column (the sixth field) of each process. It tells you approximately how much RAM each of these processes uses — here it’s roughly 6MB (6000KB). That in turn tells you how far you can raise MaxClients before crashing your server, roughly “MaxClients = TotalServerRAM/ApacheChildProcessRAM”.

Basic math gives MaxClients = 2048/6 ≈ 341. Since memory use varies from process to process, the practical value ranges ±100 depending on the content. We set this to 500 (I tried lower values before reaching this conclusion); when we checked, it had been set to 4000. MaxClients does not indicate the number of visitors able to access the website — it caps how many child processes Apache can spawn. Since each process serves many requests over its lifetime, you don’t need that many, and you also have to respect the server’s RAM limitations.
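The arithmetic above can be sketched in a few lines of shell. The four RES values are copied from the sample top output; on a live server you would collect them yourself (e.g. from top or ps for the Apache user):

```shell
# Average resident memory (KB) of the sampled httpd children.
# The four values below are the RES column from the top output above.
avg_kb=$(printf '6916\n5400\n6916\n5448\n' | awk '{sum+=$1; n++} END {printf "%d", sum/n}')
echo "average child size: ${avg_kb} KB"

# MaxClients ~= TotalServerRAM / per-child RAM (integer MB arithmetic)
max_clients=$((2048 / (avg_kb / 1024)))
echo "suggested MaxClients: ${max_clients}"
```

The result (≈341) is only a starting point; as noted above, real child sizes vary, so the final value is found by testing.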

3. Now, MaxRequestsPerChild tells you how many requests a single Apache process will handle before it exits and is replaced by a fresh one. With roughly 500 child processes (MaxClients) from the math above, this directive defines how many requests each of those 500 processes serves before being recycled.

In our scenario, let’s say the customer wants 5,000 users accessing the website at the same time; we would set it to something like 10000. Keep in mind I’m assuming that each page view generates 1,000 requests (for the different components that make up the site). That works out to 5,000 users being able to visit the website simultaneously.

Number of visitors = (MaxRequestsPerChild * MaxClients)/No. of requests per visit = (10000 * 500)/1000 = 5,000 at any point of time.
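Put together, the relevant httpd.conf fragment for this example might read as follows — a sketch using the values chosen above, assuming the prefork MPM:

```apache
<IfModule prefork.c>
    # ~2048 MB RAM / ~6 MB per child, rounded up with headroom after testing
    MaxClients 500
    # Recycle each child after serving 10,000 requests
    MaxRequestsPerChild 10000
</IfModule>
```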

Now comes the fun part. This is where you see your efforts being rewarded.

Now it’s time to test the website. We first benchmarked the site under its original settings, using only the home page: https://abcompany.ca/index.html — and the load shot up to 180+. Our first thought was, "Did the server actually need more RAM and more processing power (a CPU addition)?" Let’s see how my tweaks came out.

Then we ran the command: “ab -kc 1000 -n 10000 https://abccompany.ca/index.html”. The “ab” tool ships with every Apache installation — it’s the often-overlooked ApacheBench tool. This was executed from another Apache server (we made sure both servers were in the same data center to give optimum results). You can see the results below, with the really important parts annotated.
-------------------------------------------------------------------------------------------------------------------------------
-c number of requests to issue concurrently (kept in flight at the same time)
-k enable HTTP keep-alive for each of these requests
-n total number of requests to perform (-c is not a multiplier: the command below issues 10,000 requests in total, not 1000 × 10000)

Basically, this command keeps 1,000 concurrent connections going until 10,000 requests have completed:

# ab -kc 1000 -n 10000 https://scotiavitality.ca/index.html
This is ApacheBench, Version 2.0.40-dev apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking scotiavitality.ca (be patient)
Completed 1000 requests
Completed 2000 requests
SSL read failed - closing connection
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Finished 10000 requests

Server Software: Apache/2.2.19
Server Hostname: abccompany.ca
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256

Document Path: /index.html
Document Length: 18036 bytes

Concurrency Level: 1000
Time taken for tests: 49.870616 seconds
Complete requests: 10000
Failed requests: 2
(Connect: 0, Length: 1, Exceptions: 1)
Write errors: 0
Keep-Alive requests: 0
Total transferred: 184028760 bytes
HTML transferred: 180374732 bytes
Requests per second: 200.52 [#/sec] (mean) <- the maximum number of requests the Apache server actually serves per second, in spite of what you set in the configuration
Time per request: 4987.062 [ms] (mean)
Time per request: 4.987 [ms] (mean, across all concurrent requests)
Transfer rate: 3603.62 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1197 3905.6 329 41554
Processing: 15 3694 4409.2 2137 40433
Waiting: 3 3527 4425.7 1884 39983
Total: 30 4892 6093.8 2739 47458 <- you can see that some requests completed within 2.7 secs while the slowest took 47.4 secs

Percentage of the requests served within a certain time (ms)
50% 2739
66% 4422
75% 6140
80% 7539
90% 12164
95% 16761
98% 26288
99% 30448
100% 47458 (longest request)

Check out the load averages during the test :
root@ded [~]# w
14:02:30 up 57 days, 21:39, 3 users, load average: 187.71, 54.14, 27.28 <- 187, OMG!!
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 04Aug11 58days 3:25 0.00s -bash
root pts/0 65.39.182.115 12:47 34:16 0.03s 0.03s -bash
root pts/1 65.39.182.115 13:09 0.00s 0.04s 0.02s w
-------------------------------------------------------------------------------------------------------------------------------
The server does feel really stressed out.

Now, as mentioned at the start, I’ve checked the server systematically and applied the tweaks. So let’s run the stress test again to see what we get:
-----------------------------------------------------------------------------------------------------------------------------------
Server Software: Apache/2.2.19
Server Hostname: abcompany.ca
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256

Document Path: /index.html
Document Length: 18036 bytes

Concurrency Level: 1000
Time taken for tests: 35.237605 seconds
Complete requests: 10000
Failed requests: 12
(Connect: 0, Length: 8, Exceptions: 4)
Write errors: 0
Keep-Alive requests: 0
Total transferred: 183185716 bytes
HTML transferred: 179572956 bytes
Requests per second: 283.79 [#/sec] (mean) <- faster processing, which means more requests being served per second
Time per request: 3523.761 [ms] (mean)
Time per request: 3.524 [ms] (mean, across all concurrent requests)
Transfer rate: 5076.74 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1385 2887.9 614 26493
Processing: 17 1460 2653.6 559 24529
Waiting: 2 1046 2312.1 141 22985
Total: 270 2846 4002.3 1327 27068 <- noticeably faster load times

root@ded [/home/scotia1/www]# w
15:18:17 up 57 days, 22:55, 3 users, load average: 15.11, 7.24, 3.03
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 04Aug11 58days 3:25 0.00s -bash
root pts/0 65.39.182.115 12:47 0.00s 0.04s 0.01s w
root pts/1 65.39.182.115 13:09 6:21 8.34s 8.30s htop
-----------------------------------------------------------------------------------------------------------------------------------

This also suggests that the server will eventually need more resources, but for now we’ve maximized throughput with what we have in hand.
