Canadian Web Hosting Blog and News
28Oct/11

Social Media Club Vancouver (@SMCYVR) Pub Night Recap

Last Wednesday evening, I attended the Social Media Club Vancouver Pub Night, a casual and informal mixer with other local social media enthusiasts from all different industries, including software development, business consulting, real estate, internet services (web hosting!), social gaming, health services, graphic design and more. About 20 people showed up and we enjoyed great, engaging conversations about all things social media, from nearby events to Foursquare to Twitter best practices, and of course we all tweeted live that night using the hashtag #SMCYVRPub.

The event itself was hosted at the New Oxford Pub in Yaletown, a casual yet chic pub that serves one pound of wings for only $4 on Wednesdays - what a deal! They even thanked some of the attendees by tweeting at us live during the event.

You can follow both organizations at @SMCYVR and @DonnellyPubs respectively. If you are interested in finding out who runs Social Media Club Vancouver behind the scenes, you can get familiar with some friendly faces on their about section. In attendance that night, I had the chance to meet the following leaders: Kemp Edmonds (President), Cathy Browne (Advisor), Stephanie Michelle Scott (Chat Engineer), Yuri Artibise (Blog Boss) and Gus Fosarolli (Advisor), who left before I had the chance to say hi. Stephanie mentioned that if you're interested in being involved with the club, you can expect an event every third Wednesday of the month.


If you are at all interested in social media and you live around Vancouver, you should definitely check out the upcoming events. It's fun and the people are great. Once again, we emphasize community, and this is definitely a community you should get involved with.

Kevin Liang
CTO / SEO Guru

24Oct/11

Over 20 Everyday Linux Commands

Linux administrators know that they cannot live through the GUI alone and need strong expertise in the command line. Because of that, there isn't a day that goes by that clients don't ask us for common tips and useful commands to help them better manage their servers. This article is meant to give everyone a few tips and tricks for managing their servers through the command line. Working with the command line can be very fast, efficient and infinitely flexible once you know how to navigate your way around.

uname -a: Shows kernel version and system architecture.

df -h: Reports information about space on file systems in human readable format.

free -m: Shows memory usage.

w, who: List all users currently logged into the system.

'who' only shows each user's terminal and the IP/hostname they are connecting from.

'w' also shows the current system time, uptime, load average and the command each user is currently running.

Tip: 'who --boot' shows the time of the last system boot.

last: Generates a listing of recent user logins.

'last reboot': Shows system reboot history.

chmod: Sets permissions on files or directories.

chown: Sets ownership of files or directories.

tail: Displays the last part of a file.

'-f' can be used to monitor messages in log files; for example, tail -f /var/log/exim_mainlog monitors the mail log on cPanel servers.

head: Displays the first part of a file.

pwd: Displays the current working directory.

cd: Changes to another directory.

mv: Moves a file/directory to a different location; can also be used to rename a file/directory.

alias: Can be used to create a new command from a specified command list.

There is no direct command in Linux to list only the sub-directories in a directory, but we can create an alias for it using the following command:

alias dir="ls -al | grep ^d"

It can also be used to create shortcuts for frequently used commands:

alias ll="ls -al"

netstat: Helps monitor connections to the server.

lsof: Finds the files/binaries being used by a process and the process's working directory.

find: Lists files and folders based on a variety of search options.

dig: Has options that allow you to test the DNS for a domain, even if it was just changed (see the example after this list).

ls: Lists directory contents.

top: Shows current processes.

grep: Searches files for lines matching a given pattern.

ps: Shows process status.
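
To tie a few of these together, here is a minimal sketch of a quick server check using some of the commands above; the domain, nameserver and log path are hypothetical placeholders, so adjust them for your own server.

# Kernel, disk, memory and logged-in users at a glance
uname -a
df -h
free -m
w

# Test DNS for a domain directly against a specific nameserver,
# useful right after a DNS change (ns1.example.com and example.com
# are placeholders)
dig @ns1.example.com example.com A +short

# Follow the mail log live on a cPanel server (Ctrl+C to stop)
tail -f /var/log/exim_mainlog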

18Oct/11

Five Tips for New Small Business Owners

This morning, I stopped by the Get Your Business Online event to check it out with other small business owners and interested parties. The event was held all morning in one of the conference rooms at the beautiful Fairmont Hotel in Vancouver. There was a nice spread of pastries, including croissants and muffins, along with coffee and tea. We had a packed house with a couple hundred attendees from the area.

Here are some basic takeaways for small business owners to grasp:

1. Your customers are online
This is the main reason why you need a strong presence online. Per the title of the event, the sponsors are urging all small business owners to start creating and owning a presence online, because customers are becoming more and more savvy about comparing products and services online before making purchases. Without a strong online presence, you might miss out on important opportunities, and competitors with a more strategic online base will win in the long run.

2. Think mobile
According to their stats, in less than 5 years more than 50% of Canadians will be using mobile phones. As you build your sites and online presence, keep in mind that they will need to be optimized for smartphones and/or tablets. That makes it much easier for customers to pull up your pages when they're on the go. When a site is not optimized, the user has to zoom in, which takes time, detracts from the user experience and sometimes causes frustration.

3. Don’t overlook geo-location tools
Depending on your service, you'll need to consider having an online presence based on location. For example, if you're a florist who delivers within a certain number of kilometres of a given location, you might want to list yourself in different online directories like Google Places or Yellow Pages so that when customers search for you, you'll show up in their results.

4. Use effective online ads for advertising
One of the ways to reach your market is to buy online ads. When crafting an ad, you want to make sure that you're specific and that you include a call to action. The speaker, a business analyst lead from Google, mentioned that far too often he'd see his own customers writing poor ads that would lead to very poor results. Being concise, catchy and specific are important components of effective online advertising.

5. Get social
Along with owning your site as a virtual storefront, don't underestimate the power of social media. Twitter and Facebook are free and powerful ways to reach out to your communities. You can listen to the conversations happening all around you about your own products or services, and use social media as a more interactive way to connect with current or prospective clientele. You can do many things, from promoting special deals and locating new clients to building new relationships that make the shopping experience more human.

These are some ideas to get you started. As you're starting or expanding your online presence, you can always reach out to us to reserve domains or host your sites - just leave us a comment anytime! You can also visit us at Canadian Web Hosting to get started.

Kevin Liang
CTO / SEO Guru

17Oct/11

How to Tweak Apache for Maximum Load

Tuning should be tailored to each server rather than applying the exact same tweaks across different servers, because the content, number of visitors, websites and server resources are all different.

Let's take a look at a recent example: a "hypothetical" server configuration that was set incorrectly, and the steps a technician took to improve performance on the server.

1. Testing/pinging the server: The first step is to ping the server and get a baseline idea of speed. When testing a customer's website, though, don't rely on ping alone; use Pingdom (or any similar tool) to run a full page test.
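
If you prefer the command line to a hosted tool, a rough equivalent is timing a full page fetch with curl; this is a minimal sketch and the URL is a placeholder:

# Time a full page download; -o /dev/null discards the body and -s
# silences the progress meter
curl -o /dev/null -s -w 'DNS: %{time_namelookup}s connect: %{time_connect}s total: %{time_total}s\n' https://example.com/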

2. Resources - Xeon Quad Core Processor and 2GB RAM: Initially, the server was tweaked, but its performance was below the client's requirements. Apache settings were set so high that when the customer ran a stress test, it crashed the server.

This article will assume that the reader has a fair amount of knowledge of Apache variables, so here's how you can tweak Apache.

1. I'd start by checking the customer's website to see what kind of code is used. Is the content static (html/htm/text/images) or dynamic (PHP/JavaScript)? In this example, most of the content is static, so it's better to set KeepAlive to On. What this actually does is allow multiple requests from the same visitor to be served over the same connection. Being static, the content does not change from visitor to visitor, so this helps speed things up. The server was previously set to Off, and we changed it to On with a low KeepAliveTimeout of 3 seconds. If the same client does not send another request within 3 seconds, that connection is closed; leaving it open too long would use too much memory.
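
In httpd.conf, the relevant directives would look something like this sketch; the timeout comes from the example above, while MaxKeepAliveRequests is left at a typical default and is an assumption, not part of the original tuning:

# httpd.conf - keep-alive tuning for mostly static content
KeepAlive On
KeepAliveTimeout 3
MaxKeepAliveRequests 100   # assumed typical cap per persistent connection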

2. Then, using "nice top -d 2 -u nobody -c", I see the following (nobody being the user Apache runs as):
-----------------------------------------------------
22511 nobody 15 0 83128 6916 3060 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22512 nobody 15 0 82996 5400 1720 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22513 nobody 15 0 83128 6916 3060 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
22514 nobody 17 0 82996 5448 1756 S 0.0 0.3 0:00.00 /usr/local/apache/bin/httpd -k restart -DSSL
-----------------------------------------------------------------------------------------

Check the RES (resident memory) column - the sixth field, 6916 in the first line above. This tells you approximately how much RAM each of these processes takes: here, approximately 6MB (6,000KB). That in turn tells you how far you can increase MaxClients before crashing your server, via "MaxClients = TotalServerRAM / ApacheChildProcessRAM".

Basic math tells us MaxClients = 2048/6 ≈ 340 (rounded). Since the memory used varies between processes, the value could range +/- 100 depending on the content. We set this to 500 (I tried lower values before reaching this conclusion); when we checked, it had been set to 4000. MaxClients does not indicate the number of visitors able to access the website - it tells you how many child processes Apache can spawn. Since each of these processes serves many requests over its lifetime, you do not need that many, and you also need to consider the RAM limitations of the server.
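
If you'd rather not eyeball top, a quick way to approximate the average per-child memory is to average the resident set size (RSS) of all Apache processes; a minimal sketch, assuming the binary is named httpd as on cPanel servers:

# Average resident memory of Apache child processes, in MB
ps -o rss= -C httpd | awk '{ sum += $1; n++ } END { if (n) printf "%d procs, avg %.1f MB\n", n, sum/n/1024 }'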

3. Now, MaxRequestsPerChild tells you how many requests a single Apache process can handle before it is recycled and a new process is created. From the math above you have 500 child processes (MaxClients); this defines how many requests each of those 500 processes can serve.

In our scenario, let's say the customer wants 5,000 users accessing the website at the same time; we would set it to something like 10000. Keep in mind that I am assuming that when a visitor accesses a page, roughly 1,000 requests are generated (for the different components that make up the site). That works out to a total of 5,000 users being able to visit the website simultaneously.

Number of visitors = (MaxRequestsPerChild * MaxClients) / No. of requests per visit = (10000 * 500) / 1000 = 5,000 at any point in time.
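
Putting the two values together, the relevant httpd.conf section for the Apache 2.2 prefork MPM would look roughly like the sketch below; only MaxClients and MaxRequestsPerChild come from the math above, and the other values are illustrative assumptions:

<IfModule prefork.c>
    StartServers          8      # assumed
    MinSpareServers       5      # assumed
    MaxSpareServers      20      # assumed
    ServerLimit         500      # must be at least MaxClients
    MaxClients          500      # from the RAM math above
    MaxRequestsPerChild 10000    # from the visitor math above
</IfModule>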

Now comes the fun part. This is where you see your efforts being rewarded.

Now it's time to test the website. As a baseline, we benchmarked using only the home page, https://abccompany.ca/index.html, and CPU shot up to 180+. Our first thought was, "Did the server actually need more RAM and more processing power (a CPU addition)?" Let's see how the tweaks turn out.

Then we ran the command "ab -kc 1000 -n 10000 https://abccompany.ca/index.html". The ab tool ships with all Apache installations; it's the lesser-known ApacheBench tool. This was executed from another Apache server (we made sure both servers were in the same DC for optimum results). You can see the results below, with the really important parts called out.
-------------------------------------------------------------------------------------------------------------------------------
-c Number of simultaneous requests generated in one go (since one request for a web page generates multiple ones for each component on the page)
-k keeps keep-alive on for each of these requests
-n actual number of requests we intend to generate (-c is the concurrency level, not a multiplier; the command does not mean 1000x10000 requests)

Basically this command sends out 10,000 requests, 1,000 of them running concurrently at any given time:

# ab -kc 1000 -n 10000 https://abccompany.ca/index.html
This is ApacheBench, Version 2.0.40-dev apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking abccompany.ca (be patient)
Completed 1000 requests
Completed 2000 requests
SSL read failed - closing connection
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Finished 10000 requests

Server Software: Apache/2.2.19
Server Hostname: abccompany.ca
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256

Document Path: /index.html
Document Length: 18036 bytes

Concurrency Level: 1000
Time taken for tests: 49.870616 seconds
Complete requests: 10000
Failed requests: 2
(Connect: 0, Length: 1, Exceptions: 1)
Write errors: 0
Keep-Alive requests: 0
Total transferred: 184028760 bytes
HTML transferred: 180374732 bytes
Requests per second: 200.52 [#/sec] (mean) < - the max number of requests per second the Apache server actually serves, regardless of what you set in the configuration
Time per request: 4987.062 [ms] (mean)
Time per request: 4.987 [ms] (mean, across all concurrent requests)
Transfer rate: 3603.62 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1197 3905.6 329 41554
Processing: 15 3694 4409.2 2137 40433
Waiting: 3 3527 4425.7 1884 39983
Total: 30 4892 6093.8 2739 47458 < - you can see that some requests completed within 2.7 secs while the slowest took 47.4 secs

Percentage of the requests served within a certain time (ms)
50% 2739
66% 4422
75% 6140
80% 7539
90% 12164
95% 16761
98% 26288
99% 30448
100% 47458 (longest request)

Check out the load averages during the test :
root@ded [~]# w
14:02:30 up 57 days, 21:39, 3 users, load average: 187.71, 54.14, 27.28 < - 187 OMG !!
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 04Aug11 58days 3:25 0.00s -bash
root pts/0 65.39.182.115 12:47 34:16 0.03s 0.03s -bash
root pts/1 65.39.182.115 13:09 0.00s 0.04s 0.02s w
-------------------------------------------------------------------------------------------------------------------------------
The server does feel really stressed out.

Now, as mentioned at the start, I checked the server systematically and applied the tweaks. So let's run the stress test again to see what we get:
-----------------------------------------------------------------------------------------------------------------------------------
Server Software: Apache/2.2.19
Server Hostname: abccompany.ca
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256

Document Path: /index.html
Document Length: 18036 bytes

Concurrency Level: 1000
Time taken for tests: 35.237605 seconds
Complete requests: 10000
Failed requests: 12
(Connect: 0, Length: 8, Exceptions: 4)
Write errors: 0
Keep-Alive requests: 0
Total transferred: 183185716 bytes
HTML transferred: 179572956 bytes
Requests per second: 283.79 [#/sec] (mean) < - faster processing, meaning more requests served per second
Time per request: 3523.761 [ms] (mean)
Time per request: 3.524 [ms] (mean, across all concurrent requests)
Transfer rate: 5076.74 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1385 2887.9 614 26493
Processing: 17 1460 2653.6 559 24529
Waiting: 2 1046 2312.1 141 22985
Total: 270 2846 4002.3 1327 27068 < - noticeably faster load times

root@ded [/home/abccompany/www]# w
15:18:17 up 57 days, 22:55, 3 users, load average: 15.11, 7.24, 3.03
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 04Aug11 58days 3:25 0.00s -bash
root pts/0 65.39.182.115 12:47 0.00s 0.04s 0.01s w
root pts/1 65.39.182.115 13:09 6:21 8.34s 8.30s htop
-----------------------------------------------------------------------------------------------------------------------------------

Incidentally, this means the server is eventually going to need more resources, but for now we've maximized throughput with what we have on hand.

11Oct/11

The Performance Bottleneck – What Do I Do?

Fine tuning a server and a website for maximum performance is not as easy as one might assume. Each day, we get questions from our web hosting customers about why their server seems a bit slow, or why website load times don't seem optimized. More importantly, when we tell customers it is time to upgrade because their resources are maxed out, they usually look at us and ask why they need to upgrade. The purpose of this article is to overview some of the common trouble areas for improving server performance, and to take a look at some other potential (and less common) areas where problems can hide. Looking at these together, one can usually identify where the performance breakdown is occurring and make a better judgment about what needs to be done to improve performance.

Memory
When in doubt, add more memory. This is a common response when a customer's server memory allocation is being completely utilized, but is this always the right answer? A lot of times when we talk to our web hosting customers about this, we need to take a deeper look at their application. Oftentimes memory issues are actually the result of a separate problem, i.e. memory leaks from poorly designed software or system flaws that manifest themselves as "memory" errors. This is something we actually saw with an enterprise application we were using that would oftentimes demand more memory when it was clear that memory was not the issue. So, while adding more RAM is a solution, we also need to look at the root cause of the symptom to ensure that the added expense is needed.
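
Before buying RAM, it helps to confirm the pressure is real; a minimal sketch using standard tools (a process whose memory climbs steadily without leveling off is only a leak candidate, not proof):

# Overall picture - buffers/cache are reclaimable, so read the
# -/+ buffers/cache line rather than raw "used"
free -m

# Sample memory/swap activity every 5 seconds; sustained swap-in and
# swap-out (the si/so columns) suggest genuine memory pressure
vmstat 5

# Top memory consumers by resident size
ps -eo pid,rss,comm --sort=-rss | head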

CPU
Wikipedia describes the CPU as the "portion of a computer system that carries out the instructions of a computer program, and its role is somewhat analogous to the brain." While server processors like Intel's Xeon chips calculate an amazing number of instructions per second, there is still a physical limit, and performance suffers when the operations being processed exceed that capacity. As an example, when the CPU is operating at greater than 75%, the entire system will slow down. The reason is that the CPU needs headroom to "burst", where the processing load reaches 100% for short periods of time.
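
To check whether a server is actually CPU-bound, a minimal sketch with standard tools (mpstat is part of the sysstat package and may need installing):

# Load averages over 1, 5 and 15 minutes; sustained load above the
# core count indicates saturation
uptime

# CPU breakdown in 5-second samples, 3 samples total; high %idle means
# the CPU is fine, while high %iowait points at the disks instead
mpstat 5 3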

Storage
Disk I/O - what is it? Servers come with vast storage capabilities and potential configurations to meet different types of server requirements, for example database servers versus application servers. Because of this, when a server is built, several different storage factors need to be accounted for. Disk speed, RAID type, storage type and controller technology all play a significant role in what is known as Disk I/O. Regardless of the combination, there are physical limits on how much data can be pushed through the server, even when using top-of-the-line components. Because of this, it is important that we work with our web hosting customers to design the storage capabilities around the function of the server. Using my example above, a database server is going to need significantly more Disk I/O than a web application server, so we would modify the RAID and drive types to improve that parameter.
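
A quick, hedged sketch for spotting a Disk I/O bottleneck (iostat also ships with sysstat; sda is a placeholder device name):

# Extended device stats every 5 seconds; %util near 100 means the disk
# is saturated, and high await means requests are queuing
iostat -x 5

# Narrow to a single device once you know where the data lives
iostat -x 5 sda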

Network
We oftentimes get questions about whether there are network issues or other network-related problems that may be causing customer site issues. More than likely the answer is no, and here is why. The first reason is that our network delivers a 100% service level, meaning it is up and running 100% of the time or you get your money back. Outside of that, the other issues that could potentially cause problems are bad switch ports, bad cables, router configuration issues or a network card that needs replacing. One of these occurs less than once a year.

Malware
What does malware have to do with your server? Just like a desktop or laptop, viruses and spyware can create a significant reduction in your server's performance by using your available resources to do things that, in most cases, you are not even aware of. To help customers with that, we deploy significant resources to combat malware, including regular scans and code updates. In addition, we are deploying a new service called "Stop the Hacker" that utilizes new technologies to help end users safeguard their servers and enhance the security, health status and reputation of their sites.

Applications
Try as I might, one area that our customers don't want to hear about is applications. Usually when I mention that the performance issues they are seeing might be application-related, they tell me, "No, it's worked perfectly in the past. It's your server." Oftentimes the performance issues lie within the application code itself: developers don't take the time to structure the application for performance and don't optimize the code to run on the web. Nine times out of ten, the only way to fix this is to get somebody into the code to make the required updates. We maintain an internal development team that works full time updating and optimizing the code behind our customers' applications, but there can be less expensive alternatives, like looking for open source alternatives, implementing a proof of concept before deploying a production site, or asking your host for a test server to try your application. If you are interested in learning more about this topic, there is a great article I found (though a bit technical) that talks about some of the things that can be done to improve your application's performance.