Vanilla Forum Software

I’m thinking about getting some forums going for UrbanPug.com, and I’m leaning toward Vanilla. It looks pretty good, it’s about due for a 1.0 release, and it seems to have a promising community along with plugin support.

Serving Javascript and Images — Fast

Recently, I posted an article mainly about optimizing Apache for low memory usage. There, I noted that webservers like thttpd and lighttpd are really good at serving things fast. I’ve been trying to optimize a site I’m playing with, and I’ve done a bit of analysis and work on using an alternative webserver.

Lighttpd wasn’t immediately available in Debian’s apt repositories, so I went with thttpd.

The site I’m playing with has lots of images, so I took my own advice and deployed thttpd to serve them up, and while I was at it, I moved all the CSS and JavaScript over, too. I’m using Scriptaculous, which requires downloading a large amount of JavaScript.
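
For reference, a minimal thttpd invocation for a setup like this might look like the following; the port, document root, and log path are placeholders for whatever your setup actually uses:

# serve static files from a separate document root, on its own port
thttpd -p 8000 -d /var/www/images -u www-data -l /var/log/thttpd.log

You’d then point an images hostname (or just the alternate port) at thttpd and reference it from your HTML.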

A few notes from implementing this. First, thttpd serves up the images a LOT faster, using almost no RAM; top barely even notices that it’s running. Second, not clogging the Apache processes with image requests frees more of them for serving users.

Third, thttpd doesn’t support output compression, so I moved the JavaScript files back to Apache, where they can be compressed with mod_deflate. Lighttpd *does* support output compression, PHP, URL rewriting, virtual hosts, and pretty much everything else I’d want to have. It really looks like an amazing product, and I’m going to have to give it a try to see if it lives up to the hype I’m giving it.
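
For what it’s worth, compressing the JavaScript with mod_deflate only takes a line or so; something like this (the exact JavaScript MIME type varies, application/x-javascript being the common one at the moment):

# compress text-based responses; images are already compressed
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript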

Oh, and in the end, I got the initial page load (across the internet, cache cleared) down from just over 8 seconds to 3 seconds using mod_deflate for the JavaScript and thttpd for the images. Subsequent page loads take about 1-2 seconds.

Optimizing MySQL and Apache for Low Memory Usage, Part 1

MySQL and Apache can consume quite a bit of memory if you’re not careful. This post discusses how to reduce the amount of memory they use without killing performance. The caveat, of course, is that you’re not going to be able to run a site with a large database and a large amount of traffic on these settings. I’m going to try to explain the WHY more than the WHAT. All of this is in conjunction with my goal of reducing the amount of RAM I use on my Xen-based virtual server, as discussed previously in Low Memory Computing.

Before I begin, I’d like to say that you should also look at the various system utilities that consume RAM. Services like FTP and SMTP can and should be passed off to xinetd. Also, you should look at shells besides bash, such as dash. And if you’re really serious about low memory, you might look at something like BusyBox, which brings you into the realm of real embedded systems. Personally, I just want to get as much as I can out of a standard Linux distribution. If I need more horsepower, I want to be able to move to bigger, faster virtual machines and/or dedicated servers. For now, optimizing a small virtual machine will do.

First off, Apache. My first piece of advice: if you can avoid running it at all, do. Lighttpd and thttpd are both very good no-frills webservers, and you can run lighttpd with PHP. Even if you’re running a high-volume site, you can seriously gain some performance by passing off static content (images and JavaScript files, usually) to a lightweight, super-fast HTTP server such as lighttpd.

The biggest problem with Apache is the amount of RAM it uses. I’ll discuss the following techniques for speeding up Apache and lowering the RAM it uses:

  • Loading Fewer Modules
  • Handle Fewer Simultaneous Requests
  • Recycle Apache Processes
  • Use KeepAlives, but not for too long
  • Lower your timeout
  • Log less
  • Don’t Resolve Hostnames
  • Don’t use .htaccess

Loading Fewer Modules

First things first: get rid of unnecessary modules. Look through your config files and see what modules you’re loading. Are you using CGI? Perl? If you’re not using a module, by all means, don’t load it. That will save you some RAM, but the BIGGEST impact is in how Apache handles multiple requests.
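
On Debian, each module is pulled in by a LoadModule line (managed via symlinks in /etc/apache2/mods-enabled/, which a2enmod and a2dismod toggle). Disabling one is just a matter of commenting out or removing its line; for example, assuming you don’t need CGI:

# comment out (or a2dismod) modules you don't use; paths vary by distribution
#LoadModule cgi_module /usr/lib/apache2/modules/mod_cgi.so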

Handle Fewer Simultaneous Requests

The more processes Apache is allowed to run, the more simultaneous requests it can serve. As you increase that number, you increase the amount of RAM that Apache will take. Looking at top would suggest that each Apache process takes up quite a bit of RAM, but there are a lot of shared libraries in play, so you can run some processes; you just can’t run a lot. With Debian 3.1 and Apache2, the following lines are the default:

StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 20
MaxRequestsPerChild 0

I haven’t found much documentation on this, but prefork.c seems to be the MPM that’s loaded to handle things with Apache2 on Debian 3.1 (there’s a quick way to check, shown after the next paragraph). Other MPMs may or may not be more memory efficient, but I’m not digging that deep yet. I’d like to know more, though, so post a comment and let me know. Anyway, the settings that have worked for me are:

StartServers 1
MinSpareServers 1
MaxSpareServers 5
MaxClients 5
MaxRequestsPerChild 300

What I’m basically saying is, “set the maximum number of requests this server can handle at any one time to 5.” This is pretty low, and I wouldn’t try it on a high-volume server. However, there is something you can and should do on your webservers to get the most out of them, whether you’re going for low memory or not: tweak the KeepAlive timeout.
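
As promised, a quick way to see which MPM your Apache binary was actually built with is to list its compiled-in modules:

apache2 -l

On Debian, prefork.c showing up in that output should mean you’re on the prefork MPM, which is what the directives above belong to.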

Recycle Apache Processes

If you noticed, I changed the MaxRequestsPerChild variable from 0 to 300. This variable tells Apache how many requests a given child process can handle before it should be killed. You want to recycle processes, because different page requests will allocate more memory. If a script allocates a lot of memory, the Apache process under which it runs holds onto that memory and won’t let it go. If you’re bumping up against the memory limit of your system, this can cause unnecessary swapping. Different people use different settings here; how to set this is probably a function of the traffic you receive and the nature of your site. Use your brain on this one.

Use KeepAlives, but not for too long

Keepalives are a way to have a persistent connection between a browser and a server. Originally, HTTP was envisioned as being “stateless.” Prior to keepalives, every image, JavaScript file, frame, etc. on your pages had to be requested over a separate connection to the server. When keepalives came into wide use with HTTP/1.1, web browsers were able to keep a connection to a server open in order to transfer multiple files across that same connection. Fewer connections, less overhead, more performance. There’s one thing wrong, though: Apache, by default, keeps connections open a bit too long. The default seems to be 15 seconds, but you can get by easily with 2 or 3 seconds.

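The relevant directives, with the timeout dropped to 3 seconds (KeepAlive and MaxKeepAliveRequests shown at their defaults), look something like:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
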
This is saying, “when a browser stops requesting files, wait X seconds before terminating the connection.” If your visitors are on decent connections, 3 seconds is more than enough time for the browser to make additional requests. The only reason I can think of for setting a higher KeepAliveTimeout is to keep a connection open for the NEXT page request; that is, the user downloads a page, it renders completely, and then they click another link. A timeout of 15 would be appropriate for a site where people click from page to page very often. If you’re running a low-volume site where people click, read, click, etc., you probably don’t have that pattern. You’re essentially taking one or more Apache processes and saying, “for the next 15 seconds, don’t listen to anyone but this one guy, who may or may not actually ask for anything.” The server is optimizing for one case at the expense of all the other people who are hopefully hitting your site.

Lower Your Timeout

Also, just in case, since you’re limiting the number of processes, you don’t want one to be “stuck” in a long timeout, so I suggest you lower your “normal” Timeout variable as well.
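
Something like this; the Apache default is a very generous 300 seconds, and the exact value is a judgment call:

# default is 300; a stuck client shouldn't hold a process that long
Timeout 30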

Log Less

If you’re trying to maximize performance, you can definitely log less. Modules such as mod_rewrite will log debugging info. If you don’t need the debugging info, get rid of it. The rewrite log is set with the RewriteLog directive. Also, if you don’t care about looking at certain statistics, you can choose not to log certain things, like the User-Agent or the Referer. I like seeing those, but it’s up to you.
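
For example, to keep rewrite logging silent and switch to the short “common” log format (which drops Referer and User-Agent), something along these lines would do; the log path is whatever yours happens to be:

# 0 disables rewrite debug logging (and is the default; just don't raise it)
RewriteLogLevel 0
# "common" is just: host, identity, user, time, request, status, bytes
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/apache2/access.log common
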
Don’t Resolve Hostnames

This one’s easy. Don’t do reverse DNS lookups inside Apache. I can’t think of a good reason to do it; any self-respecting log parser can do this offline, in the background.

HostnameLookups Off

Don’t Use .htaccess

You’ve probably seen the AllowOverride None directive. This says, “don’t look for .htaccess files.” Using .htaccess causes Apache to 1) look for files frequently and 2) parse the .htaccess file for each request. If you need per-directory changes, make the changes inside your main Apache configuration file, not in .htaccess.
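
With that set, per-directory tweaks go in <Directory> blocks in the main config instead; something like:

<Directory /var/www/>
    # don't look for .htaccess files under the document root
    AllowOverride None
</Directory>
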
Well, that’s it for Part 1. I’ll be back soon with Part 2, where I’ll talk about MySQL optimization and possibly a few other things that crop up.

Credits:

I’d like to give credit to a few articles that were helpful in putting this information together. I’m not a master at this; I’m just trying to compile lots of bits and pieces into one place.

Is Slashdot Irrelevant? (Digg vs. Slashdot)

I’ve been a podcast listener for quite a while now. Not since the beginning, but pretty darn close. In the past several months, I’ve been listening to Diggnation pretty religiously, and a couple weeks ago I finally started to actually use Digg. For those of you who don’t already know, Digg.com is a social tech news website, much like Slashdot. However, with Digg, all stories are submitted by users, and all stories are voted on and promoted to the “front page” of Digg by the users. This has two consequences: first, there’s a larger volume of stories on Digg; second, Slashdot’s postings lag significantly behind by comparison.

Because every post on Slashdot is approved by an editor, a story has to be submitted, reviewed, etc., before it can go onto the front page. Slashdot believes there should be an editor. That’s fine. However, one of the reasons I loved Slashdot was that it was one of the places I could go to see news days before the mainstream media picked up certain stories. I felt “in the know” by using the site. Recently, though, Slashdot has been getting scooped by Digg pretty much constantly.

Since I’ve been reading Digg, I continually have the sensation of “not finding anything new” on Slashdot. Well, at least not anything new that is *interesting* to me. What does this tell me? The crowd at Digg.com is pretty damn good at picking out stories that are interesting to me. It also tells me that while both sites are pretty much “covering all the bases,” Digg’s userbase finds things faster and promotes them faster, giving me more timely news.

In addition, I think Digg has fewer dupes, or duplicate postings. The editors of Slashdot are almost infamous for posting things that they’ve already posted. I don’t know whether this is because they lack editorial communication or are just forgetful, but it happens an awful lot. With Digg, the astute users almost always notice and don’t promote duplicate postings. There are even built-in mechanisms for finding duplicates: users can mark stories as duplicates, and when you submit a story, Digg searches its database to show you similar articles, helping you make sure you’re not posting a dupe.

The one salvation for Slashdot is, for better or for worse, its community. It’s been reported recently that Digg has more pageviews than Slashdot, but I think Slashdot has a much higher number of comments per post, sparking more discussion.

In conclusion, if I want my news faster, I go to Digg. If I want a second opinion or a sanity check on a piece of news, I wait for it to show up on Slashdot.

Note: I do not advocate forming opinions based solely on those of the Slashdot readership. That would be silly.

Star Trek Cribs

This has got to be one of the funniest commercials I’ve seen in a while.

I’m not sure exactly what G4 is doing with “Star Trek 2.0”, but the commercials they’re doing are pretty damn good.