Improving Server Capacity
Probably the most important question about a webserver, from the customer's point of view, is "how many users can my server handle?". While this is a complicated question, with an answer that depends on a lot of variables, there are certainly several tried and true methods of increasing the number of concurrent requests that your server can handle. This can mean better performance for users, as well as reducing your hosting costs (by reducing your hardware requirements, or delaying the need for an upgrade).
This guide is centred around web hosting, as it is the most common form of service. However, many of the concepts can be applied to other situations, as required. Note that hardware upgrades and load-balancing aren't discussed in this article; while they're a legitimate means of improving the capacity of a website, I've limited my list of suggestions to what can be implemented by a developer or sysadmin without requiring more money. If these suggestions aren't enough, though... well, then it's time to get out the chequebook (or, these days, the online banking password, I guess).
Putting Apache on a diet
The first limiting factor in webserver performance tends to be running out of memory. With a large enough number of concurrent requests, the Apache processes handling the incoming requests start to use more memory between them than the server has available. The moment this happens, you're practically dead in the water -- you'll hit swap, take longer to process each request, the requests will keep coming in, and the smoke starts pouring out of your hard drives as they melt down.
So, if you want to handle more simultaneous users, you need to either have more memory, or have each Apache process consume less memory. This latter option is surprisingly easy to achieve in many cases, and can provide good results.
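The usual back-of-the-envelope arithmetic for "how many workers fit in RAM" can be sketched like this (all the figures — 2GB of RAM, 512MB reserved, 50MB per process — are illustrative assumptions, not measurements from any particular server):

```python
# Back-of-the-envelope estimate of how many Apache workers fit in memory.
# All figures are illustrative assumptions; measure your own processes with
# ps/top before changing anything.

def max_workers(total_ram_mb, reserved_mb, per_process_mb):
    """Workers that fit after reserving memory for the OS and other services."""
    return (total_ram_mb - reserved_mb) // per_process_mb

# A 2GB server, keeping 512MB aside for the OS and other daemons,
# with Apache processes weighing in at 50MB each:
print(max_workers(2048, 512, 50))  # -> 30
```

If the real number of concurrent requests regularly exceeds that figure, you're headed for swap — which is exactly the scenario described above.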
First off, remove any modules that you aren't using. This takes a bit of knowledge of which modules your site actually needs, but a bit of experimentation in staging can answer a lot of those questions quickly. I'd like to say that the results are dramatic -- halving the size of your workers, or some such -- but since all the modules you'll remove are ones you weren't using anyway, the actual RAM savings are fairly modest. However, if you've got 250 Apache processes, each consuming 50MB, and you can make them drop 1MB each, that's room for another five workers, practically for free.
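As an illustration (a hypothetical httpd.conf excerpt — the exact module names, paths, and what you can safely remove vary by distribution and by site), the process is simply commenting out LoadModule lines:

```apache
# List the currently loaded modules first:  apachectl -M
# Then comment out the ones your site doesn't use, for example:
#LoadModule status_module modules/mod_status.so
#LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule userdir_module modules/mod_userdir.so

# ...while keeping the ones you depend on:
LoadModule rewrite_module modules/mod_rewrite.so
```

Test each change in staging and restart Apache between changes — a module you thought was unused has a way of announcing itself in the error log.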
If you're not using PHP or any other modules that don't get along with multithreaded code, you can switch to using the "worker" MPM; this will reduce the number of Apache children needed to serve content, which reduces your memory needs significantly.
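A starting point for the worker MPM might look like the following (these happen to be the stock defaults shipped with Apache 2.2, shown here for illustration rather than as a recommendation for any particular workload; note that on Apache 2.0/2.2 the MPM is selected at build or package time, not at runtime):

```apache
# worker MPM: a handful of processes, many threads each
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
```

With 25 threads per child, 150 concurrent clients need only six processes — compare that to 150 separate prefork children.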
Other Apache tuning tips
Some quick bits to tune, to improve Apache performance and utilisation of child processes:
KeepAliveTimeout 2 (or 15 if you use the worker MPM)
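In context, that tip sits alongside the other KeepAlive directives in httpd.conf. A sketch (the surrounding values are Apache's defaults, shown for context): a short timeout frees up prefork children quickly, while the threaded worker MPM can afford to hold connections open longer:

```apache
KeepAlive On
MaxKeepAliveRequests 100

# Prefork: release the child quickly so it can serve someone else
KeepAliveTimeout 2

# With the worker MPM a longer timeout (e.g. 15) is affordable, since an
# idle keepalive connection only ties up a thread, not a whole process
```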
Optimising your application
The quicker you can complete a dynamic request, the quicker the webserver can move on to the next one -- this reduces the number of Apache workers you need, and hence memory usage. This article can't provide much specific advice on the topic, and we're sysadmins, not developers, so we're not the experts anyway. The Internet shall provide.
One thing we can recommend, though, is a PHP precompiler/optimiser; these are a bit of a hassle to install, but can improve your PHP application's performance quite a bit.
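By way of example, here is what enabling an opcode cache looks like in php.ini. The directive names assume PHP's bundled OPcache (PHP 5.5 and later); older installations used APC or eAccelerator, which have different directives, so treat this as a sketch rather than a drop-in config:

```ini
; php.ini -- enable the opcode cache so scripts are compiled once,
; not on every request
opcache.enable=1
opcache.memory_consumption=64
opcache.max_accelerated_files=4000

; In production you can skip re-checking file timestamps entirely, at the
; cost of needing a cache reset whenever code is deployed:
;opcache.validate_timestamps=0
```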
Serve your static files separately
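One common arrangement (a sketch only — the paths, port, and choice of nginx are illustrative assumptions, not something this article prescribes) is a lightweight server answering for static files directly and passing everything else through to Apache on another port:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # The lightweight server handles static files itself, cheaply...
    location /static/ {
        root /var/www/example;
    }

    # ...and forwards dynamic requests to Apache listening on 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The point is that a 50MB Apache child never gets tied up shovelling a 10KB image down a slow client's connection.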
Let browsers cache things
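A minimal sketch of this with Apache's mod_expires (assuming the module is loaded; the lifetimes here are arbitrary examples — pick values that match how often your assets actually change):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    # Static assets rarely change; let browsers hold onto them
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```

Every request a browser doesn't make is a request your server doesn't have to serve.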
Use a caching reverse proxy
A reverse proxy intercepts requests to your application and serves them very quickly from its in-memory cache, rather than needing to bother your webserver. Since all the proxy has to do is find a bunch of bits and throw them out the network interface, it's faster than any webserver could ever be, but it really comes into its own if you've implemented reasonable cache control policies in your application -- they help the proxy cache things appropriately.
However, if you can't do cache headers properly in your app, you can also implement some interesting policies in the proxy itself. For instance, a customer of ours who runs an online store can't predict when their cached content will need to be refreshed, so their proxy caches pages indefinitely (while telling browsers and upstream caches that the pages can't be cached at all), and they've implemented a mechanism whereby whenever the data behind a page changes, the proxy is told to delete its copy of the page and refetch it from the server. Sure, it's a bit more work, but it saved them having to buy another server for 12 months, which isn't bad for two hours' work.
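The delete-on-change mechanism described above can be sketched in a caching proxy that supports purging — here in Varnish's VCL (version 4 syntax; the ACL and the idea of the application sending an HTTP PURGE request are illustrative assumptions, since the article doesn't name the software the customer used):

```vcl
# Only the application servers are allowed to purge
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Drop our cached copy; the next request refetches from the backend
        return (purge);
    }
}
```

The application then fires off a PURGE request for the affected URL whenever the underlying data changes.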
There's a number of things you can do to improve the capacity of your server, from configuring Apache differently to serving your static files separately and using a frontend proxy to reduce the number of HTTP requests that need to be handled by the webserver itself. If you're an Anchor customer, feel free to raise a support ticket if you would like advice on whether any of these suggestions would be appropriate for your site, or if you need help implementing these on your Anchor dedicated server or VPS.
Hunting The Performance Wumpus -- a detailed guide to tracking down the source of performance problems in your server.