Tuning PostgreSQL on your Dedicated Server

PostgreSQL is a widely used open-source SQL database server. In contrast to the other popular option, MySQL, it is generally considered to be more featureful and robust, at the expense of greater configuration complexity and slightly lower performance. At Anchor, we prefer PostgreSQL.

In its default configuration, PostgreSQL is certainly not the fastest database on modern hardware, but this is only because its default configuration is designed to "work everywhere" -- which can also be taken to mean "doesn't work real fast anywhere".

Thankfully, it only takes a few minutes of tweaking to make your PostgreSQL server perform like a champion.

This guide refers mostly to tuning a database for a large-scale environment on a dedicated server, but the same principles apply to more lightly-loaded systems; the tuning parameters just aren't quite so critical there. If you want your PostgreSQL server to run faster, regardless of the query load it's servicing, this guide is for you. If you're an Anchor shared hosting customer, we've already done the hard yards of tuning the database for you, and all you should need to do is tune your database queries for maximum performance.

Before You Start

Before you start down the path to running queries at the speed of dark, there are a few things to keep in mind.

Firstly, it's important to know your database's workload -- is it very read-heavy, with very few writes, or are there regular writes going on? Remember to take into account "hidden" writes that might be going on behind your back. Keeping logs in the database, or session data, for instance, can often be overlooked when considering the workload of your database, but a couple of write queries with every request can turn a read-heavy workload into a write-heavy workload very quickly.

In related news, knowing how much disk space your database is taking up is a useful metric, and trivial to obtain -- a du -s on the data/base directory of your pgsql installation will do the trick. Although it doesn't have a direct bearing on any of the tuning parameters, it can give you some idea of how much physical hardware you'll need.
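For example, on a typical Red Hat-style system (the path here is an assumption -- it varies between distributions and PostgreSQL versions, so adjust it to suit your install):

du -sh /var/lib/pgsql/data/base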

The other major issue that impacts database performance is sharing the physical hardware with some other task. While multitasking is a wonderful thing, there are limits to how hard you can push your physical hardware. Sharing the available resources with a web server or some other system reduces the amount of resources you have available to your database server, and you should at least be aware of how much of the system's resources are consumed by those other services when tuning your database server, lest you overcommit your resources and everything goes down the toilet at the speed of swap.

The Right Hardware

A database is one of the heaviest users of hardware commonly deployed. Lots of memory is important, but fast disks matter even more, especially for write-heavy databases. Money spent on good quality, high-speed disks (and associated optimisations, like battery-backed write caches and solid-state disks) can have a dramatic impact on the performance and capacity of a heavily-used database server.

If you're an Anchor customer, let us know if you're going to be running a database when you place your order, and we'll work with you to make sure your machine is properly specced to handle your needs.


Memory

The easiest answer to the question "how much memory do I need?" is "how much can you afford?" or "how much can your motherboard support?". With enough memory, your database will eventually be entirely cached, which makes for pretty good performance.

At a minimum, I'd recommend having memory equal to the amount of space your database occupies on disk, with some extra for future growth. Remember that if you've got other services on the machine, you'll need to allow enough memory for them as well as the memory you budget for the database server. The benefits of more memory decrease past that point, so buying more RAM has a reduced cost/benefit. However, your database is almost certain to grow over time, so don't underspec. You'll probably be planning on keeping this machine for several years; you don't want to have to schedule a maintenance window in a year's time to add more RAM because you under-ordered at the beginning.

Hard Drives

If you have a database that has any sort of reasonable write load, or that stores too much data to fit into memory, you need smokingly fast disks. More than any other element of your hardware selection, disk selection will determine whether your database is a greyhound or a chihuahua.

15k RPM SAS disks are where it's at these days. High speed disk controllers attached to fast data buses are a must, too. If you've got any sort of write load, consider a battery-backed write cache, which boosts performance by allowing writes to "complete" safely before they're actually on disk. If you have a need for even better write performance, then consider putting the write-ahead logs (a journal of all the changes that is written to immediately, before the write is fully written to the "main" database storage) onto a separate disk set -- possibly solid-state disks, or one of those neat battery-backed RAM disks. Note, however, that your write-ahead log must be persistent in the event of a crash -- it is not cached data that can be lost without incident. A corrupted or missing write-ahead log is as bad as corrupting the main database files themselves.

Because fast disk storage is crucial to many areas of server performance, we've got a whole article dedicated to getting the most out of your disks.


CPUs

In all honesty, having the fastest possible CPUs, as opposed to any run-of-the-mill modern CPU, will make little to no difference to the performance of your system. PostgreSQL works very well with multiple CPUs or multi-core CPUs, so picking those over single-core units is handy (although finding single-core CPUs in server-grade chassis these days is getting pretty hard).

One place where faster CPUs might help is if you're doing a lot of in-database calculations in your queries when you've got a relatively small database; in that case, the CPU costs of your queries start to dominate over the IO, and a faster CPU might make a difference. This is a relatively rare case, though, so if you're tossing up between the fastest possible CPU and the fastest possible disks, the safe bet is on getting the cheaper CPU and the better disks.

System Configuration

Operating System

On any modern Linux distribution, your system will be fairly well tuned for whatever workload you want to throw at it. Tuning your filesystems for the best possible IO performance is probably the best thing to do here, but really the benefits are fairly small when compared against some of the other things that can be done.
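As a rough illustration, one common tweak is to mount the filesystem holding your database with noatime, so that reads don't generate a stream of access-time updates. The device, mount point and filesystem type in this /etc/fstab entry are placeholders for your own setup:

/dev/sdb1   /var/lib/pgsql   ext4   defaults,noatime   0 2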


PostgreSQL Configuration

This is where most of your initial work should be done -- telling PostgreSQL how best to use the system it's on. See, when you design a default configuration, you want something that is going to "work" on the largest selection of systems, and that goes directly counter to the idea of making things run fast. Configuring PostgreSQL to use a lot of system resources will cause it to run horribly slowly -- or even not at all -- on resource-constrained systems. Hence, the default configuration of PostgreSQL really does run poorly on any reasonably-sized server system.

There are a large number of parameters available for PostgreSQL, many of them quite arcane. This is one of the reasons it has gotten a reputation for being complicated, but thankfully most of them can be ignored for basic tuning purposes. In fact, many of the options have perfectly sensible defaults, and you should never need to change them except in extreme circumstances.

However, there are a few options that definitely do need adjusting, and these are what we're going to be focusing on. Each of the sections below talks about one parameter, what it should be set to, and what it does for you.

A note on "available memory"

Several of the parameters we want to adjust are calculated in terms of "available memory". If the machine is dedicated to running the database, then available memory will be the physical memory of the machine. However, if you're sharing the machine with other services like a web server or active mail server, you should subtract the amount of memory that these services use from the physical memory of the system to get your available memory figure.


shared_buffers

  • Units: 8kB pages
  • Default: sweet bugger all
  • Recommended: 0.25 * Available Memory

PostgreSQL works by having a separate process for each incoming database connection. While this makes coding the thing a bit easier, it does mean that there's no default "shared memory" space like you get with threads. Instead, PostgreSQL has a large "shared memory pool" that all of the server processes attach to in order to keep cached data in. It is this pool of memory that provides a lot of the performance gain in your system; if it's too small, then the "working set" of data that is frequently re-accessed can't all be in there at once, and the system will spend a lot of its time re-reading stuff off disk and recalculating things.

Since working out exactly how big your working set might be isn't practical, and caching more data makes more queries run fast anyway, we just calculate the value of shared_buffers from available memory -- if you've got your available memory figure in megabytes, just multiply it by 32 to get the number of 8kB pages to set for shared_buffers.
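As a worked example, say you have 8GB (8192MB) of available memory: 8192 * 32 = 262144, which is 2GB worth of 8kB pages (the 8GB figure is just an assumption for illustration):

shared_buffers = 262144   # 2GB, assuming 8GB of available memory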

NOTE: There are kernel parameters that need to be adjusted to allow large values of shared_buffers to work, since the kernel limits how much shared memory can be allocated (to prevent accidents). Work out how much memory you're going to allocate to shared_buffers, double it, then set kernel.shmmax to that figure in bytes and kernel.shmall to the equivalent number of system pages (normally 4kB each) -- either by echoing the values into /proc/sys/kernel/shmmax and /proc/sys/kernel/shmall, or via sysctl -- and persist the settings across reboots in /etc/sysctl.conf.
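As a sketch, assuming the 2GB shared_buffers figure from the example above (so a 4GB shared memory allowance, and the usual 4kB system page size), the entries in /etc/sysctl.conf would look something like this:

kernel.shmmax = 4294967296   # bytes; double the 2GB given to shared_buffers
kernel.shmall = 1048576      # 4kB system pages; 4GB / 4kB

Run sysctl -p to apply the file immediately, without waiting for a reboot.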


work_mem

  • Units: kB
  • Default: 1MB
  • Recommended: See Text

Each PostgreSQL process that's handling an incoming connection needs some private work space of its own to scribble down intermediate results (like the result of table joins and rows that need to be sorted), in addition to the shared space we specified with shared_buffers. The work_mem parameter is what we adjust to set how much memory each of these operations can use.

The problem with this parameter is that it can very easily kill your database server if you set it wrong. Each "unit of work" (such as a sort or join) that is running simultaneously can use up to this much memory, and a query with several such "parts" counts as several units of work. So a single complicated query can use several times the work_mem setting, and that gets multiplied by the number of queries running at once. Too much memory allocated means you go into swap, and as soon as you touch swap you're dead in the water, performance wise. This is one of the major settings that the default config of PostgreSQL is so pessimistic about (1MB? What is this, 1995?)

If you've got a fairly "simple" query profile (not so many joins and sorts going on), then set this value to Available Memory / max_connections. If your queries tend to be more complicated, then divide that by 2. If you typically run very close to max_connections connections, then consider dividing by 2 again, for paranoia's sake. If that gives you a number that isn't at least 16MB, buy more memory.
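As a worked example (the figures here are assumptions for illustration), with 8GB of available memory, max_connections set to 200, and a reasonably complicated query mix, 8192MB / 200 / 2 gives roughly 20MB per unit of work:

work_mem = 20480   # kB; 8GB available / 200 connections, halved for complex queries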


maintenance_work_mem

  • Units: kB
  • Default: 16MB
  • Recommended: Available Memory / 8

As the name suggests, this is the equivalent of work_mem, but for "maintenance" type queries -- VACUUM, CREATE INDEX, and so on. Since these queries aren't run very often, will hopefully be run when query load is light, and typically need a lot of space to execute, setting this nice and large will help make sure that your maintenance queries finish as quickly as they can and don't hold up other, more important, things.
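Continuing the 8GB example, one eighth of available memory is 1GB:

maintenance_work_mem = 1048576   # kB; 8GB of available memory / 8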


wal_buffers

  • Units: 8kB pages
  • Default: 64kB
  • Recommended: 8MB

This setting controls how much space might be used to cache write-ahead log entries in a heavily-loaded scenario. The default is pitifully small, and has problems on modern SMP systems. Experts Agree that 8MB is a good value, which is good enough for me.
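In postgresql.conf, that looks like the following (8MB is 1024 of the 8kB pages; recent PostgreSQL versions also accept the value written as 8MB directly):

wal_buffers = 1024   # 8MB worth of 8kB pages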


checkpoint_segments

  • Units: None
  • Default: 3
  • Recommended: 16 to 128

The write-ahead log (WAL) keeps a record of changes that need to be applied to the main database files. It's quicker to append a log of what needs to be changed to the WAL than to grub through the database itself on every write. Then, when there's a lull in queries, the database engine goes through the WAL and writes out the changes that have accumulated to the main database store, keeping everything tidy.

If the WAL gets full, though, before the background writer has a chance to write things out, then the database basically has to stop doing anything while the WAL gets dealt with. This is a massive performance hit, and something you want to avoid.

The checkpoint_segments parameter basically says how many 16MB files will be used to store the WAL. You must have at least (checkpoint_segments * 2 + 1) * 16MB worth of disk space available to store the WAL, otherwise Bad Things will most definitely happen.
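For example (the value below is just an illustration; tune it to your write load and spare disk space):

checkpoint_segments = 32   # (32 * 2 + 1) * 16MB = a bit over 1GB of WAL on disk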


effective_cache_size

  • Units: 8kB pages
  • Default: 65536
  • Recommended: Available Memory * 0.75

The PostgreSQL query planner is a very clever (and complicated) beast. It takes all sorts of estimates about how long various things take to do and crunches them around to work out how best to execute a query -- "do I use an index, scan the table, or eviscerate a goat and read its entrails?".

Most of the planner's parameters aren't things you want to be fiddling around with until you've got a solid idea of how the query planner works, but effective_cache_size is fairly straightforward to set, and helps the query planner do its job. It's an estimate of how much memory will be available for caching data -- PostgreSQL's own shared buffers plus the operating system's disk cache -- and it doesn't actually allocate anything. The recommended value above is a reasonable setting, all things considered.
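Sticking with the 8GB example from earlier, three quarters of available memory is 6GB:

effective_cache_size = 786432   # 6GB as 8kB pages, assuming 8GB of available memory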

CPU Operation Costs

These variables tell the query planner roughly how expensive it should consider various CPU operations to be, and it uses those costs to decide which way of executing a query will be cheapest. The defaults were set waaay back when CPUs were quite a bit slower, so setting these to something lower is a no-brainer:

  • cpu_tuple_cost = 0.0030
  • cpu_index_tuple_cost = 0.0010
  • cpu_operator_cost = 0.0005

These days, CPUs aren't really getting any faster at processing individual instructions, so with a bit of luck these values won't need to be revised for a loooong time.

Settings for import

All of these settings are great for a general-purpose query load, and should be used for most general-purpose operations. However, there is one situation in which a radically different tuning can do wonders for your database -- when you're importing your monster database for the first time.

The trouble with your standard tune is similar to that of the PostgreSQL defaults -- it's great for what it's intended for (servicing a lot of simultaneous users running general-purpose data retrieval queries and some writes), but is sub-optimal when faced with a different reality. An import is just a gigantic pile of data writing, with no reads, a few maintenance queries (index builds and the like), and only one user, so adjusting things to handle that particular case can make a dramatic difference to the speed of your import.

Note that these are not suitable settings for day-to-day database operation; in fact, running your database like this normally will almost certainly result in data loss, pestilence, dogs laying with cats, and that sort of thing. Keep a note of the values you previously set, so you can go back to them after your import is done.

fsync = off
shared_buffers = [1/3 of available memory]
wal_buffers = 1MB
checkpoint_timeout = 1h
checkpoint_segments = 300  # Have a *lot* of spare disk space for this
maintenance_work_mem = [1/3 of available memory]

One final note on imports -- run a VACUUM ANALYZE over your database after your import is finished, to update the internal statistics. If you don't do this, then the query planner won't know when it's better to use indexes, and your queries are guaranteed to run dog-slow. Been there, done that, wasted lots of time optimizing queries that didn't need to be optimized.
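For example, from the shell (substitute your own database name):

psql -d yourdatabase -c 'VACUUM ANALYZE'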

Query Tuning

Once you've got your database singing, you can take a look at optimizing individual queries. First, turn on slow query logging so that you know which queries are running slowly (the value is the minimum duration, in milliseconds, a statement must run for before it gets logged):

log_min_duration_statement = 100

Then check the logs for the queries that are taking a long time, and apply the lessons in our query optimisation tutorial.

Further Reading