Redis Rethought: Exciting Extremes with Larger-Than-Memory Datasets

April 4, 2013 · Technical

In a recent post we talked about Redeye, a way to manage a cluster of Redis shards. Sharding is important for pure storage capacity, but also for managing availability and performance, seeing as Redis can be CPU bound in a sufficiently large environment.

This time we’re tackling the other side of the equation, simply being able to hold all your data without dropping it.

You’ll recall that Redis is purely an in-memory datastore. This makes it amazingly fast, but volatile. To get around this, you can have persistence across restarts using RDB and AOF files. The catch is that you’re ultimately limited by the amount of RAM you have available – if you have 500GB of data to store then you need 500GB of RAM, either in a single server or spread across a cluster of Redis instances.

NDS changes that.

The truth is that you don’t need all that data in RAM all the time. In many cases only a small proportion of data in Redis is heavily used, or “hot”. Facing the wastage of managing hundreds of large Redis instances with low levels of real utilisation, we developed the Naive Disk Store (NDS).

Our chief support manager and sometimes-codecutter, Matt Palmer, approached the problem by thinking about things from the other end: rather than leaving Redis to its own devices and trying to sort out a suitable persistence backend, we chose the on-disk backend first and then put Redis in front as a caching layer. Experienced readers will note that this is roughly how memcached is used – the difference is that we get to keep the best bits of Redis: its high performance, broad adoption, and rich set of data types.

It’s important that we clear up just who we had in mind when creating this:
NDS is intended for really big Redis instances with a relatively small set of active keys. Now instead of needing enough RAM to hold all your data, you only need enough to hold your hot data.

The reality of this for us is that we can provision Redis instances with a 256MB or 512MB memory limit instead of 16GB. The only thing we need to worry about now is CPU, not how much RAM each new server holds.

Okay, so let’s say that NDS is what you need – you probably want to know what’s under the hood. Put simply, NDS is Redis backed onto a Kyoto Cabinet database. We chose Kyoto Cabinet because of its tried-and-tested dbm semantics and high performance.
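
If you haven’t met Kyoto Cabinet before, those dbm semantics amount to a persistent hash table in a file. Here’s a minimal sketch using the kyotocabinet Python binding (NDS itself talks to Kyoto Cabinet from C inside Redis; the filename and values below are just for illustration):

    # A quick look at Kyoto Cabinet's dbm-style semantics via the
    # kyotocabinet Python binding. Illustrative only: NDS uses the C API.
    from kyotocabinet import DB

    db = DB()
    # ".kch" selects a file hash database; OWRITER|OCREATE opens it
    # read-write and creates the file if it doesn't already exist.
    if not db.open("example.kch", DB.OWRITER | DB.OCREATE):
        raise RuntimeError("open failed: " + str(db.error()))

    db.set("user:42", "some serialised value")   # write a record to disk
    print(db.get("user:42"))                     # read it back
    db.remove("user:42")                         # delete it

    db.close()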

In doing this we needed to balance data durability with performance. Your datastore might have web-scale performance, but that usually doesn’t count for much if the data evaporates when the server reboots. We think we’ve found the right compromise: modified keys are updated in Redis as normal, and appended to an in-memory list of dirty keys. Dirty keys are synced to Kyoto Cabinet periodically, controlled by the same logic as RDB dumps. In the event of a crash, you only lose updates since the last flush.

Heavily updated keys reap the greatest benefit, thanks to a confluence of two factors. Because writes to disk (Kyoto Cabinet) are asynchronous, they don’t impede Redis’ usual high-speed operation. In addition, a dirty key is only flagged once between flushes. A million updates to a single key would create a million entries in an AOF log file; in contrast, NDS records a single entry in the list of dirty keys and incurs the I/O cost of just one update when flushed.
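
To make that write path concrete, here’s a toy sketch of the dirty-key batching idea in Python. It isn’t NDS’s actual code (that lives in the Redis C source, and the flush there is driven by the RDB save schedule); it just shows why a million updates to one key cost a single disk write at flush time:

    # Toy model of the NDS write path: updates hit memory immediately and
    # only mark the key dirty; the disk (Kyoto Cabinet) is touched at
    # flush time. An illustrative sketch, not the real C implementation.
    from kyotocabinet import DB

    keyspace = {}   # stands in for Redis's in-memory dataset
    dirty = set()   # keys modified since the last flush

    disk = DB()
    disk.open("nds-sketch.kch", DB.OWRITER | DB.OCREATE)

    def write(key, value):
        """Handle a client write: memory is updated now, disk is deferred."""
        keyspace[key] = value
        dirty.add(key)   # adding the same key repeatedly is a no-op

    def flush():
        """Periodic flush, analogous to the RDB save schedule."""
        for key in dirty:
            disk.set(key, keyspace[key])   # one disk write per dirty key
        dirty.clear()

    # A million updates to one key leave exactly one entry in the dirty
    # set, so the flush below performs a single Kyoto Cabinet write --
    # unlike an AOF, which would have logged all million operations.
    for n in range(1_000_000):
        write("counter", str(n))

    print(len(dirty))   # -> 1
    flush()
    disk.close()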

If you’ve been using Redis as a persistent data store you’ve probably been running without setting the maxmemory parameter – not any more. Redis normally uses the memory limit as a signal to evict keys, which means discarding data; not something you usually want to do! With NDS, evicting a key no longer means losing the data, as it’s still kept on disk. Data is only discarded if explicitly requested by the DEL command.
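
For reference, capping memory looks the same as it always has; the directives below are standard Redis settings (shown here via redis-py, with example values and an example eviction policy). The difference under NDS is what eviction means: on stock Redis the evicted data is gone, whereas NDS can read it back from disk. The NDS-specific directives are covered in the README.

    # Capping Redis memory with the standard maxmemory directives, via
    # redis-py. These settings exist in stock Redis; the values and the
    # eviction policy here are just examples.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Equivalent to "maxmemory 256mb" / "maxmemory-policy allkeys-lru"
    # in redis.conf.
    r.config_set("maxmemory", "256mb")
    r.config_set("maxmemory-policy", "allkeys-lru")

    print(r.config_get("maxmemory"))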

NDS also has some interesting side benefits owing to the way it works:

  • Nearly instant startup: Because NDS is disk-backed, there’s no need to load the dataset into memory by reading a large RDB or AOF file before serving clients. Redis is effectively a cold cache until requests start coming in (and warms up quickly if you enable preloading).
  • Disk-efficient, near-real-time persistence: Disk space usage isn’t significantly more than an RDB file’s, and you never have to trigger an AOF rewrite or a complete RDB dump – yet you still get up-to-date persistence.

This isn’t without tradeoffs, but we think the compromises are well balanced.

  • Performance will be poor immediately after startup as every request needs to be serviced from disk while the cache warms up. This isn’t too awful, as Redis would normally be out of service while it reads its RDB or AOF file anyway.
  • If you regularly access most of your keyspace then performance will still be awful. Sorry, you’re back to buying more RAM for your servers.

At the end of the day, all we really want is a high-performance datastore for our customers that can store lots of data safely. NDS gives us that, without the overheads of juggling Redis’ enormous dump files. Backing up the Kyoto Cabinet files is easy too: the on-disk state is consistent whenever the database isn’t in use, and all access to it is internal to NDS, so there are no surprises.

You want to try out NDS

Happily for you, we’ve got code! Matt has published his work on GitHub; you’ll need to check out the nds-2.6 or nds-unstable branch. Dependencies are straightforward, so it should be a simple matter of running make and firing up the resulting executable. The README is well worth a read for understanding the necessary config directives.

NDS is very much experimental at the moment, so you shouldn’t use it for any data that’s irreplaceable. That said, we’re giving it a thorough workout and it’s going to make a big difference for how we manage the Redis instances in our care. We’d definitely welcome any feedback (and bug reports) that you’ve got. 🙂

Are you a bad enough dude or dudette to hack on Redis and save the president? We’re hiring.
