database Archives - AWS Managed Services by Anchor

Extending PostgreSQL with high level languages (and cats)

By | Technical | No Comments

In a recent post we extolled the virtues of creating your own brand new operators in PostgreSQL. SELECT =^_^= FROM happycats; That’s well and good, but the output was a little lacklustre, returning “meow” for every tuple. We’d like to make it more interesting, and one way to add interesting functionality to Postgres is to embed a procedural language. This lets you juggle data with a little more finesse when it comes to certain operations, compared to the usual relational algebra. We’re going to use Perl because it’s easy to integrate with Postgres, and is generally a quick and dirty way to Get Stuff Done. When embedded in Postgres it’s referred to as PL/Perl. Let’s get started. We begin by “installing” the language into the database in which we wish…
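The setup step the excerpt trails off on can be sketched as follows. This is illustrative, not the post's actual code: `random_meow` is a hypothetical function, and `happycats` is borrowed from the earlier operator example.

```sql
-- "Install" PL/Perl into the current database (PostgreSQL 9.1+;
-- older releases use CREATE LANGUAGE plperl instead):
CREATE EXTENSION plperl;

-- A hypothetical PL/Perl function that livens up the output by
-- picking a random cat noise on each call:
CREATE OR REPLACE FUNCTION random_meow() RETURNS text AS $$
    my @sounds = ('meow', 'mrrp', 'purr', 'hiss');
    return $sounds[int(rand(@sounds))];
$$ LANGUAGE plperl;

SELECT random_meow() FROM happycats;
```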

Read More

Extending PostgreSQL for fun: with cats

By | Technical | 2 Comments

Perhaps you’ve thought I wish I had more cats in my Postgres database before. We certainly have. Just the other day we were lamenting some of the differences between MySQL and PostgreSQL, particularly the way that MySQL matches case-insensitively with the LIKE operator, while Postgres distinguishes LIKE from ILIKE. This got us thinking: it’d be amusing to have more vague (and hilariously unwieldy) operators, such as: SELECT * FROM foo WHERE a VAGUELY RESEMBLES b; Something like this isn’t too implausible; it’s similar to what a full-text search entails, but it’s far from trivial. So if we can’t easily do that, can we at least have some amusing query syntax? Yes we can!
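To illustrate the LIKE/ILIKE difference and how far custom operators can take you: a minimal sketch, where the `~~~~` operator name and `vaguely_resembles` function are invented here, not taken from the post. (Postgres operator names must be built from symbols, so a literal VAGUELY RESEMBLES keyword isn't possible.)

```sql
-- MySQL's LIKE matches case-insensitively by default; Postgres
-- spells that behaviour ILIKE:
SELECT * FROM foo WHERE a ILIKE '%cat%';

-- A hypothetical operator approximating VAGUELY RESEMBLES as a
-- case-insensitive substring match:
CREATE FUNCTION vaguely_resembles(text, text) RETURNS boolean AS $$
    SELECT $1 ILIKE '%' || $2 || '%';
$$ LANGUAGE sql;

CREATE OPERATOR ~~~~ (
    LEFTARG   = text,
    RIGHTARG  = text,
    PROCEDURE = vaguely_resembles
);

SELECT * FROM foo WHERE a ~~~~ b;
```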

Read More

Second strike with Lightning!

By | Technical | 2 Comments

We put Kyoto Cabinet under the gun recently as a means to improve Redis. The Anchor Propulsion/Internet Laboratory validated Kyoto Cabinet as “fresh”, but extended live testing has revealed sub-optimal behaviour in some situations. To recap, we used Kyoto Cabinet to give Redis near-realtime disk persistence with a greatly reduced memory footprint; we called this “NDS” and published the code. Dirty keys are flushed from memory periodically into Kyoto Cabinet’s backing store. This works fantastically most of the time, but we’ve discovered that some operations cause a massive blowout in the on-disk files. Kyoto Cabinet is a key-value store. When you update a value in Kyoto Cabinet it can be rewritten in-place, unless the new value is longer than the old one. In this case, Kyoto Cabinet makes a new…

Read More

Redis Rethought: Exciting Extremes with Larger-Than-Memory Datasets

By | Technical | 2 Comments

In a recent post we talked about Redeye, a way to manage a cluster of Redis shards. Sharding is important for pure storage capacity, but also for managing availability and performance, seeing as Redis can be CPU-bound in a sufficiently large environment. This time we’re tackling the other side of the equation: simply being able to hold all your data without dropping it. You’ll recall that Redis is purely an in-memory datastore. This makes it amazingly fast, but volatile. To get around this you can have persistence across restarts using RDB and AOF files. The catch is that you’re ultimately limited by the amount of RAM you have available – if you have 500GB of data to store then you need 500GB of RAM, either in a single server…
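The RDB and AOF persistence the excerpt mentions is switched on in redis.conf; these are standard Redis directives, with illustrative values:

```
# RDB: snapshot the dataset to disk if at least 1 key
# changed in the last 900 seconds
save 900 1

# AOF: additionally log every write command, fsync'd once per second
appendonly yes
appendfsync everysec
```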

Read More

Limitations of BDB under concurrent access

By | Technical | No Comments

Berkeley DB (BDB) isn’t exactly the most glamorous database engine around, but it’s surprisingly widely deployed and feature-rich. It’s designed for embedded use, so you’ll tend to find it integrated into a lot of applications if you go looking. One of our internal development efforts is evaluating various key-value databases as a secondary store to Redis. Because BDB is accessed via in-process API calls rather than the network protocols that everyone’s familiar with, its access semantics are quite different. BDB supports multiple simultaneous readers and writers, but the circumstances under which this is safe weren’t immediately clear.
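As a rough sketch of what "safe" concurrent access looks like through the C API: BDB's Concurrent Data Store mode (DB_INIT_CDB) gives multiple-reader/single-writer semantics at the environment level, with BDB handling the locking. Error handling is elided, the path is illustrative, and this assumes libdb's db.h is available.

```c
#include <db.h>

/* Open a shared environment in Concurrent Data Store (CDS) mode:
 * many readers, one writer, locking managed by BDB itself. */
DB_ENV *env;
db_env_create(&env, 0);
env->open(env, "/var/lib/myapp/bdb",
          DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL, 0);

/* Database handles are then created against that environment: */
DB *db;
db_create(&db, env, 0);
db->open(db, NULL, "store.db", NULL, DB_BTREE, DB_CREATE, 0);
```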

Read More

State of the Union for MySQL storage engines

By | Technical | No Comments

(Feel free to skip to the next section if you’re familiar with MySQL’s storage engines.) If you’re not: a storage engine is where your data hits the disk. You write SQL to insert or query the data and leave the rest to MySQL, which arranges that data into files on disk using its storage engine. The choice of engine affects how fast your data can be inserted/queried/deleted, how it’s indexed, and how robustly it’s stored. You don’t need to make a conscious choice – unless you explicitly specify one, you’ll get the default of MyISAM. For many people this is fine, but it makes a difference when you start to accumulate a lot of data. If you’re after straightforward advice, skip down to InnoDB or…
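The engine choice described above is made (or changed) per table; the table names here are illustrative:

```sql
-- Specify the engine explicitly at creation time:
CREATE TABLE orders (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    total DECIMAL(10,2)
) ENGINE=InnoDB;

-- See which engine an existing table uses:
SHOW TABLE STATUS LIKE 'orders';

-- Convert a table that got the MyISAM default:
ALTER TABLE legacy_table ENGINE=InnoDB;
```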

Read More

A Simple MySQL Relational Database

By | Technical | No Comments

I’ve searched far and wide, and all I wanted was a MySQL database schema that incorporates all the common relationships found in a relational database, i.e. one-to-one, one-to-many and many-to-many, but I haven’t had much luck. So here you go: a straight-to-the-point post that will give you a fully functional MySQL database schema along with inserted dummy data; a relational database with common relationships such as one-to-one, one-to-many and many-to-many; and a list of example SELECT queries showing how you can create joins between tables. To download the database please click here. Diagram Of The Database: The following diagram will help give you a good visual of all the tables in the…
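For readers who just want the shape of those three relationships, a minimal hand-rolled sketch – these table names are illustrative, not the downloadable schema:

```sql
CREATE TABLE users (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100)
);

-- One-to-one: each user has at most one profile
CREATE TABLE profiles (
    user_id INT PRIMARY KEY,
    bio     TEXT,
    FOREIGN KEY (user_id) REFERENCES users(id)
);

-- One-to-many: a user writes many posts
CREATE TABLE posts (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    body    TEXT,
    FOREIGN KEY (user_id) REFERENCES users(id)
);

-- Many-to-many: users belong to many teams, via a join table
CREATE TABLE teams (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(100)
);
CREATE TABLE team_members (
    user_id INT,
    team_id INT,
    PRIMARY KEY (user_id, team_id),
    FOREIGN KEY (user_id) REFERENCES users(id),
    FOREIGN KEY (team_id) REFERENCES teams(id)
);
```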

Read More

# mysql_secure_installation… Ya-ha-! (and ~/.my.cnf)

By | Technical | No Comments

I was setting up mysql-server for a customer recently and noticed something interesting: there’s a helpful script included with MySQL called mysql_secure_installation. We thought about that for a moment and had a chuckle. Okay, that was a little unfair; it’s no secret that we prefer to use Postgres wherever possible, but a “make it all secure” script isn’t a bad idea, as long as it doesn’t produce a false sense of security. The script does good things, but MySQL could be doing better to begin with by being more secure out-of-the-box – the last thing they should be shipping is an empty root password. >_< It pains me to say it, but I think MSSQL probably comes with…
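The ~/.my.cnf from the title saves re-typing credentials once you’ve set a proper root password; a minimal example (keep it chmod 600 so other users can’t read it – the password below is a placeholder):

```
[client]
user     = root
password = s3kr1t
```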

Read More