OpenStack, and in particular its compute service, Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while keeping the same fixed and floating IP addresses, amongst other metadata. However, if you have a shared storage back end, such as Ceph, you're out of luck, as this function […]
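For context, the rebuild in question is the standard Nova server rebuild call. A minimal sketch using python-novaclient might look like the following; the credentials, instance name and image ID are illustrative, not taken from the post:

```python
from keystoneauth1 import loading, session
from novaclient import client

# Illustrative auth details; substitute your own cloud's endpoint and credentials.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://keystone.example.com:5000/v3',
    username='demo',
    password='secret',
    project_name='demo',
    user_domain_name='Default',
    project_domain_name='Default',
)
nova = client.Client('2.1', session=session.Session(auth=auth))

# Rebuild an instance from a fresh image; Nova keeps the same fixed and
# floating IP addresses and most other metadata across the rebuild.
server = nova.servers.find(name='my-instance')
server.rebuild('fresh-image-id')
```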
In late 2013 we looked around the company and asked ourselves the questions any management team has to ask: what are we doing, where do we need to be, and what’s holding us back from getting there? Because Anchor had grown organically across many years, our internal procedures for infrastructure management were spread across a number of tools […]
Anchor has been working on building a massively scalable data vault for metrics data. One of our engineers has written a blog post about what worked – and what didn’t – in the first version, and what we’ve learned from it.
As you’ve probably noticed, we’ve been evaluating Ceph in recent months for our petabyte-scale distributed storage needs. It’s a pretty great solution and works well, but it’s not the easiest thing to set up and administer properly. One of the bits we’ve been grappling with recently is Ceph’s CRUSH map. In certain circumstances, which aren’t entirely […]
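For the curious, the CRUSH map can be pulled out of a running cluster and decompiled to plain text for inspection or editing. A minimal sketch of that round trip, wrapping the standard ceph and crushtool commands (the paths are illustrative), is below:

```python
import subprocess

# Dump the cluster's compiled CRUSH map, then decompile it to readable text.
subprocess.run(['ceph', 'osd', 'getcrushmap', '-o', '/tmp/crushmap.bin'], check=True)
subprocess.run(['crushtool', '-d', '/tmp/crushmap.bin', '-o', '/tmp/crushmap.txt'], check=True)

# After editing /tmp/crushmap.txt, recompile it and inject it back into the cluster:
# subprocess.run(['crushtool', '-c', '/tmp/crushmap.txt', '-o', '/tmp/crushmap.new'], check=True)
# subprocess.run(['ceph', 'osd', 'setcrushmap', '-i', '/tmp/crushmap.new'], check=True)
```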
We’re currently beta-testing RADOS Gateway, an S3-compatible cloud storage solution, with a view to producing a viable product. We’ve done a good amount of smoke testing and turned up our fair share of buggy behaviour, but what it really needs is a good shakedown. Thus, Anchor’s first hackfest was held last […]
RADOS Gateway (henceforth referred to as radosgw) is an add-on component for Ceph, the large-scale clustered storage system now mainlined in the Linux kernel. radosgw provides an S3-compatible interface for object storage, which we’re evaluating for a future product offering. We’ve spent the last few days digging through the radosgw source trying to nail some pesky bugs. […]
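Because radosgw speaks the S3 protocol, any stock S3 client can be pointed at it. A minimal sketch using boto3 follows; the endpoint, bucket name and keys are illustrative, and a real user would first be created with radosgw-admin:

```python
import boto3

# Illustrative endpoint and credentials for a radosgw instance.
s3 = boto3.client(
    's3',
    endpoint_url='http://radosgw.example.com:7480',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# Exercise the basic object-storage operations against the gateway.
s3.create_bucket(Bucket='smoketest')
s3.put_object(Bucket='smoketest', Key='hello.txt', Body=b'hello from radosgw')
for obj in s3.list_objects_v2(Bucket='smoketest').get('Contents', []):
    print(obj['Key'], obj['Size'])
```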
We’ve been looking at Ceph recently; it’s basically a fault-tolerant, distributed, clustered filesystem. If it works, that’s like nirvana for shared storage: you have many servers, each one pitches in a few disks, and there’s a filesystem that sits on top that is visible to all servers in the cluster. If a disk fails, […]