Next up in the Nuts & Bolts series, I want to cover storage. After my new datacenter post, there were a number of questions about our storage infrastructure, particularly the Isilon storage cluster pictured there.

To set the stage, I’ll share some file statistics from Basecamp. On an average weekday, around 100,000 files are uploaded to Basecamp, with an average file size of 2MB, for a total of about 200GB of uploaded content per day. And that’s just Basecamp! We have a number of other apps that handle tens of thousands of uploaded files per day as well. Based on those numbers, you’d expect we’d need to handle maybe 60TB of uploaded files over the next 12 months, but they don’t take into account the acceleration in the amount of data uploaded. Just since January, the average uploaded file size has grown from 1.88MB to 2MB, and our overall storage consumption rate has increased by 50% with no signs of slowing down.
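For a rough sense of the math, here’s a back-of-the-envelope sketch. The file count and size are the figures from this paragraph; the weekday count and the decimal units (1 GB = 1000 MB) are my own assumptions, and the flat-rate total deliberately understates reality, since both file sizes and upload counts keep growing.

```python
# Back-of-the-envelope projection of upload storage growth, using the
# figures quoted above. A flat rate understates the real trend.

FILES_PER_WEEKDAY = 100_000
AVG_FILE_MB = 2.0
WEEKDAYS_PER_YEAR = 260  # ~52 weeks x 5 weekdays; weekends excluded

daily_gb = FILES_PER_WEEKDAY * AVG_FILE_MB / 1000
naive_yearly_tb = daily_gb * WEEKDAYS_PER_YEAR / 1000

print(f"~{daily_gb:.0f} GB/day, ~{naive_yearly_tb:.0f} TB/year at a flat rate")
```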

When I sat down to begin planning our move from Rackspace to our new environment, I looked at a variety of options. Our previous environment consisted of a mix of MogileFS and Amazon S3. When a customer uploaded a file to one of our applications, we would immediately store it in our local MogileFS cluster, making it available for download right away. Asynchronously, we would also upload the file to S3, and after around 20 minutes we would begin serving it directly from S3. Staging files in MogileFS was necessary to account for the eventually consistent nature of S3.
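That two-stage flow can be sketched in a few lines. This is purely illustrative: the backends below are stand-in dictionaries rather than the real MogileFS or S3 client APIs, and the function names are my own.

```python
# Illustrative sketch of the two-stage flow described above. The storage
# backends are stand-in dicts, not the real MogileFS or S3 client APIs.

S3_CONSISTENCY_DELAY = 20 * 60  # seconds to wait before trusting S3

mogile = {}          # stand-in for the local MogileFS cluster
s3 = {}              # stand-in for the S3 bucket
s3_uploaded_at = {}  # when each file's S3 copy finished

def handle_upload(key, data, now):
    mogile[key] = data       # synchronous write: downloadable immediately
    s3[key] = data           # in production this copy runs asynchronously
    s3_uploaded_at[key] = now

def serve_from(key, now):
    """Pick the backend to serve a download from at time `now`."""
    if key in s3 and now - s3_uploaded_at[key] >= S3_CONSISTENCY_DELAY:
        return "s3"
    return "mogilefs"

handle_upload("report.pdf", b"...", now=0)
print(serve_from("report.pdf", now=60))    # still inside the window
print(serve_from("report.pdf", now=1500))  # window has passed
```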

While we’ve been generally happy with that configuration, I thought that we could save money over the long term by moving our data out of S3 and onto local storage. S3 is a phenomenal product, and it allows you to expand storage without having to worry much about capacity planning or redundancy, but it is priced at a comparative premium. With that premise in mind, I crunched some numbers and became even more convinced that we could save money on our storage needs without sacrificing reliability, while also reducing the complexity of our file workflow.

The main contenders for our new storage platform were an expanded MogileFS cluster or a commercial NAS. We knew we did not want to juggle LUNs or a layer like GFS to manage our storage, so we were able to eliminate traditional SAN storage fairly early on. We have had generally good luck with MogileFS, but we’ve seen ongoing memory-growth issues on some of our nodes and at least a couple of storage-related outages over the past couple of years. While the user community around MogileFS is great, the lack of commercial support options rears its head when you have an outage.

After weighing all of the options, we decided to purchase a commercial solution and settled on Isilon as the vendor for our storage platform. Protecting our customers’ data is our most important job, and we wanted a system we could be confident in over the long term. We initially purchased a 4-node cluster of their 36NL nodes, each with a raw capacity of 36TB. With the redundancy level we have set, the usable capacity of our current cluster is 108TB. We’ve already ordered another node to expand our usable space to 144TB in order to keep pace with the storage growth that took place between the time we planned the move and the time we implemented it.
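Those capacity figures line up with simple N+1 arithmetic, where one node’s worth of raw capacity goes to redundancy overhead. The sketch below is my reading of the numbers, not Isilon’s actual protection algorithm, which uses a richer FEC striping scheme.

```python
# Arithmetic consistent with the capacity figures above, assuming an
# N+1, parity-style protection level (one node's worth of raw capacity
# is consumed by protection data). Illustrative only.

NODE_RAW_TB = 36

def usable_tb(node_count):
    # One node's worth of capacity is reserved for redundancy.
    return NODE_RAW_TB * (node_count - 1)

print(usable_tb(4))  # current 4-node cluster
print(usable_tb(5))  # after the fifth node arrives
```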

The architecture of the Isilon system is very interesting. The individual nodes interconnect with one another over an InfiniBand network (SDR, or 10 Gbps, right now) to form a cluster. With the protection level we chose, each block of data written to the cluster is stored on a minimum of two nodes, which means we’re able to lose an entire node without affecting the operation of our systems. In addition, the nodes cooperate with one another to present the pooled storage to our clients as a single, very large filesystem over NFS. Isilon also has all the features like snapshots, replication, quotas, and so on that you would expect from a commercial NAS vendor. These weren’t absolute requirements, but they certainly make management simpler for us and are a welcome addition to the toolbox.

As we grow, it’s very simple to expand the capacity of the cluster. You just rack up another node, connect it to the InfiniBand backend network and to the network your NFS clients use, and push a button. The node configures itself into the existing cluster: its internal storage is added to the global OneFS filesystem, its onboard memory is added to the globally coherent cache, and its CPU is available to help process I/O operations. All in about a minute. It’s pretty awesome stuff, and we had fun testing these features in our datacenter when we were deploying it.

For now, we continue to use Amazon S3 as a backup, but within the next several months we intend to replace it with a second Isilon cluster in a secondary datacenter, kept in sync via replication.