When we started working on Basecamp Next last year, we had much internal debate about whether we should evolve the existing code base or rewrite it. I’ll dive deeper into that debate in a later post, but one of the key arguments for a rewrite was that we wanted a dramatic leap forward in speed — one that wouldn’t be possible through mere evolution.

Speed is one of those core competitive advantages that have long-term staying power. As Jeff Bezos would say, nobody is going to wake up 10 years from now and wish their application was slower. Investments in speed are going to pay dividends forever.

Now for the secret sauce. Basecamp is so blazingly fast for two reasons:

#1: Stacker – an advanced pushState-based engine for sheets

The Stacker engine cuts the number of HTTP requests per page to a minimum by keeping the layout the same between requests. It’s the same approach used by pjax, powered by the same HTML5 pushState.

This means that only the very first request spends time downloading CSS, JavaScript, and image sprites. Every subsequent request triggers just a single HTTP request to get the HTML that changed, plus whatever additional images are needed. You not only save on network traffic doing it like this, you also save the JavaScript compilation step.

It’s a similar idea to JavaScript-based one-page apps, but instead of sending JSON across the wire and implementing the whole UI in client-side MVC JavaScript, we just send regular HTML. From a programmer’s perspective, it’s just like a regular Rails app, except that Stacker requests don’t require rendering the layout.
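
Stacker itself isn’t public, but the pjax version of the server half is tiny. A minimal sketch, assuming pjax’s X-PJAX request header convention and a hypothetical choose_layout method:

```ruby
# A sketch of the server half of the pjax approach, not Basecamp's
# actual code. pjax sends an X-PJAX header with each request; when
# it's present, we render the action's HTML without the layout.
class ApplicationController < ActionController::Base
  layout :choose_layout

  private

  def choose_layout
    request.headers["X-PJAX"] ? false : "application"
  end
end
```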

So you get all the advantages of speed and snappiness without the degraded development experience of doing everything on the client. That’s made doubly nice by the fact that you get to write more Ruby and less JavaScript (although CoffeeScript makes that less of an issue).

Our Stacker engine even temporarily caches each page you’ve visited and simply asks for a new version in the background when you go back to it. This makes navigation back and forth even faster.

Now Stacker is purpose-built for the sheet-based UI that we have. It knows about sheet nesting, how to break out of a sheet chain, and more. We therefore have no plans to open source it. But you can get (almost) all the speed benefits of this approach simply by adopting pjax, which is actually where we started for Basecamp Next until we went fancy with Stacker.

#2: Caching TO THE MAX

Stacker can only make things appear so fast. If actions still take 500ms to render, it’s not going to have that ultra snappy feel that Basecamp Next does. To get that sensation, your requests need to take less than 100ms. Once our caches are warm, many of our requests take less than 50ms and some even less than 20ms.

The only way we can get complex pages to take less than 50ms is to make liberal use of caching. We went about forty miles north of liberal and ended up with THE MAX. Every stand-alone piece of content is cached in Basecamp Next. The todo item, the todo lists, the block of todo lists, and the project page that includes all of it.
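
In Rails terms, that nesting can look something like this sketch (the model names and view structure are assumed, not Basecamp’s actual templates):

```erb
<%# Each cache call keys off the record, so each fragment can be
    expired and regenerated independently of its parents. %>
<% cache @project do %>
  <% @project.todolists.each do |todolist| %>
    <% cache todolist do %>
      <% todolist.todos.each do |todo| %>
        <% cache todo do %>
          <%= render todo %>
        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>
```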

This Russian doll approach to caching means that even when content changes, you’re not going to throw out the entire cache. You expire only the bits you need to and reuse the rest of the caches that are still good.

If I change todo #45, I’ll have to bust the cache for the todo, the cache for its todolist, the cache for the block of todolists, and the cache for the page itself. That sounds terrible on the surface until you realize that everything else is still cached as well and can be reused.

So yes, the todolist cache that contains todo #45 is busted, but it can be regenerated cheaply because all the other items on that list are still cached and those caches are still good. So to regenerate the todolist cache, we only pay the price of regenerating todo #45 plus the cost of reading the 7 other caches — which is of course very cheap.

The same plays out for the entire todolist section. We just pay to regenerate todolist #67 and then read the existing caches of all the other todolists that are still good. And again, the same with the project page cache: it’ll just read the caches of discussions and the rest rather than pay to regenerate them.

The entire scheme works under the key-based cache expiration model. There’s nothing to manually expire. When a todo is updated, the updated_at timestamp is touched, which triggers a chain of updates to touch the todolist and then the project. The old caches that will no longer be read are simply left to be automatically garbage collected by memcached when it’s running low on space (which will take a while).
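
In Active Record, that chain is just a couple of touch declarations. A sketch with assumed model names:

```ruby
# Updating a todo touches its todolist's updated_at, which in turn
# touches the project, so every enclosing cache key changes in one go.
class Todo < ActiveRecord::Base
  belongs_to :todolist, touch: true
end

class Todolist < ActiveRecord::Base
  belongs_to :project, touch: true
  has_many :todos
end

class Project < ActiveRecord::Base
  has_many :todolists
end
```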

Thou shalt share a cache between pages

To improve the likelihood that you’ll always hit a warm cache, we reuse the cached pieces all over the place. There’s one canonical template for each piece of data, and we reuse that template in every spot where that piece of data could appear. That’s generally good practice with Rails partials, but it becomes paramount when your caching system is bound to them.
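
Concretely, the cache call lives inside the one canonical partial, so every page that renders it reads and writes the same entries. A sketch with assumed markup:

```erb
<%# app/views/todos/_todo.html.erb -- the one canonical todo template.
    Every page renders todos through this partial, so they all share
    the same cache entries. %>
<% cache todo do %>
  <div class="todo"><%= todo.description %></div>
<% end %>
```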

Now this is often quite easy. A todo looks the same regardless of where it appears, so the same todo can show up on three different pages while being pulled from the same cache.

On two of those pages the presentation is identical, and on the third we’ve just used CSS to bump up the size a bit. But it’s still the same cache.

Sometimes this is not as easy, though. We have audit trails on everything in Basecamp Next, and these event lines need to appear in different contexts and with slight variations in how they’re presented.

To allow for different representations of the same cached HTML, we wrap all the little pieces in containers that can be turned on and off and styled through CSS.
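
A sketch of what that wrapping can look like, with the markup and names assumed:

```erb
<%# One cached event template carries every piece of the line;
    context-specific CSS decides which containers actually show. %>
<% cache event do %>
  <div class="event">
    <span class="project_name"><%= event.project.name %></span>
    <span class="creator"><%= event.creator.name %></span>
    <span class="summary"><%= event.summary %></span>
  </div>
<% end %>
```

On the project page, for instance, a single CSS rule can hide the project name container that would be redundant there, while the cached HTML stays identical everywhere.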

Thou shalt share a cache between people

While sharing caches between pages is reasonably simple, it gets a tad more complicated when you want to share them between users. When you move to a caching system like ours, you can’t do things like if @current_user.admin? or if @todo.created_at.today?. Your caches have to be the same for everyone and can’t be bound to any conditional that might change before the cache key does.

This is where a sprinkle of JavaScript comes in handy. Instead of embedding the logic in the generation of the template, you decorate it after the fact with JavaScript.

Take the cached list of possible recipients for a new message on a given project: my own name doesn’t appear in the list I see, even though it’s in the cache. That’s because each checkbox is decorated with a data-subscriber-id HTML attribute that corresponds to that user’s id. The JavaScript reads a cookie containing the current user’s id, finds the element with the matching data-subscriber-id, and removes it from the DOM. Now all users can share the same recipient list for notifications without ever seeing their own name on it.
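
Here’s a sketch of the template half of that trick; everything except the data-subscriber-id attribute is assumed:

```erb
<%# Cached once for all users. Each row carries the subscriber's id
    so the client-side JavaScript can strip out the current user. %>
<% cache [project, "subscribers"] do %>
  <% project.subscribers.each do |subscriber| %>
    <label data-subscriber-id="<%= subscriber.id %>">
      <%= check_box_tag "notify[]", subscriber.id %>
      <%= subscriber.name %>
    </label>
  <% end %>
<% end %>
```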

Combining it all and sprinkling HTTP caching and infinite pages on top

None of these techniques in isolation is enough to produce the super snappy page loads we’ve achieved with Basecamp Next, but in combination they get there. For good measure, we’re also diligent about using ETags and Last-Modified headers to further cut down on network traffic. And we use infinitely scrolling pages to send smaller chunks of content at a time.
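
On the Rails side that’s mostly a one-liner per action. A sketch with an assumed controller:

```ruby
# fresh_when sets the ETag and Last-Modified headers and responds
# with 304 Not Modified when the client's copy is still current.
class ProjectsController < ApplicationController
  def show
    @project = Project.find(params[:id])
    fresh_when etag: @project, last_modified: @project.updated_at
  end
end
```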

Getting as far as we have with this system would have been close to impossible if we had tried to evolve our way there from Basecamp Classic. This kind of rearchitecture is so fundamental and cuts so deep that we often used it in feature arguments: that’s hard to cache, is there another way to do it?

We’ve made speed the centerpiece of Basecamp Next. We’re all-in on having one of the fastest web applications out there without killing our development joy by moving everything client-side. We hope you enjoy it!

tl;dr: We made Basecamp Next go woop-woop fast by using a fancy HTML5 feature and some serious elbow grease on them caching wheels