

About Noah

Noah Lorang is the data analyst for Basecamp. He writes about instrumentation, business intelligence, A/B testing, and more.

The performance impact of "Russian doll" caching

Noah wrote this · 7 comments

One of the key strategies we use to keep the new Basecamp as fast as possible is extensive caching, primarily using the “Russian doll” approach that David wrote about last year. We stuffed a few servers full of RAM and started filling up memcached instances.
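For those who haven’t read David’s post, the gist of the approach is nested fragment caches whose keys incorporate each record’s updated_at, so updating a child record busts only the fragments wrapping it while everything else stays warm. A minimal sketch of the pattern (the model and partial names here are illustrative, not Basecamp’s actual views):

    <%# View sketch: an outer project fragment wrapping inner todolist fragments. %>
    <%# cache(record) builds a key from the record's cache_key (id + updated_at), %>
    <%# so touching a record automatically invalidates its fragment.              %>
    <% cache project do %>
      <%= render "projects/header", project: project %>
      <% project.todolists.each do |todolist| %>
        <% cache todolist do %>
          <%= render todolist.todos %>
        <% end %>
      <% end %>
    <% end %>

    # Model sketch: touch: true is what makes the dolls nest. Updating a
    # todolist bumps its project's updated_at, busting the outer fragment
    # too, while unrelated projects stay cached.
    class Todolist < ActiveRecord::Base
      belongs_to :project, touch: true
    end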
A few times in the last two years we’ve invalidated large swaths of cache or restarted memcached processes, and observed that our aggregate response time increases by 30-75%, depending on the amount of invalidation and the time of day. We then see caches refill and response times return to normal within a few hours. On a day-to-day basis, caching is incredibly important to page load times too. For example, look at the distribution of response time for the overview page of a project in the case of a cache hit (blue) or a miss (pink): Median request time on a cache hit is 31 milliseconds; on a miss it jumps to 287 milliseconds.
Until recently, we’ve never taken a really in-depth look at the performance impact of caching on a granular level. In particular, I’ve long had a hypothesis that there are parts of the application where we are overcaching; I believed that there are likely places where caching is doing more harm than good.

Hit rate: just the starting point

From the memcached instances themselves (using memcache-top), we know we achieve roughly a 67% hit rate: about two out of every three requests we make to the caching server return a valid fragment. By parsing Rails logs, we can break this overall hit rate apart into a hit rate for each piece of cached content.
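The parsing itself doesn’t need to be fancy. Here’s a rough sketch of the kind of script involved (not our actual analysis code); it assumes Rails’ “Read fragment …” and “Write fragment …” log lines, where a write following a read means the read was a miss, though the exact format depends on your Rails version and logging setup:

    # Collapse record ids so views/projects/123/... and views/projects/456/...
    # count as the same kind of fragment.
    def normalize(key)
      key.gsub(/\d+/, "?")
    end

    reads  = Hash.new(0)
    writes = Hash.new(0)

    ARGF.each_line do |line|
      case line
      when /Read fragment (\S+)/  then reads[normalize($1)]  += 1
      when /Write fragment (\S+)/ then writes[normalize($1)] += 1
      end
    end

    reads.sort_by { |_, count| -count }.each do |key, count|
      misses   = [writes[key], count].min
      hit_rate = 100.0 * (count - misses) / count
      puts format("%-55s %8d reads %6.1f%% hit rate", key, count, hit_rate)
    end

Run against a chunk of production log (ruby fragment_hit_rates.rb log/production.log), that produces per-fragment hit rates like the ones quoted below.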
Unsurprisingly, there’s a wide range of hit rates for different fragments. At the top of the heap, cache keys like views/projects/?/index_card/people/? have a 98.5% hit rate. These fragments represent the portion of a project “card” that contains the faces of people on the project:

This fragment has a high hit rate in large part because it’s rarely updated—we only “bust” the cache when someone is added or removed from a project or some other permissions change is made, which are relatively infrequent events.
At the other end of cache performance with a 0.5% hit rate are views/projects/?/todolists/filter/? fragments, which represent the filters available on the side of a project’s full listing of todos:

Because these filters are based on who is on a project and what todos are due when, the cache here is busted every time project access changes or any todo is updated. As a result, we rarely have a cached fragment available here, and 99 times out of 100 we end up rendering the fragment from scratch.
Hit rate is a great starting point for figuring out which caching is likely effective and which isn’t, but it doesn’t tell the whole story. Caching isn’t free – memcached is blazingly fast, but you still incur some overhead with every cache request, whether you get a hit or a miss. That means a cached fragment with a low hit rate that is also quick to render on a miss might be better off not being cached at all — the cost of all of the misses (the fruitless memcached requests) outweighs the benefit of the occasional hit. Conversely, a low hit rate isn’t always bad—a template that is extremely slow to render might still benefit on net even if only 10% of cache requests are successful.
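The break-even arithmetic is straightforward: the average cost of a cached fragment is the lookup overhead plus, on a miss, the render and write-back time, and that needs to beat just rendering the fragment every time. A back-of-the-envelope sketch, with all numbers invented for illustration:

    # Toy net-impact calculation for one fragment type. The numbers are made
    # up; in practice they'd come from the log analysis described above.
    hit_rate  = 0.005   # e.g. a fragment like the todolist filters
    lookup_ms = 0.5     # overhead of asking memcached, hit or miss
    write_ms  = 0.5     # overhead of writing the fragment back after a miss
    render_ms = 12.0    # time to render the fragment from scratch

    with_cache    = lookup_ms + (1 - hit_rate) * (render_ms + write_ms)
    without_cache = render_ms

    puts "avg with caching:    #{with_cache.round(2)} ms"
    puts "avg without caching: #{without_cache.round(2)} ms"
    puts "net benefit:         #{(without_cache - with_cache).round(2)} ms/request"
    # With a 0.5% hit rate and a cheap render, the net benefit comes out
    # negative: this fragment would be better off not cached at all.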

Calculating net cache impact

Continued…

June was a great month

Noah wrote this · 6 comments

The 37signals Report Card was launched a few months ago, and this month it brings good news across the board.

Our support team made customers happier faster than ever

With 22,000 emails and 7,000 tweets handled in June, the support team blazed a speedy path, with 93% of emails received during our extended business hours answered within an hour. The average email was replied to in just 6 minutes. Chase recently wrote about how we keep those response times down.
The support team also kept customers happy: 94% said they had a great experience in June.

Our applications got a little faster

A few months ago we decided to replace one of the core pieces of our infrastructure: the firewall and load balancer appliances that all of our applications pass through. As we’ve been working on expanding into a second datacenter, we had the opportunity to try out some new appliances that offer dramatic simplification and performance improvements, and we decided to pull the trigger on rolling them out everywhere.
In mid-May, we switched over to our new F5 BigIP appliances in our primary datacenter in Chicago, and customers started to see the performance benefits we’d seen in our testing. The exact impact varies depending on the application and where you are in the world, but most customers are seeing between a 5% and a 25% improvement in overall page load times (overall, we’re running at about a 12% improvement across all of our customers and applications in the six weeks since we rolled these out). This speedup is especially noticeable when downloading large files.
We’re working on a handful of other projects that we hope will bring further speed improvements to our applications in the coming months.

Basecamp got a load of new features

Basecamp continues to improve. Just this month, we saw:

  • A whole new approach to documents, including mobile support and visual tracking of changes.
  • A new and clean look to the emails that Basecamp sends out about your projects, todos, and events.
  • Improvements to the event history throughout Basecamp. There’s less noise and more useful information throughout comment threads, people pages, and the timeline.
  • A ton of bug fixes and upgrades. In all, we deployed Basecamp 207 different times this month.

We’ve got a ton of other great features lined up to launch in the next few months. Stay tuned for future announcements and keep an eye on our performance to see how we’re doing every month.

We answered your questions for an hour live on a Google Hangout. Missed it? You can watch the whole thing here, and stay tuned for more of these in the future.

Ask us anything Thursday at 12:30 p.m. Eastern

Noah wrote this · 17 comments

Have a question about one of our products, our technology, how we work, or anything else? Here’s your opportunity to ask.

We’ll be answering your questions live on a Google+ Hangout on Air tomorrow (Thursday, June 27th) from 12:30 to 1:30 p.m. Eastern time.

Check out the event here, where you’ll find the video link tomorrow. You can submit questions via Google Moderator or leave them in the comments here.

We’ll have a range of people from 37signals participating to answer your questions, from designers to support and everyone in between. Almost anything is fair game to ask (we’re a private company, so we won’t be divulging any financial information, nor will we spill the beans on future features).

We hope you’ll join us. If you can’t make it tomorrow, keep an eye out for future events!

What does mechanical engineering have to do with data science?

Noah wrote this · 6 comments

Engineering school is about learning how to frame problems. So is data science.

I have a degree in mechanical engineering from a good school, but I’ve never worked a day in my life as an engineer. Instead, I’ve dedicated my career to “data science” — I help people solve business problems using data. Despite never working as a mechanical engineer, that education dramatically shapes how I do my job today.
Most baccalaureate mechanical engineering programs require you to take ten or fifteen core classes that are specific to the domain: statics, stress analysis, dynamics, thermodynamics, heat transfer, fluid dynamics, capstone design, etc. These cover a lot of content, but only a tiny fraction of what you actually face in practice, and so by necessity mechanical engineering programs are really about teaching you how to think about solving problems.
My thermodynamics professor taught us two key things about problem solving that shape how I solve data problems today.

“Work the process”

On the first day of class, rather than teach us anything about entropy or enthalpy, he taught us a twelve step problem solving process. He said that the way to solve any problem was to take a piece of paper and write in numbered sections the following:

  1. Your name
  2. The full problem statement
  3. The ‘given’ facts
  4. What you’ve been asked to find
  5. The thermodynamic system involved
  6. The physical system involved
  7. The fundamental equations you will use
  8. The assumptions you are making
  9. The type of process involved
  10. Your working equations
  11. Physical properties or constants
  12. The solution

The entire course was based on this process. Follow the process and get the wrong answer? You’ll still get a decent grade. Don’t follow the process but get the right answer anyway? Too bad.
Some of these steps are clearly specific to thermodynamic problems, but the general approach is not. If you start from a clear articulation of the problem, what you know, what you’re trying to solve for, and the steps you will take to solve it, you’ll get to the right answer most of the time, no matter how hard the problem looks at the start.

“There is no voodoo”

The other thing that this professor taught us right away was that there was no “voodoo” in anything we were going to study, and that everything can be explained if you take the time to understand it properly.
I’d argue that the fundamental reason why data science is a hot topic now is that businesses want to understand why things happen, not just what is happening — they want to peel back the voodoo. There’s always a fundamental reason: applications don’t suddenly get slow without an underlying cause, nor do people start or stop using a feature without something changing. We may not always be able to find the reasons as well as we’d like, but there is fundamentally an explanation, and the job of a diligent engineer or data scientist is to look for it.

It was totally worth it

People sometimes ask me if I feel like I wasted my time in college by not studying statistics or computer science since the career I’ve ended up in is more closely aligned to those. My answer is a categorical “no” — I can’t imagine a better education to prepare me for data science.

My mother made me a scientist without ever intending to. Every other Jewish mother in Brooklyn would ask her child after school, “So? Did you learn anything today?” But not my mother. “Izzy,” she would say, “did you ask a good question today?”
That difference – asking good questions – made me become a scientist.


Isidor Isaac Rabi, Nobel laureate

Three charts are all I need

Noah wrote this · 18 comments

The last few years have seen an explosion in new ways of visualizing data. There are new classes, consultants, startups, and competitions. Some of these new and more “daring” visualizations are great. Some are not so great – many “infographics” are more like infauxgraphics.
In everyday business intelligence (the “real world”), the focus isn’t on visualizing information, it’s on solving problems, and I’ve found that upwards of 95% of problems can be addressed using one of three visualizations:

  1. When you want to show how something has changed over time, use a line chart.
  2. When you want to show how something is distributed, use a histogram.
  3. When you want to display summary information, use a table.

These are all relatively “safe” displays of information, and some will criticize me as resistant to change and fearful of experimentation. But it isn’t fear that keeps me coming back to these charts time and time again; I use them for three very real and practical reasons.

Continued…

Why I learned to make things

Noah wrote this · 19 comments

Two years ago this week, I started working at 37signals. I couldn’t make a web app to find my way out of a paper bag.

When I started working here, my technical skills were in tools like Excel, R, and Matlab, and I could muddle my way through SQL queries. I had the basic technical skills that are needed to do analytics for a company like 37signals: just enough to acquire, clean, and analyze data from a variety of common sources.
At the time I started here, I knew what Ruby and Rails were, but had absolutely no experience with them – I couldn’t tell Ruby from Python or Fortran. I’d never heard of git, Capistrano, Redis, or Chef, and even once I figured out what they were I didn’t think I’d ever use them – those were the tools of “makers”, and I wasn’t a maker, I was an analyst.

I was wrong.

Continued…

Behind the Scenes: Twitter, Part 3 - A win for simple

Noah wrote this · 4 comments

This is the final part of a three part series on how we use Twitter as a support channel. In the first part, I described the tools we use to manage Twitter; in the second part, we built a model to separate tweets that need an immediate response from those that don’t.

In the arc of a three part series, the final part is supposed to be either a story of triumph or an object lesson.
The triumphant story would be about how we implemented the model we built previously in production, integrated it with our Rails-based Twitter client, and saw massive quantifiable improvements resulting from it. I would look smart, competent, and impactful.
The object lesson is that sometimes practical concerns win out over a neat technological solution, and that’s the story here.

Sometimes good isn’t good enough

The model we built had a false positive rate of about 7%. That’s fair, and in many applications it would be just fine. In our case, we want to be very confident we’re not missing important tweets from people who need help. Practically, that means someone would have to check the classification results occasionally to catch the handful of tweets that do need an immediate response but slipped through.
After talking to the team, it became pretty clear that checking for mis-classified tweets would be more work than just handling the full, unclassified feed with the manual keyword filtering we have been using. This is a great example of a case where absolute numbers are more important than percentages: while the percentage impact in terms of filtering out less urgent tweets would be significant, the actual practical impact is much more muted because we’ve optimized the tool to handle tweets quickly.
Part of the reason we’re able to get away with keyword filtering rather than something more sophisticated is just how accurate it is, with essentially no false positives. There’s actually a surprising amount of duplication in tweets—excluding retweets, the last 10,000 tweets we’ve indexed have only 7,200 unique bodies among them. That means that when a person sees the first tweet using a phrase, they can instantly identify a keyword that’s going to recur and add it to the list (for example, as soon as I started this series, we added “Behind the Scenes: Twitter” to the keyword list).
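The filter itself is about as simple as it sounds. A minimal sketch of the idea (the keyword list and sample tweets here are illustrative, not our actual list or tooling):

    # Tweets matching a known "less urgent" keyword get deprioritized;
    # everything else stays in the needs-a-reply-now stream.
    LESS_URGENT_KEYWORDS = [
      "REWORK",
      "Behind the Scenes: Twitter",
      "Jobs Board",
    ].freeze

    def less_urgent?(text)
      LESS_URGENT_KEYWORDS.any? { |kw| text.downcase.include?(kw.downcase) }
    end

    tweets = [
      "Loving REWORK, required reading for anyone starting a company",
      "Can't log in to Basecamp this morning, anyone else?",
    ]

    urgent, later = tweets.partition { |t| !less_urgent?(t) }
    puts "Needs a reply now: #{urgent.inspect}"
    puts "Can wait:          #{later.inspect}"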

Most of the benefit with little effort

Continued…

Behind the Scenes: Twitter, Part 2 - Lessons from email

Noah wrote this · 4 comments

This is the second in a three part series about how we use Twitter as a support channel. Yesterday I wrote about how we use Twitter as a support channel and the internal tool that we built to improve the way we handle tweets.

One of our criteria in finding or building a tool to manage Twitter was the ability to filter tweets based on content in order to find those that really need a support response. While we’re thrilled to see people sharing articles like this or quoting REWORK, from a support perspective our first goal is to find those people who are looking for immediate support so that we can get them answers as quickly as possible.
When we used Desk.com for Twitter, we cut down on the noise somewhat by using negative search terms in the query that was sent to Twitter: rather than searching just for “37signals”, we told it to search for something like “37signals -REWORK”. This was pretty effective at helping us prioritize tweets, and worked especially well when there were sudden topical spikes (e.g., when Jason was interviewed in Fast Company, more than 5,000 tweets turned up in a generic ‘37signals’ search in the 72-hour period after it was published), but it had its limitations: updating the exclusion list was laborious, and there was a limit on how long the search string could be, so we never had great accuracy.
When we went to our own tool, our initial implementation took roughly the same approach—we pulled all mentions of 37signals from Twitter, and then prioritized based on known keywords: links to SvN posts and Jobs Board postings are less likely to need an immediate response, so we filtered accordingly.
Using these keywords, we were able to correctly prioritize about 60% of tweets, but that still left a big chunk mixed in with the ones that did need an immediate reply: for every tweet that needed an immediate reply, there were still three others in the stream to be handled.
I thought we could do better, so I spent a little while examining whether a simple machine learning algorithm could help.

Lessons from email

While extremely few tweets are truly spam, there are a lot of parallels between the sort of tweet prioritization we want to do and email spam identification:

  • In both cases, we have some information about the sender and the content.
  • In both cases, we have a mechanism to correct classification mistakes.
  • In both cases, we’d rather err on the side of false negatives: it’s generally better to let spam end up in your inbox than to send that email from your boss into the spam folder.

Spam detection is an extremely well studied problem, and there’s a large body of knowledge for us to draw on. While the state of the art in spam filtering has advanced, one of the earliest and simplest techniques generally performs well: Bayesian filtering.
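The next post covers the theory, but as a rough sketch of what a word-based Bayesian filter looks like in practice (the training tweets, labels, and smoothing choice here are invented for illustration, not the model we actually built):

    require "set"

    # Tiny labeled training set: which past tweets needed an immediate reply.
    TRAINING = [
      ["basecamp is down cant log in",          :needs_reply],
      ["help my todo list wont load",           :needs_reply],
      ["great post on svn today",               :no_reply],
      ["reading rework again highly recommend", :no_reply],
    ]

    def tokenize(text)
      text.downcase.scan(/[a-z']+/)
    end

    # Count how often each word shows up in each class.
    word_counts  = Hash.new { |h, k| h[k] = Hash.new(0) }
    class_counts = Hash.new(0)

    TRAINING.each do |text, label|
      class_counts[label] += 1
      tokenize(text).each { |w| word_counts[label][w] += 1 }
    end

    vocab_size = word_counts.values.flat_map(&:keys).to_set.size

    # log P(class) + sum of log P(word | class), with add-one smoothing so a
    # word we've never seen in a class doesn't zero out the whole product.
    score = lambda do |words, label|
      total = word_counts[label].values.sum
      prior = Math.log(class_counts[label].to_f / class_counts.values.sum)
      words.reduce(prior) do |sum, w|
        sum + Math.log((word_counts[label][w] + 1.0) / (total + vocab_size))
      end
    end

    tweet = "cant log in to basecamp help"
    label = class_counts.keys.max_by { |l| score.call(tokenize(tweet), l) }
    puts "#{tweet.inspect} => #{label}"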

Bayesian filtering: the theory

Continued…