Back in March of 2009 I joined 37signals as Signal #13 and the other half of our two-person support team. At the time we relied mostly on bug reports from customers to identify rough spots in our software. This required the full-time attention of one or more “on call” programmers: firefighters who tamed quirks as they arose. The approach worked for a while, but we weren’t making software quality a priority.

I had a chat with Jason Fried in late 2011 about how my critical tendencies could help improve our products. Out of that, the QA department was born. Kind of. I didn’t know much about QA and it wasn’t part of the development process at 37signals. So my first move was to order a stack of books about QA to help figure out what the hell I was supposed to be doing.

It’s been almost two years since our first project “with QA” back in 2012. Ann Goliak (another support team alumna) recently joined me in QA. Our QA process isn’t traditional, and it looks a bit different for every feature. Here’s a look at how QA fits into our development process, using the recent phone verification project as an example.

Step 1. I sat down with Sam Stephenson back in early July for our first walkthrough of phone verification. Hearing Sam talk about “creating a verification profile” or “completing a verification challenge” familiarized me with the terminology and flows that would be helpful descriptors in my bug reports. Here’s what the notes look like from that first conversation with Sam.
Step 2. After the introduction I’ll dive right into clicking around in a staging or beta environment to get a feel for the feature and what other parts of the app it touches. This is often the first time that someone not designing/coding the feature has a chance to give it a spin, and the fresh perspective always produces some new insights.
Step 3. There are lots of variables to consider when testing. Here are some of the things we keep in mind when putting together a test plan:

  • Does the API need to be updated to support this?
  • Does this feature affect Project templates?
  • Does this feature affect Basecamp Personal?
  • Does our iPhone app support it?
  • Do our mobile web views need to be updated?
  • Does this impact email-in?
  • Does this impact loop-in?
  • Does this impact moving and copying content?
  • Does this impact project imports from Basecamp Classic?
  • Test at various BCX plan levels
  • Test at various content limits (storage, projects)

Project states

  • Active project
  • Archived project
  • Project template
  • Draft (unpublished) project
  • Trashed project

Types of content

  • To-do lists
  • To-do items (assigned + unassigned + dated)
  • Messages
  • Files
  • Google docs
  • Text documents
  • Events (one time + recurring)

Places where content appears

  • Progress screen
  • In-project latest activity block
  • History blocks (for each type of content)
  • Calendar
  • Person pages
  • Trash
  • Digest emails

When these variables are combined you end up with a script of tasks to guide the testing. These lists are unique to each project.
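To give a feel for how quickly those variables multiply, here’s a minimal sketch in Python of the combinatorial idea. The lists are trimmed and illustrative, and the script itself is hypothetical; in practice we curate each checklist by hand as a to-do list in Basecamp.

    # A rough sketch of the combinatorial idea, not our actual tooling.
    # The lists below are trimmed versions of the variables above.
    from itertools import product

    project_states = ["active", "archived", "template", "draft", "trashed"]
    content_types = ["to-do list", "message", "file", "text document", "event"]
    places = ["progress screen", "calendar", "person page", "trash", "digest email"]

    # One testing task per combination of state, content, and place.
    checklist = [
        f"Check {content} in {state} project via {place}"
        for state, content, place in product(project_states, content_types, places)
    ]

    print(len(checklist))  # 125 tasks from just three trimmed lists
    print(checklist[0])    # Check to-do list in active project via progress screen

Even these trimmed lists produce 125 combinations, which is why each project’s script gets pared down to the combinations that actually matter for the feature at hand.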
Step 4. In Basecamp we make a few QA-specific to-do lists in each project: one for unsorted discoveries, another for tasks that have been allocated, and a third for rough spots support should know about (essentially “known issues”).

When I find a bug I’ll make a new to-do item that describes it, including: 1) A thorough description of what I’m seeing, often with a suggested fix; 2) Specific steps to recreate the behavior; 3) The browser(s) and/or platform(s) where it was observed; and 4) Relevant URLs, screenshots, or a screen recording.
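Here’s a hypothetical example of what such a to-do item might look like (the bug and all the details are made up for illustration):

    Phone verification: challenge SMS never arrives (hypothetical example)

    What I’m seeing: After entering a phone number with spaces in it, the
    “Send code” button spins forever and no SMS arrives. Stripping spaces
    before submitting would probably fix it.

    Steps to recreate: 1. Open the verification screen on staging.
    2. Enter a number formatted like “555 0100”. 3. Click “Send code”.

    Seen on: Safari 6 / OS X 10.8 and Chrome 29 / Windows 7.

    Extras: staging URL, screenshot, and a ScreenFlow recording attached.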

We use ScreenFlow to capture screen recordings on the Mac, and Reflector to do the same on iOS. We’re fans of LittleSnapper (now Ember) for annotating and organizing still screenshots.
Step 5. The designer and programmer on the project will periodically sift through the unsorted QA inbox. Some items get moved to the QA allocated list and fixed, then reassigned to QA for verification. Other “bugs” trigger a conversation about why the behavior is intentional, or outside the scope of the iteration.
Step 6. Before each new feature launch, QA hosts a video walkthrough for the support team. We’ll highlight any potential areas of confusion and other things to be on the lookout for. After the walkthrough, a member of support will spend some time putting together a help section page that covers the new feature.
Step 7. Within a couple of weeks after a feature launch the team will usually have a retrospective phone call. We talk through the highs and lows of the iteration, and I use the chance to ask how QA can do better next time around.
At the end of a project there are usually some “nice to haves” and edge cases that didn’t make the pre-launch cut. These bugs get moved into a separate Basecamp project used for tracking long-standing issues, and every few months we’ll eradicate some of them during a company-wide “bug mash”.
So that’s a general overview of how QA works at 37signals. We find anywhere from 30 to 80 bugs per project. Having QA has helped reduce our on-call team to a single programmer. The best compliment: after trying it out, no one at the company was interested in shipping features without dedicated QA.