Back in March of 2009 I joined 37signals as Signal #13 and the other half of our two-person support team. At the time we relied mostly on bug reports from customers to identify rough spots in our software. This required the full-time attention of one or more “on-call programmers”: firefighters who tamed quirks as they arose. The approach worked for a while, but we weren’t making software quality a priority.
I had a chat with Jason Fried in late 2011 about how my critical tendencies could help improve our products. Out of that, the QA department was born. Kind of. I didn’t know much about QA and it wasn’t part of the development process at 37signals. So my first move was to order a stack of books about QA to help figure out what the hell I was supposed to be doing.
It’s been almost two years since our first project “with QA” back in 2012. Ann Goliak (another support team alum) recently joined me in QA. Our QA process isn’t traditional, and it works a bit differently for every feature. Here’s a look at how QA fits into our development process, using the recent phone verification project as an example.
Step 1. I sat down with Sam Stephenson back in early July for our first walkthrough of phone verification. Hearing Sam talk about “creating a verification profile” or “completing a verification challenge” familiarized me with the terminology and flows that would be helpful descriptors in my bug reports. Here’s what the notes look like from that first conversation with Sam.
Step 2. After the introduction I’ll dive right into clicking around in a staging or beta environment to get a feel for the feature and what other parts of the app it touches. This is often the first time that someone not designing/coding the feature has a chance to give it a spin, and the fresh perspective always produces some new insights.
Step 3. There are lots of variables to consider when testing. Here are some of the things we keep in mind when putting together a test plan:
- Does the API need to be updated to support this?
- Does this feature affect Project templates?
- Does this feature affect Basecamp Personal?
- Does our iPhone app support it?
- Do our mobile web views need to be updated?
- Does this impact email-in?
- Does this impact loop-in?
- Does this impact moving and copying content?
- Does this impact project imports from Basecamp Classic?
- Test at various BCX plan levels
- Test at various content limits (storage, projects)
Project states
- Active project
- Archived project
- Project template
- Draft (unpublished) project
- Trashed project

Types of content
- To-do lists
- To-do items (assigned + unassigned + dated)
- Messages
- Files
- Google docs
- Text documents
- Events (one time + recurring)

Views
- Progress screen
- In-project latest activity block
- History blocks (for each type of content)
- Calendar
- Person pages
- Trash
- Digest emails
When these variables are combined, you end up with a script of tasks to guide the testing. These lists are unique for each project.
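To give a rough sense of how quickly those variables multiply, here’s a hypothetical Python sketch (with abridged lists) that expands them into a checklist. The real task lists are assembled by hand and trimmed to whatever makes sense for the feature at hand.

```python
from itertools import product

# Abridged versions of the variables listed above; the real lists are
# longer and unique to each project.
project_states = ["active", "archived", "template", "draft", "trashed"]
content_types = ["to-do list", "message", "file", "recurring event"]
views = ["progress screen", "calendar", "person page", "digest email"]

# Every combination becomes one line item in the manual test script.
checklist = [
    f"Check: {state} project / {content} / {view}"
    for state, content, view in product(project_states, content_types, views)
]

print(len(checklist), "checks")  # 5 * 4 * 4 = 80 combinations
for task in checklist[:3]:
    print("-", task)
```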
Step 4. In Basecamp we make a few QA-specific to-do lists in each project: the first for unsorted discoveries, a second for tasks that have been allocated, and a third for rough spots support should know about (essentially “known issues”).
When I find a bug I’ll make a new to-do item that describes it, including:
- A thorough description of what I’m seeing, often with a suggested fix
- Specific steps to recreate the behavior
- The browser(s) and/or platform(s) where this was observed
- Relevant URLs, screenshots, or a screen recording
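To make those elements concrete, here’s a made-up example expressed as a small Python helper. The bug, names, and URLs are invented for illustration; in practice these to-dos are written directly in Basecamp.

```python
def format_bug_report(summary, description, steps, platforms, references):
    """Compose the text of a QA to-do from its four elements (illustrative only)."""
    lines = [summary, "", "What I'm seeing: " + description, "", "Steps to recreate:"]
    lines += [f"  {i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += ["", "Observed on: " + ", ".join(platforms),
              "References: " + ", ".join(references)]
    return "\n".join(lines)

# An invented example of a finding from a project like phone verification.
print(format_bug_report(
    summary="Blank error banner after three wrong verification codes",
    description="The banner turns red but shows no text. Suggested fix: fall "
                "back to a generic 'that code didn't match' message.",
    steps=["Start a verification challenge from the sign-in screen",
           "Enter an incorrect six-digit code three times"],
    platforms=["Safari 7 / OS X 10.9", "not reproducible in Chrome 31"],
    references=["https://staging.example.com/verification (screenshot attached)"],
))
```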
We use ScreenFlow to capture screen recordings on the Mac, and Reflector to do the same in iOS. We’re fans of LittleSnapper (now Ember) for annotating and organizing still screenshots.
Step 5. The designer and programmer on the project will periodically sift through the unsorted QA inbox. Some items get moved to the QA allocated list and fixed, then reassigned to QA for verification. Other “bugs” trigger a conversation about why the behavior is intentional, or why a fix is outside the scope of the iteration.
Step 6. Before each new feature launch, QA hosts a video walkthrough for the support team. We’ll highlight any potential areas of confusion and other things to be on the lookout for. After the walkthrough, a member of support will spend some time putting together a help section page that covers the new feature.
Step 7. Within a couple weeks after a feature launch the team will usually have a retrospective phone call. We talk through the highs and lows of the iteration, and I use the chance to ask how QA can be better next time around.
At the end of a project there are usually some “nice to haves” and edge cases that didn’t make the pre-launch cut. These bugs get moved into a different Basecamp project used for tracking long-standing issues, then every few months we’ll eradicate some of them during a company-wide “bug mash”.
So that’s a general overview of how QA works at 37signals. We find anywhere from 30 to 80 bugs per project. Having QA has helped reduce the size of our on-call team to one. The best compliment: after trying it out, no one at the company was interested in shipping features without dedicated QA.
Nancy
on 11 Dec 13
Just to be clear, does the “we” mean the QA team?
Said another way, are you saying the QA team finds 30-80 bugs prior to the feature even being released to production?
MB
on 11 Dec 13
@Nancy correct. Looking at some past projects, the QA team has found between 30 and 80 bugs before the respective feature shipped. These varied from small browser-specific quirks to bigger usability questions.
Andrew Spiers
on 11 Dec 13
I come from a mainframe background where things like this are more formalized.
The usual process goes:
1. Business Analysis produces a specification.
2. Programmers code and perform unit testing.
3. The QA team tests using the specification from step 1.
Bugs raised in step 3 are passed back to the programmers for fixing. Any queries on functionality are passed back to the BA for clarification.
Your current methodology seems to rely on conversations with the programmers. You are therefore starting the QA process with a ‘tainted’ view of the functionality. What if during your conversation the programmer tells you about a feature which is in fact a bug? The only point of reference you appear to have is back to the programmer who wrote the code.
MB
on 11 Dec 13
@Andrew It’s not uncommon for QA to push back or question the direction of a feature during the introductory walkthrough with the programmer (and/or designer). The programmers and designers on a project contribute equally to the process of deciding the specifications of a feature – what it should do and how that should look.
Recently we’ve started including one member of QA on each project team from the start so their “customer perspective” can help shape a feature in its early stages. Then a second member of QA with a fresh set of eyes is brought in later on.
Andrew Spiers
on 11 Dec 13
@MB. Thanks for your reply.
My comment was more that the initial walkthrough should be led by the designer rather than the programmer (though the programmer should also be at the walkthrough).
This way everyone should be aware of the design of the particular task. Whilst the programmer is coding, the QA team then get on with writing their test plans / scripts / routines. Programmers can influence QA teams to test to what they have coded rather than what has been designed.
If you walk through with the programmer rather than the designer, you are getting the programmer’s version of the specification, not the designer’s.
The lines get blurred when your programmers also do the design. I am used to having the two functions separated.
Alister Scott
on 11 Dec 13
There’s no mention of automated tests in this article.
Is this intentional? Do you do any automated tests as a QA at 37signals?
Are you involved in collaborating on automated tests with the programmers?
MB
on 11 Dec 13
@Alister At the moment all of the testing that QA does is manual, though we’re interested in supplementing that with some automation in the future. I recently started playing around with Selenium and MonkeyTalk.
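For the curious, the kind of experiment I mean looks roughly like this minimal sketch using Selenium’s Python bindings. The URL and element ids are placeholders, not markup from our real apps.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder URL and element ids; nothing here reflects real Basecamp markup.
driver = webdriver.Firefox()
try:
    driver.get("https://staging.example.com/login")
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login").click()

    # A tiny smoke check: did signing in land us on a page we expect?
    assert "Projects" in driver.title
finally:
    driver.quit()
```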
Alister Scott
on 11 Dec 13
@MB: cool, thanks for your reply. I write a fair bit about Selenium and Appium (Selenium for apps) on my blog. You should check it out sometime: http://watirmelon.com/
Emil
on 13 Dec 13
It would be interesting to see this as a public Basecamp project.
“The designer and programmer on the project will periodically sift through the unsorted QA inbox. Some items get moved to the QA allocated list and fixed, then reassigned to QA for verification.”
Are you dragging to-dos between lists with 80+ items? I love the discussion parts of individual to-dos when working with bugs, but the organisation is always a bit of a hassle.
JZ
on 13 Dec 13
“Are you dragging to-dos between lists with 80+ items?”
No way! QA and bug-squashing happen at the same time. While we may see a total of 30-80 findings in the course of a project, they don’t arrive all at once, and our developers are actively working through the list while QA is still in progress.
The whole process can take from a few days to a few weeks on larger projects. Working concurrently makes it more manageable and ensures the developers aren’t idle while testing is in progress.
Jared
on 13 Dec 13
This broken process might explain why there are so many bug fixes, even in production.
http://37signals.com/changes
Jeff
on 14 Dec 13
What is Graceland?
Michael
on 14 Dec 13
Thanks, Michael. Our company has a wonderful proofreader and bug report system, but we’re looking to take QA to a much more proactive, thorough place (along with automated tests). Reading through experiences like this helps.
Dov Harrington
on 15 Dec 13
This was an incredibly well-timed post. I’m the PMO director and recently hired a dedicated QA resource for our web-related projects. “Where is the best place to start?” is a frequent question in my head. We started using a product called TestRail to organize all of our test cases and, where applicable, link them through noted IDs back to their corresponding work items in TFS. I’d love to read more about this process or to talk to others going through similar QA department ‘startup’ scenarios.
Helen
on 16 Dec 13
Great content! But hard to read with each paragraph indented.