Yesterday David and I met with Sarah and Michael for a bit to get an update on how customer support/service is going. We recently switched from using Gmail to HelpSpot and we were curious how the transition felt. Basically, how was the workflow and were we learning anything?
Why the switch?
One of the reasons we switched to HelpSpot was so we could do a better job tracking which requests and issues were top requests. Sometimes support will say “People have been having a hard time uploading files this week” but it’s hard to know what “people” means. Is it two people? 10 people? Dozens? If we made changes to the app, would we reduce support demands and customer frustration? Gmail couldn’t really give us specifics, and HelpSpot could, so we switched to HelpSpot.
Knowing but not learning
In our review yesterday we discovered that we were tracking everything in detail, but not really learning anything. Why? We were tracking for the sake of tracking, not tracking for the sake of learning. We weren’t really sure why we were tracking what we were — but we kept on doing it because, well, momentum is a powerful force. It became an exercise in seeing how organized we could get, regardless of what we actually needed.
Our extensive use of categories and tags and custom fields and pulldowns could give us a whole lot of report-friendly information, but it didn’t give us any useful information. Information without insight is junk. That’s what we had. Plenty of it.
Going back to simple
So yesterday we decided to change everything. Let’s point the ship towards simple. Every mistake we’ve made as a company has been because we tried to do too much, not because we didn’t do enough. So let’s apply that lesson to how we track support requests too.
What really mattered?
Instead of neatly categorizing every request, we’d just roughly categorize them. So instead of multi-level categorizing like “Milestones > Editing > How to move milestones between projects” we’d just track the “How to move milestones between projects” part. The “Milestones” and “Editing” categories didn’t matter. We didn’t need the hierarchy or extensive organization. All that mattered was the bottom line: The question/issue.
Basically as questions/issues came in, we’d create new long tags that paraphrased the question/issue. And whenever another question/issue came in that was roughly the same as the paraphrased question, we’d tag the actual question with the paraphrased question. This way we could get a count on these paraphrased questions and see how many people were basically asking “how can I update my password” or “how do I move information between projects?”.
We could run a report that would simply give us the top 10 questions this week. Are they the same as last week’s top 10? Are we seeing a pattern? What’s up? What’s down? Now we have specifics that we can act on. In the past we’d know there were 60 questions in the “milestones” tag, but that doesn’t really give us anything to act on. But now we’d know there were 23 questions about “How do I add more than 10 milestones at a time?”, 21 about “Can I move milestones between projects?”, and 16 about “Can I add times to milestones?”. Now we’ve learned something.
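The whole report really boils down to counting tags. Here’s a minimal sketch of the idea in Python — the ticket list is made up, and the real counting happens inside HelpSpot’s reports, but this is the essence of it:

```python
from collections import Counter

# Hypothetical week of support tickets, each tagged with the
# paraphrased question it boils down to.
tickets = [
    "How do I add more than 10 milestones at a time?",
    "Can I move milestones between projects?",
    "How do I add more than 10 milestones at a time?",
    "Can I add times to milestones?",
    "Can I move milestones between projects?",
    "How do I add more than 10 milestones at a time?",
]

# Count each paraphrased question and pull the top 10 for the week.
top_questions = Counter(tickets).most_common(10)
for question, count in top_questions:
    print(f"{count:>3}  {question}")
```

Run that against a week of tags and you get an actionable list instead of a pile of neatly filed categories.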
Obvious isn’t always obvious
Looking back at this it seems obvious. We should have done this from the start. But like many things, it’s easy to get carried away. This new tool gave us all sorts of tracking options. Categories, tags, custom fields, lookups, etc… So we got excited and confused enthusiasm with priority. We did a lot of busy work but didn’t learn anything.
So just a reminder: Know what you’re measuring. Data for the sake of data can be a fun intellectual exercise, but practicality is usually what you’re after.