As we detailed in "Basecamp was under network attack," criminals assaulted our network with a DDoS attack on March 24. This is the technical postmortem we promised.
The main attack lasted an hour and 40 minutes, starting at 8:32 Central Time and ending around 10:12. During that window, Basecamp and our other services were completely unavailable for 45 minutes, and intermittently up and down or slow for the rest. In addition to the attack itself, Basecamp was put in network quarantine by other providers, so it wasn’t until 11:08 that access was restored for everyone, everywhere.
The attack was a combination of SYN flood, DNS reflection, ICMP flooding, and NTP amplification. The combined flow was in excess of 20Gbps. Our mitigation strategy included filtering through a single provider and working with them to remove bogus traffic.
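To give a sense of why reflection attacks like the DNS and NTP components can generate that much traffic from modest resources, here is a rough back-of-the-envelope sketch in Python. The figures are illustrative approximations based on publicly documented NTP "monlist" behavior, not measurements from this attack:

```python
# Rough illustration of bandwidth amplification in a reflection attack:
# the attacker sends a small request with a spoofed source address,
# and the server's much larger reply is delivered to the victim.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: bytes reflected per byte sent."""
    return response_bytes / request_bytes

# Illustrative NTP "monlist" figures: one small query (~234 bytes on
# the wire) can trigger up to 100 response packets of ~482 bytes each.
request = 234
response = 100 * 482
factor = amplification_factor(request, response)
print(f"~{factor:.0f}x amplification")  # roughly 206x
```

At a ratio like that, an attacker controlling only around 100Mbps of spoofing-capable upstream could in principle reflect on the order of 20Gbps at a target, which is why provider-side filtering of bogus traffic is such an important part of the mitigation.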
To reiterate, no data was compromised in this attack. This was solely an attack on our customers’ ability to access Basecamp and the other services.
There are two main areas we will improve upon following this event. Regarding our shield against future network attacks:
- We’ve formed a DDoS Survivors group to collaborate with other sites who’ve been subject to the same or similar attacks. That’s been enormously helpful already.
- We’re exploring all sorts of vendor shields to be able to mitigate future attacks even faster. While it’s tough to completely prevent any interruption in the face of a massive attack, there are options to minimize the disturbance.
- Law enforcement has been contacted, we’ve added our statement to their case file, and we’ll continue to assist them in catching the criminals behind this attack.
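For the SYN-flood component specifically, one widely deployed mitigation (in operating systems and scrubbing appliances alike) is SYN cookies: instead of allocating connection state for every incoming SYN, the server encodes what it needs into the sequence number it sends back, and only commits resources once a valid ACK returns. Below is a simplified Python sketch of the idea; real implementations live inside the TCP stack, and the secret and time granularity here are hypothetical:

```python
import hashlib
import time

SECRET = b"per-server-secret"  # hypothetical; real stacks use kernel entropy

def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    """Derive a 32-bit initial sequence number from the connection
    4-tuple and a coarse timestamp, so no per-SYN state is stored."""
    if t is None:
        t = int(time.time()) // 64  # coarse 64-second validity window
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, echoed_seq, t):
    """A returning ACK proves the client is real: recompute the cookie
    and compare it to the sequence number the client echoed back."""
    return syn_cookie(src_ip, src_port, dst_ip, dst_port, t) == echoed_seq
```

Because spoofed SYNs never complete the handshake, a flood then costs the server a hash computation per packet rather than an entry in its connection table.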
Regarding the communication:
- There was a 20-minute delay between our first learning of the attack and reporting it to our customers via Twitter and our status site. That’s unacceptable. We’ll make changes to ensure that it takes no more than 5 minutes to report something like this in the future.
- Although we were successful at posting information to our status site (which is hosted off site), the site received more traffic than ever before, and it too had availability problems. We’ve already upgraded the servers that power the site and we’ll be conducting additional load and availability testing in the coming days.
We will continue to be on high alert in case there is another attack. We have discussed plans with our providers, and we’re initiating new conversations with some of the top security vendors.
Monday was a rough day and we’re incredibly sorry we weren’t more effective at minimizing this interruption. We continue to sincerely appreciate your patience and support. Thank you.
Jan
on 26 Mar 14
s/with some the top security vendors/with some of the top security vendors
Bernd Goldschmidt
on 26 Mar 14
Can you share any details on the attackers and what they requested?
Sebastian
on 26 Mar 14
Great communication during the attack, great post mortem. These things happen and you handled it very well! Also very nice to see you are striving to be even better prepared in the future!
Miha Rekar
on 26 Mar 14
You have nothing to apologize for! You communicated really well via Twitter and Gist, and that, I think, is much more than anyone else would do under similar circumstances.
GaryB
on 26 Mar 14
Well, you have truly arrived.
As the ex-CTO of a bank, and a ‘C’-level executive for financial institutions based on ‘offshore’ islands, I am more familiar than I want to be with DDoS attacks, and with the ‘all hands to the pump’ emergency response they require. It means you have been noticed as a successful organisation worth blackmailing.
Congratulations.
And the way you have handled it on your blog – brilliant.
Shane
on 26 Mar 14
As a very satisfied Basecamp customer I can only speak for myself, that being said… You guys rock!
Susan Fennema
on 26 Mar 14
You guys rocked it Monday. These things cannot be helped, but your reaction to the situation was truly appreciated. Nice job.
Douglas III
on 27 Mar 14
This is why I love Basecamp and 37Signals. It’s not that problems don’t happen; they do. Rather, I love the transparency of the team, the responsiveness of the actions, and the plans for how to minimize it in the future.
Keep up the great work, guys!
- D3
Marko
on 27 Mar 14
I didn’t even notice it happened, and we use Basecamp pretty heavily at work…
This kind of communication with customers is very nice and respectful.
Keep up the good work!
Todd T.
on 28 Mar 14
Bummer you had to go through the outage, but there’s also good coming from it: your infrastructure is improving, you’ve had to work together in adversity and bond as an organization, people saw the need and value your service provides, and you’re working to make it all better.
Kudos to you and your staff like always, David. Appreciated the Twitter updates in the midst as well as blog posting to keep us informed. Hope they catch the culprits!
Ned
on 31 Mar 14
I’d be interested in hearing what Basecamp’s (or readers’) workflow is for large production issues.
I work for a large technology company and we have many ‘all hands on deck’ problems. Currently our solution is to send a mass email and have everyone (sometimes 50+) join a conference call and try to talk the issue out. Often the issue will run several hours, so different people join and leave the call, and there are constant interruptions to catch people up on the status and what troubleshooting has already been done.
I have to believe there is a better tool/method for working out these large issues.
Michael
on 31 Mar 14
Ned, maybe Basecamp? ;(
Alex Jones
on 01 Apr 14
The best way to save yourself from DDoS attacks is to have a single always-on filter in place; any bogus traffic or an unprecedented rise in traffic raises the alarm bells. My own website Tutorialspark was subject to that kind of attack.
This discussion is closed.