A few of us recently attended Velocity Conference in San Jose, CA. In the “hallway track” sessions, a number of people asked about the hardware that powers Basecamp, Campfire, and Highrise.
Application Servers
All of our Ruby/Rails application roles run on Dell PowerEdge C5220 servers. We chose the C5220 because it packs high-density, high-performance compute sleds into a chassis at a reasonable cost. The C5220 sleds replaced individual Dell R710 servers, which consumed more power and rack space while offering expandability we weren’t using.
We use an 8-sled configuration with E3-1270 3.40GHz processors, 32 or 16G of RAM, an LSI RAID card, and 2 non-Dell SSDs per sled. (For those of you thinking of ordering these … get the LSI RAID card. The built-in Intel RAID is unreliable.) Each chassis with 8 sleds takes up 4U of rack space: 3 for the chassis and 1 for cabling.
Job / Utility Servers
We use a combination of C6100 and C6220 servers to power our utility/jobs and API roles. We exclusively use the 4-sled version (of each), which means we get 4 “servers” in 2U. Each sled has 2x X5650 processors, 48–96G of RAM, 2–6 SSDs, and 4×1G or 1×10G network interfaces. This design allows us to have up to 24 disks in a single chassis while consuming the same space as a single R710 server (which holds 8 disks max).
Search
For Solr we run R710s filled with SSDs. Each instance varies, but a common configuration is 2x E5530 processors, 48G of RAM, 4–8 SSDs, and 4×1G network interfaces. For Elasticsearch we run a mix of PowerEdge 2950 servers and C5220 sleds with 12–16G of RAM and 2×400G SSDs in a RAID 1.
Database and Memcache/Redis Servers
For database roles we use R710s with 2x X5670 processors, 1.2TB Fusion-io Duo cards, and varying amounts of memory (depending on the database size). We also have a number of older R710s powering Memcache and Redis instances. Each of these has 2x E5530 processors and 2–4 disks with 4×1G network interfaces.
Storage
We have around 400TB across 9 nodes of Isilon 36NL and 72NL storage. We serve all user-uploaded content off this storage, with backups to S3.
OS Choice
Database servers run RHEL or CentOS 6, while application and utility servers run Ubuntu LTS.
MI
on 10 Jul 12
What do you see in terms of typical power consumption on the C5220 and C6220 nodes?
Rafael
on 10 Jul 12
Why not use the same OS everywhere? Since they’re just flavors of Linux, the difference is negligible.
Kyle West
on 10 Jul 12
Where is all this and how much space does it take up? Have any pics? Downside of cloud/managed hosting is we can’t fulfill our hardware porn needs ;)
Nic
on 10 Jul 12
Yes, why 2 different flavors of Linux?
mike
on 10 Jul 12
How many servers do you have in total? And, if you can say, how much load do you manage (like rpm/ppm New Relic data)?
Anon
on 10 Jul 12
It would be good to see this information in more detail, for people like me who have a poor hardware background and would like to improve; not sure how confidential it is, though.
Farsi dictionary
on 11 Jul 12
That’s amazing. How many servers do you have? How many database servers?
Phil
on 11 Jul 12
Coming from a country that has experienced a lot of earthquakes recently… What happens if the building this is all housed in falls over? Will Basecamp still work?
Rafael Rosa
on 11 Jul 12
Hi,
I remember an old post [1] where you guys mentioned that you used a MySQL appliance; it looks like you’re not using it anymore. Could you comment on why you switched?
Thanks.
[1] http://37signals.com/svn/posts/2479-nuts-bolts-database-servers
Ed
on 11 Jul 12
Thanks for this insight. I’m interested to know why you guys chose to host your own machines rather than use cloud services. Is there any reason for doing it yourselves rather than, say, AWS?
Charles
on 11 Jul 12
What do you use for load balancing?
Will J
on 11 Jul 12
@Ed: Price and performance, mostly
@Charles: HAProxy
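For readers unfamiliar with HAProxy, a minimal configuration along the lines of what a Rails shop might run could look like the fragment below. This is a generic sketch, not 37signals’ actual configuration; the backend name, health-check path, hostnames, and ports are all illustrative assumptions.

```
# Minimal HAProxy sketch: round-robin HTTP balancing across two
# application servers, with active health checks. All names, addresses,
# and ports here are hypothetical.
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend rails_apps

backend rails_apps
    balance roundrobin
    option httpchk GET /up          # hypothetical health-check endpoint
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```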
Ryan
on 11 Jul 12
@Will J would you ever consider using Nginx as an LB? Is there something specifically that HAProxy does for you that Nginx can’t?
DaveB
on 12 Jul 12
Yeah, very interested in why you use different flavours of Linux on the different servers. What’s the gain?
Tom
on 12 Jul 12
You mention SSDs (aside from the Fusion-io cards, which are amazing), which I assume, including the cards, are either RAID 1 or RAID 10’ed together.
Is there a particular model/vendor you’re going for outside the Fusion-io cards, and I assume you’re staying SLC (for the write life)?
John
on 12 Jul 12
How do you manage your hardware inventory? I work for a small company and we currently use a spreadsheet, but the number of servers we have is growing to the point where the sheet is becoming unmanageable. Thanks.
EH
on 13 Jul 12
Props for the hardware. Now I’d like to see someone do the calculations for what this would be in EC2 terms.
Paul
on 16 Jul 12
Regarding S3: did you folks write your own routine to back up the data to S3, or are you using the AWS Import/Export service? We are thinking of doing something like this instead of backing up our data to costly local Data Domain units.
Taylor
on 16 Jul 12
@Rafael @Nic @DaveB – We’ve run into too many bugs on Ubuntu LTS which resulted in downtime. We’ve moved our databases off Ubuntu because of this.
@Ed – To add to what Will said: Control, steady growth without the need for high elasticity.
@Tom – We use various Crucial, Dell, Intel, and Mushkin drives, and probably one or two others I’m forgetting. Always in some type of RAID configuration… usually RAID 1 or RAID 10.
@John – We use our wiki. Not a great solution.
@Paul – It’s built into each application through a common plugin for targeting storage (NFS, S3, local).
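A storage-targeting plugin like the one Taylor describes could be sketched as a small Ruby abstraction with interchangeable backends. This is a hypothetical illustration, not 37signals’ actual plugin; every class and method name here is an assumption, and the S3 backend is only gestured at in a comment to keep the sketch dependency-free.

```ruby
# Hypothetical sketch of a pluggable storage target ("nfs, s3, local"),
# loosely modeled on the common-plugin idea described above.
# All names are illustrative, not 37signals' real code.
require "fileutils"

class StorageTarget
  def store(key, data)
    raise NotImplementedError
  end

  def fetch(key)
    raise NotImplementedError
  end
end

# Local disk backend; pointing root at an NFS mount covers the NFS case too.
class LocalTarget < StorageTarget
  def initialize(root)
    @root = root
    FileUtils.mkdir_p(root)
  end

  def store(key, data)
    File.binwrite(File.join(@root, key), data)
  end

  def fetch(key)
    File.binread(File.join(@root, key))
  end
end

# An S3 backend would wrap the AWS SDK the same way, e.g.
#   s3.put_object(bucket: bucket, key: key, body: data)
# (omitted here so the sketch runs without external gems).

# Each application picks a target by configuration:
TARGETS = { "local" => -> { LocalTarget.new("/tmp/uploads") } }

target = TARGETS.fetch("local").call
target.store("avatar.png", "bytes")
puts target.fetch("avatar.png")
```

The point of the indirection is that application code only ever calls `store`/`fetch`, so backups to S3 (or a move between backends) become a configuration change rather than an application change.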