From the very start, we wanted Basecamp Next to be fast. Really, really fast. To get there we built a Russian-doll architecture of nested caching that I’ll write up in detail soon. But for now I just wanted to share where all this caching is going to live, as we just installed it at the hosting center.
It kinda reminds me of what pictures of a drug raid look like when they lay out all the coke and cash on the table, but this is what 864GB of RAM looks like:
Cost of the loot was $12,000.
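The nesting works roughly like the sketch below: fragments are keyed on each record’s updated_at, inner fragments are cached independently of the outer ones, and touching a record recycles only the dolls that wrap it. This is a minimal illustration with hypothetical Project/Todolist/Todo models, not Basecamp’s actual code.

    # Minimal sketch of nested ("Russian-doll") caching with Rails.cache.
    # Hypothetical Project/Todolist/Todo models; not Basecamp's actual code.
    class Todolist < ActiveRecord::Base
      belongs_to :project, touch: true # editing a list bumps the project's updated_at
      has_many :todos
    end

    def project_fragment(project)
      # Outer doll: the cache key includes project.updated_at (via cache_key),
      # so a touched project recycles this fragment with no manual sweeping.
      Rails.cache.fetch([project, "v1"]) do
        lists = project.todolists.map { |list| todolist_fragment(list) }.join
        "<section><h1>#{project.name}</h1>#{lists}</section>"
      end
    end

    def todolist_fragment(list)
      # Inner doll: cached under its own key, so unchanged lists are reused
      # even when the outer project fragment has to be rebuilt.
      Rails.cache.fetch([list, "v1"]) do
        "<ul>#{list.todos.map { |todo| "<li>#{todo.title}</li>" }.join}</ul>"
      end
    end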
Bruno Bernardino
on 27 Jan 12
o.O
Christophe
on 27 Jan 12
Okay, that’s some serious stuff.
Nikolaos Dimopoulos
on 27 Jan 12
Gone are the days when this was the uber computer (16K RAM)
http://www.futurebots.com/altair.jpg
:D :D
Congratulations guys!
Michael Deering
on 27 Jan 12
The one in the second row, 5 in from the right, is faulty. Just saying…
stJhimy
on 27 Jan 12
So badass
Libo
on 27 Jan 12
What about some Fusion-IO or equivalent for storage?
Brian Kenny
on 27 Jan 12
Across how many servers will this be deployed?
Will Jessop
on 27 Jan 12
@Libo: We use FusionIO in other places, but it doesn’t make cost sense to use FusionIO for a cache that works well in memory, which is what these sticks are being used for.
@Brian Kenny: Three initially.
Brandon Edling
on 27 Jan 12
You couldn’t make it to a terabyte? Really? :)
Wowsers
on 27 Jan 12
How do 54 sticks of RAM add up to 840GB evenly?
Dave Cahill
on 27 Jan 12
Damn, that’s a serious collection of chips. What kind of gear is it being packed into, dare I ask?
John
on 27 Jan 12
Libo: We really like the Fusion-IO cards we’re running in our database servers, but in this case it was more cost-effective to use memory since we’re spreading this across 3 servers.
Jermaine
on 27 Jan 12
From running on 1 server at ~$199/m (in year one) to this! Sweet stuff, congrats!
Robin
on 27 Jan 12
That’s actually 864GB of RAM on the table there. Your $12,000 price tag is a bit steep even if it was from one separate purchase for each module (no volume discount), so I wonder… what memory modules are those? The cheapest 16GB DIMMs (non-ECC etc.) I can buy here are just under $150 a piece, resulting in just over $8,000 for 864GB of RAM.
Michael
on 27 Jan 12
That is badass.
Will Jessop
on 27 Jan 12
@Wowsers, @Robin: well spotted, post updated.
Eoin
on 27 Jan 12
Is it going in a RAM disk array!?
John
on 27 Jan 12
Robin: The RAM is PC3-10600 dual-ranked ECC.
Michael Halpin
on 27 Jan 12
Ever think of just pushing the hosting to the cloud?
Andrew
on 27 Jan 12
@Robin, when you buy 864GB of RAM, you’re going to get the ECC (and probably also registered) stuff. I can’t think of any sane reason you wouldn’t, especially for server workloads.
Registered ECC DDR3 16GB DIMMs from Crucial, with no volume discount, come close to $400 each. $12k is reasonable.
Jigar
on 27 Jan 12
Just checked AWS ElastiCache’s pricing.
It would cost around $20k/month for ~800 GB of memory. (12 × Quadruple Extra Large nodes / 68 GB)
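(Worked through from those numbers: 12 nodes × 68 GB = 816 GB, so $20,000/month comes to roughly $24.50 per GB per month, recurring, while the $12,000 DIMM purchase works out to about $14 per GB, paid once.)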
Will Jessop
on 27 Jan 12
@Michael Halpin: We don’t need that flexibility at this point. We get better performance for less money on our own bare metal.
Vance Lucas
on 27 Jan 12
Hello, 2016! This is what a measly sum of RAM looked like way back in the ancient computing days of 2011. I know by now you have 1TB in a single stick, but don’t poke too much fun looking back on us now. After all, 864GB is more than anyone will ever need.
Michael Halpin
on 27 Jan 12
Will Jessop, Jigar: wow, just saw http://twitter.com/#!/nphase doing the sums on EC2: $17k per month! Well, that puts it into perspective!
Russell Smith
on 27 Jan 12
@will / @michael it would be more than $20K/mo after you take into account inter-zone b/w (assuming you went that way).
How many boxes is this going to?
Will Jessop
on 27 Jan 12
@Russell Smith: Three
JuanM LF
on 27 Jan 12
Which makes me think… how do you calculate how much RAM you need to run a Rails app? I haven’t been able to find any info on that.
I guess (obviously) it depends on the application (and concurrency), but are there some sort of guidelines for knowing how many resources I should get for an app?
Any help here would be appreciated!
Des
on 27 Jan 12
If it actually cost $12K you should be reporting the “Street Value” as somewhere in the region of half a million dollars.
nas
on 27 Jan 12
A properly scaled app in the cloud would not require that much memory, so comparing prices isn’t accurate. Also, factor in the hidden costs of maintaining physical hardware and the cloud becomes much cheaper. Did I mention the on-demand nature and convenience of the cloud? I’ll take instant auto-scaling over waiting for memory to arrive through physical mail any day.
Marcus
on 27 Jan 12
Wish I had that much RAM in my ESXi server at home. :)
Curt
on 27 Jan 12
Nas: Yep, the cloud magically removes the need for memory. What was 37signals thinking.
Allan
on 27 Jan 12
I hope you weren’t walking around on a carpet before you laid these out!
Cory
on 27 Jan 12
Slow clap.
DHH
on 27 Jan 12
Juan, all this RAM is for Memcache servers. It’s not related to Rails at all.
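(For illustration only: pointing a Rails app at a pool of memcached boxes is roughly a one-liner. This is a minimal sketch with hypothetical hostnames, not 37signals’ actual config.)

    # config/environments/production.rb: minimal sketch, hypothetical hosts.
    # Rails hashes each key across the pool, so the three servers behave
    # as one large cache.
    config.cache_store = :mem_cache_store,
      "cache1.example.internal:11211",
      "cache2.example.internal:11211",
      "cache3.example.internal:11211"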
RDO
on 27 Jan 12
You guys still do your own hardware infrastructure?! Cute.
Will Jessop
on 27 Jan 12
@RDO: We used to have stuff “in the cloud” but at this stage hosting it ourselves on bare metal makes more sense.
@nas: If you can turn your secret sauce for reducing the memory footprint of an app when moving it to the cloud into a product, we’ll take a look.
Levi Figueira
on 27 Jan 12
Why are people mixing RAM usage with caching? From the post I take it that the point of all this RAM isn’t to cover for Rails’ high mem usage but to cache directly in RAM… /cc @nas
Did I miss something?
Nick
on 27 Jan 12
@nas: Cool, let me know when you have written something up on that so I can pass it to a couple of my co-workers who are dealing with scaling and multi-gigabyte database indexes in RAM on EC2.
Morley
on 27 Jan 12
Cloud computing is to sysadmins today what viral marketing was to ad agencies five years ago.
Harry
on 27 Jan 12
Ha, you bought RAM.
Linker3000
on 27 Jan 12
Nicely laid out at an antistatic workstation by someone wearing appropriate ESD protection???
JuanM LF
on 27 Jan 12
DHH: thanks for the clarification…
Nevertheless, my question is still unanswered… I’ve read many blogs that say “ruby processes are expensive” and similar things, but I haven’t been able to come across real, down-to-numbers requirements.
Maybe I’m just lacking experience in this, but I really need some insight since I’m planning on buying a server to host some Rails instances… thanks
jp mcgrady
on 27 Jan 12
@JuanM LF
http://guides.rubyonrails.org/performance_testing.html
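(For a rough starting point, one common approach, an assumption on my part rather than anything from that guide, is to measure a warm app process’s resident memory and multiply by the number of workers you plan to run:)

    # Rough RAM sizing: check a warm worker's resident set size (RSS),
    # then multiply by your planned worker count.
    rss_kb = `ps -o rss= -p #{Process.pid}`.to_i # KB on Linux/OS X
    puts "~#{rss_kb / 1024} MB resident for this process"
    # e.g. 150 MB/worker x 8 workers ~= 1.2 GB, plus headroom for the OS
    # page cache, the database, and growth.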
Phil
on 27 Jan 12
If speed is at the top of your list, are you going to use JRuby?
Barry M
on 27 Jan 12
Can’t get it off the disk fast enough? SSD?
nas
on 27 Jan 12
My point was that you can scale in the cloud with high-memory instances. They are going to spread the memory across 3 physical machines anyway.
@Will do you guys use your own virtualization or are you deploying to these machines directly? People can do whatever they are comfortable with. I prefer the cloud because it forces me to architect my app expecting massive growth and potential failure, and gives me the tools to scale when they happen.
@Will how does 37signals plan to handle high traffic or a failure? Do I have to wait until you guys go buy another machine? These are important questions if I’m hosting my business data on your servers. Do you have data centers in multiple regions?
Mike
on 27 Jan 12
Amazon has an app for that:
http://aws.amazon.com/elasticache/
Neil N
on 27 Jan 12
@nas, I doubt Basecamp will ever have a sudden need to scale. It’s not like a project will suddenly go viral the way a news article or YouTube video could. In an app that requires accounts to be set up and users to log into a paid account, it is unlikely they will see a traffic spike they aren’t set up to handle.
Justice
on 27 Jan 12
@Neil: Well said. How many times have I heard developer types say how wonderful the cloud is without really looking at the cost-to-benefit tradeoff. If you don’t need to scale massively and suddenly (these kinds of websites still exist in great numbers… lots of stable traffic), the cloud will be more expensive for you than running a few of your own boxes.
If you get into unreliable traffic situations where you might go from 1k to 10k requests per second during the day, the cloud is probably right for you.
Will Jessop
on 27 Jan 12
@nas: We have a few VMware nodes, but only for some small utility servers. Everything else is bare metal.
Our traffic patterns are predictable, and we wouldn’t save as much money turning cloud servers off during low-traffic times as we do through the lower overall cost of our own bare metal, even though it’s racked and running 24/7.
Given our predictability of demand it makes a lot of sense for us to run our own hardware, as cloud providers cost more than bare metal and the extreme flexibility they provide isn’t required at this level (see https://twitter.com/#!/nphase/status/162920487431835648 for an idea of the cost of this always-on memcached setup in EC2).
We handle failure by having redundancy in all our systems. All databases have replicas, and we have excess capacity and spare servers. We don’t have geographic redundancy yet (it’s being worked on), though we do have diverse lines out of the datacentre.
Justice
on 27 Jan 12
@Will: And you guys are certainly in the same boat I/my company is in: plenty of stable traffic for which the cloud ends up being a cost burden. The only thing I wish we had direct access to is what Rackspace is doing right now: letting you rent physical servers for your stable traffic and giving you a VLAN link to their Rackspace cloud system for geo-redundancy-type setups, etc. That is cool, and something I wish more data centers offered.
Will Jessop
on 27 Jan 12
@Justice: I’ve toyed with the idea of using Amazon VPC for when we need throw-away servers (http://aws.amazon.com/vpc/). Not done any work on that yet, though.
Bill
on 27 Jan 12
Wow, at least they’re 16GB DIMMs. Replace them with 32GB modules and then report the price ;)
nas
on 27 Jan 12
@Will thanks for the explanation. A mix of local virtual, local raw, data centers, and cloud sounds pretty cool. Everyone has different needs and can mix and match as necessary. Thanks!
Rob Bergin
on 27 Jan 12
And remember that Fusion-IO accelerates reads from disk; it’s not nearly as good for writes. The RAM will accelerate both (memcached with a lot of RAM vs. Fusion-IO). I’d bet on memcached and the RAM.
Cisco’s UCS blades, with 48 DIMMs on the Gen1 and 32 DIMMs on the Gen2, have the most DIMM density.
Hansel Dunlop
on 28 Jan 12
Cloud providers seem great for startups that have no idea about their eventual requirements. But when you do know what you need, the price and reliability of running your own hardware are hard to beat.
Peter
on 28 Jan 12
A car is more expensive… if you give employees cars, why not give your company this virtual-memory car to drive your business? The price is not high.
JP
on 28 Jan 12
That’s a lot of RAM.
Jon
on 28 Jan 12
Couldn’t you just buy something like those OCZ PCI Express SSD drives? A 1.2TB model would cost only a quarter of what you spent on RAM.
Pixy Misa
on 28 Jan 12
@Jon – that memory, across three servers, will deliver 60x the throughput of a top-of-the-line PCI-e SSD card.
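(One plausible way to get a multiplier in that ballpark, assuming DDR3-1333 like the PC3-10600 sticks mentioned above: each memory channel moves roughly 10.6 GB/s, so a dual-socket box with three channels per socket has on the order of 30+ GB/s of memory bandwidth, and three such boxes together manage roughly 90-100 GB/s, versus the ~1.5 GB/s a top PCIe flash card of the era could sustain. That ratio is about 60x.)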
danneu
on 28 Jan 12
More importantly, I wonder how many cat.jpgs it’d take to fill that.
kc
on 28 Jan 12
Nikolaos Dimopoulos: Altair was never the ubercomputer. The Alto and PDPs already existed.
Hudey
on 29 Jan 12
Do you guys need any help in your data center? I’m jobless, and this is the kind of shit that makes me feel like I’m climbing the rope in gym class!
Tony
on 30 Jan 12
Good price, ~$14/GB on the RAM. If high-perf SSD is less than half that, it should ideally be a better value per IO, especially a high-performing PCIe solution that can do 100Ks of IOPS, both read and write. You can’t achieve that value?
Osman
on 31 Jan 12
Is that much RAM being used because of RoR?
Merle
on 31 Jan 12
@Osman Yeah, I think it is. Holy Shnikies
Klaus Wuestefeld
on 01 Feb 12
Hey, you guys can even do System Prevalence with all that RAM.
Would all your data fit in 864GB?
Marcos
on 01 Feb 12
@Osman @Merle As other commenters have mentioned, the post says this RAM is for their “nested caching”, not for running Ruby or Rails processes.
@Justice We implemented RackspaceConnect and it turned out to be a horrible fit. A few hidden performance issues (fixed bandwidth between physical and cloud computers, and underperformance of the cloud computers) kept us from using our cloud servers the way we needed. Now we’re back on physical servers, with cloud servers available for emergency scaling. It’s a very cool idea, but do plenty of homework first to make sure it’s well suited to your use case.