The majority of Ajax operations in Basecamp are handled with Server-generated JavaScript Responses (SJR). It works like this:
- Form is submitted via an XMLHttpRequest-powered form.
- Server creates or updates a model object.
- Server generates a JavaScript response that includes the updated HTML template for the model.
- Client evaluates the JavaScript returned by the server, which then updates the DOM.
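Steps 3 and 4 can be sketched outside Rails with the stdlib ERB library. The Message struct, partial, and selector below are hypothetical stand-ins for the real app's pieces, not actual Basecamp code:

```ruby
require "erb"
require "json"

# Hypothetical stand-in for the model object the server creates.
Message = Struct.new(:id, :body)

# The one template that represents a message (think messages/_message.html.erb).
PARTIAL = ERB.new(%(<div id="message_<%= m.id %>"><%= m.body %></div>))

# Step 3: the server wraps the rendered HTML partial in a JavaScript response.
def js_response(message)
  html = PARTIAL.result_with_hash(m: message)
  "$('#messages').prepend(#{html.to_json});"
end

# Step 4: the client evaluates this string, which updates the DOM.
puts js_response(Message.new(1, "Hello"))
```

Note that the HTML is JSON-escaped before being embedded, so quotes in the message body can't break out of the JavaScript string.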
This simple pattern has a number of key benefits.
Benefit #1: Reuse templates without sacrificing performance
You get to reuse the template that represents the model for both first-render and subsequent updates. In Rails, you’d have a partial like messages/message that’s used for both cases.
If you only returned JSON, you’d have to implement your templates for showing that message twice (once for first-response on the server, once for subsequent-updates on the client) — unless you’re doing a single-page JavaScript app where even the first response is done with JSON/client-side generation.
That latter model can be quite slow, since you won’t be able to display anything until your entire JavaScript library has loaded and the templates have been generated client-side. (This is the model Twitter originally tried and then backed out of.) Still, it’s a reasonable choice for certain situations and doesn’t require template duplication.
Benefit #2: Less computational power needed on the client
While the JavaScript with the embedded HTML template might result in a response that’s marginally larger than the same response in JSON (although that’s usually negligible when you compress with gzip), it doesn’t require much client-side computation to update.
This means it might well be faster from an end-to-end perspective to send JavaScript+HTML than JSON with client-side templates, depending on the complexity of those templates and the computational power of the client. This is doubly so because the server-generated templates can often be cached and shared amongst many users (see Russian Doll caching).
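The caching point can be sketched with a plain-Ruby memo standing in for the Rails cache store. The cache key mirrors what Rails derives from a record's id and updated_at; all names here are hypothetical:

```ruby
require "erb"

Message = Struct.new(:id, :body, :updated_at)

PARTIAL = ERB.new(%(<div id="message_<%= m.id %>"><%= m.body %></div>))

FRAGMENTS    = {}           # stand-in for the Rails fragment cache
RENDER_COUNT = Hash.new(0)  # instrumentation, to show the cache working

# Render a message's fragment, reusing the cached copy until the
# record's updated_at changes -- so one render can serve many users.
def cached_fragment(m)
  FRAGMENTS[[m.id, m.updated_at]] ||= begin
    RENDER_COUNT[m.id] += 1
    PARTIAL.result_with_hash(m: m)
  end
end

m = Message.new(1, "Hello", 100)
cached_fragment(m)
cached_fragment(m)  # second call is a cache hit; the template runs once
```

A JSON-plus-client-templates setup can cache too, but then every client still pays the templating cost; here the finished HTML itself is what's shared.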
Benefit #3: Easy-to-follow execution flow
It’s very easy to follow the execution flow with SJR. The request mechanism is standardized with helper logic like form_for @post, remote: true. There’s no need for per-action request logic. The controller then renders the response partial view in exactly the same way it would render a full view; the template is just JavaScript instead of straight HTML.
Complete example
0) First-use of the message template.
<h1>All messages:</h1>
<%# renders messages/_message.html.erb %>
<%= render @messages %>
1) Form submitting via Ajax.
<%= form_for @project.messages.new, remote: true do |form| %>
...
<%= form.submit "Send message" %>
<% end %>
2) Server creates the model object.
class MessagesController < ActionController::Base
  def create
    @message = @project.messages.create!(message_params)

    respond_to do |format|
      format.html { redirect_to @message } # non-JS fallback
      format.js   # just renders messages/create.js.erb
    end
  end
end
3) Server generates a JavaScript response with the HTML embedded.
<%# renders messages/_message.html.erb %>
$('#messages').prepend('<%=j render @message %>');
$('#<%= dom_id @message %>').highlight();
The final step of evaluating the response is handled automatically by the XMLHttpRequest-powered form generated by form_for. The view is thus updated with the new message, which is then highlighted via a JS/CSS animation.
Beyond RJS
When we first started using SJR, we used it together with a transpiler called RJS, which had you write Ruby templates that were then turned into JavaScript. It was a poor man’s version of CoffeeScript (or Opalrb, if you will), and it erroneously turned many people off the SJR pattern.
These days we don’t use RJS any more (the generated responses are usually so simple that the win just wasn’t big enough for the rare cases where you actually do need something more complicated), but we’re as committed as ever to SJR.
This doesn’t mean that there’s no place for generating JSON on the server and views on the client. We do that for the minority case where UI fidelity is very high and lots of view state is maintained, like our calendar. When that route is called for, we use Sam’s excellent Eco template system (think ERB for CoffeeScript).
If your web application is all high-fidelity UI, it’s completely legit to go this route all the way. You’re paying a high price to buy yourself something fancy. No sweat. But if your application is more like Basecamp or Github or the majority of applications on the web that are proud of their document-based roots, then you really should embrace SJR with open arms.
The combination of Russian Doll-caching, Turbolinks, and SJR is an incredibly powerful cocktail for making fast, modern, and beautifully coded web applications. Enjoy!
Dragan Sahpaski
on 10 Dec 13Same approach that we use in our company (we use Java with Tapestry), using JavaScript Ajax callbacks. The approach surprised me a lot, because to me it seemed more convenient (we’re more used to server-side programming) than optimal.
Joe
on 10 Dec 13Hasn’t this basically been the approach used by Rails apps all along (or I guess the move away from RJS is new)?
DHH
on 10 Dec 13Joe, yes, lots of Rails apps use this approach. But there seemed to be much confusion about how this approach related to RJS, and an even more mistaken belief that this approach was either deprecated or undesirable, when it’s most certainly neither. This post is mostly just a reaffirmation.
Joe
on 10 Dec 13Thanks David. Well said.
Maurizio
on 10 Dec 13Isn’t it better to use render :json and manage controller responses through the Ajax responses in the assets? In this way you keep all your client-side code inside the assets, instead of spreading it all around your views.
DHH
on 10 Dec 13Maurizio, no, it’s not better. See article above for arguments.
Chris Oliver
on 10 Dec 13How about security and error handling? I know error handling tended to cause my js.erb responses to become a little more complex than I wanted, and Egor Homakov wrote about the security implications of using RJS-style code: http://homakov.blogspot.com/2013/05/do-not-use-rjs-like-techniques.html
Curious as to how you approach those two aspects of it.
DHH
on 10 Dec 13The security issue has a patch in the pipeline, or you can just check the request with xhr? for now.
For error handling, you return errors as JS code as well, along with doing something generic for network issues.
Simon Starr
on 10 Dec 13This is the technique I usually use but I’m never sure of the best way to handle validation for Ajax forms. Do you have any recommendations?
In the past I’ve often used Brian Cardarella’s client side validations but that’s no longer maintained and, as convenient as it is, it feels a bit like cracking a walnut with a sledgehammer.
Jamie Hill
on 11 Dec 13I’ve been using this approach for a while and love the simplicity and partial reuse; thanks for outlining it in a blog post.
When it comes to situations like the calendar example or anything where a round trip to the server is undesirable, I have a ‘blueprint’ helper that renders a partial for use in a data attribute (on a ‘ul’ for example). The helper essentially replaces all newlines and uses a variant of the null object pattern to populate the partial with placeholder data. With this in place, I can then just grab the partial/blueprint from the data attribute on the client side, replacing the placeholders where necessary (by just finding the elements and setting their text) and appending it to the DOM.
This approach has worked great for me; I’ll outline it properly in a blog post when I get a moment.
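A minimal sketch of that blueprint idea in plain Ruby. The placeholder syntax and helper names are made up for illustration, not Jamie's actual code:

```ruby
# Server side: render the partial once with placeholder data and stash
# it in a data attribute on the container for later client-side use.
blueprint = %(<li id="message_{{id}}">{{body}}</li>)
data_attr = %(<ul id="messages" data-blueprint="#{blueprint.gsub('"', '&quot;')}"></ul>)

# Client side (here simulated in Ruby): clone the blueprint and
# substitute real values for the placeholders before appending it.
def instantiate(blueprint, values)
  values.reduce(blueprint) { |html, (key, val)| html.gsub("{{#{key}}}", val.to_s) }
end

puts instantiate(blueprint, id: 7, body: "Hi")
```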
Chip
on 11 Dec 13Provocative… I actually prefer a JS-MVC, but will concede that there is a ton of duplication: [:templates, :validations, :schema, :model-logic, etc]. Here are my reasons:
From what I’ve seen JS-MVCs tend to be better tested than their DOM hijacking counterparts. Front and backend code is unit tested in isolation, and the (JSON) interface is well defined. I find it easier to follow the code, and understand each MVC’s responsibilities.
Secondly I love the power of a ‘smart-client’. JS-MVCs aren’t just fast but flexible in when/how they access data. Data can be persisted offline (local storage), written in batches, or queued for processing (eventual consistency) – without impacting the user’s experience.
Lastly… let the user pay for computer cycles! JS-MVCs are distributed systems that (whenever possible) offload the storing, processing, and presentation of data. These systems scale well (consider the combined capacity of your users’ laptops), and shift operating cost to the end-user.
That said... Both patterns have their use, and SJR is often the better fit. DHH - Thanks for the reminder!
Norman
on 11 Dec 13We use this approach at my company, and for the most part I’m quite happy with it. At times it can be a bit difficult to debug errors in the generated JavaScript, though. I’d be curious to know what you’re using at 37signals to test code that follows this pattern, other than the regular test/unit integration that comes with Rails.
Sean Griffin
on 11 Dec 13While there’s certainly something really nice about the simplicity of just adding `remote: true` to a link or form, and having magic happen, I feel that it can quickly lead to madness. Specifically as soon as there’s ever a conditional in the logic, things can get ugly. It’s impossible to unit test these in any meaningful fashion. Having to write an expensive JS acceptance test for minor conditionals in the AJAX response just feels so wrong, and quickly slows the test suite to a crawl.
homakov
on 11 Dec 13The technique has its benefits if used securely, with leaks mitigated. Which you kinda didn’t mention, as if there was nothing wrong with it, hah.
Matt De Leon
on 11 Dec 13I think it’s worth noting that this technique requires a certain discipline in design. Without the constraint of SJR (i.e. using JSON responses and client-side templates), you can easily find yourself designing a page with all sorts of updates after a form is submitted. In my opinion, the constraint on design is worth the ease of implementation and maintenance.
k
on 11 Dec 13Unfortunately this technique is insecure by default. I would not advise it unless you want to add a check for request.xhr? to all of your controller actions (something not mentioned in the post). And then you can’t include your own view JavaScripts as
Your points are spot on, however. Server-side templates are a major sacrifice you must make when you want a complex client app. You do lose an easy-to-follow logic flow. I see this technique as being very useful for lightly-dynamic web applications – the loss of closures is staggering; that’s pretty much a JS dev’s only tool. Some people compromise and return HTML templates as JSON, but that’s pretty ugly too (but at least it’s secure!). In the end, if you want complex, reusable client-side behavior, the only choice is a client-heavy stack.
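The "HTML templates as JSON" compromise mentioned in the comment above can be sketched in a few lines of plain Ruby; the key name and selector are made up:

```ruby
require "json"

# Server side: still render the fragment on the server, but ship it as
# data rather than as executable script, so nothing in the response is
# eval'd by the browser.
payload = JSON.generate(html: %(<div id="message_1">Hello</div>))

# Client side would then decide where the fragment goes, e.g.:
#   $('#messages').prepend(JSON.parse(body).html)
puts payload
```

This keeps the template-reuse benefit of SJR while closing the door on cross-origin script inclusion, at the cost of per-endpoint insertion logic on the client.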
kadaj
on 11 Dec 13If you are using, say, dust.js for templating, the dust.js compiler can render a template with the necessary data either on the client side or on the server, which will then produce JavaScript. Same approach here, with a different technology stack. These days, why would anyone render everything on the server when the client is powerful enough? Isn’t that simply more load on the server that could be offloaded to the client, unless you have some valuable algorithms you don’t want others to see?
Mitja
on 11 Dec 13You can easily use the same templates on the server and client side. Dust.js, Google Closure Templates, and Mustache are just some examples. The server, playing a REST API role in this story, can also be easily tested, and provides the same gateway you can expose to your customers (limited by authentication). It always depends on your needs, but we should accept that single-page apps are here to stay and dominate!
PHT
on 11 Dec 13Slightly related question: do you know of anyone who managed to transition from client-side rendering to server-side rendering without requiring a big-bang rewrite? I’m wondering what the strategy would be…
Feng
on 11 Dec 13I’m wondering how we could handle cases where multiple front-end HTML regions need to be updated with one single Ajax call. Should this be handled on the server side (like Quora’s webnode/livenode) or on the client side?
Rodrigo Rosenfeld Rosas
on 11 Dec 13Where did my comment go?
Jamie Hill
on 11 Dec 13Have to say, I’m a little concerned about the security thing… when is the patch due, and how do you make use of request.xhr? to make things safe for now?
Mgk
on 11 Dec 13I’m very new to Rails, still reading and trying to learn it. I really liked this approach. I have a question about your example: when the Message object has a validation error, how do you handle it? Do you send back errors as JSON (and process them on the client), or use SJR to display error messages on the form elements? Btw, thank you for this awesome framework, David!
Ken
on 12 Dec 13Simplicity & Efficiency
Please keep Rails the most efficient web development tool; other tools will be compromised.
Tom Davies
on 12 Dec 13@Mgk – Validation errors would be rendered as HTML and injected into the DOM the same way as in the render @message example. For example, you can just re-render the entire form, including the validation errors/messages.
@Jamie Hill – I believe you just need to add an “if request.xhr?” check in your controller before rendering the .js response for now to protect yourself. The exploit is not able to set the XHR headers, so this protects your .js endpoint from being called outside of your code.
JH
on 13 Dec 13@Jamie Hill: Here’s an example of the code we’re using in Basecamp to prevent cross-origin JavaScript requests: https://gist.github.com/javan/7725255
Pouya Arvandian
on 13 Dec 13I embraced SJR with open arms and never regretted it. I particularly appreciate the point David made about the “easy to follow execution flow”. Using this approach, you follow the same conventional MVC flow; it’s just that instead of spitting out HTML, you return JavaScript. In my experience, this results in a much happier and more productive developer and much more maintainable code, and is less distracting (you are not making a major context switch, especially now that CoffeeScript is the default in Rails). Whereas [premature] usage of client-side MVC means that you end up with an MVC on the backend and an MVC on the front (therefore, complexity = MVC^2). And the more complexity is introduced, the harder it gets to kill bugs, maintain code, etc.
So, embrace this approach, folks! :)
Maksim Chemerisuk
on 14 Dec 13I’m seeing more and more posts about server-side HTML generation, and this is good :) Actually, we came to it as well.
@David I’d like to add several advantages to your list. In real apps with MVC on the client, it’s often necessary to create “batch” responses for complex pages to decrease the number of requests. Splitting such a huge JSON response into separate data pieces for different models can be quite a complex task. With server-side generation you do not have that problem. You can make the app more secure as well, because you have trusted access to all security services on the backend.
@Feng what I use in better-ajaxify is marking updatable containers with a special data-ajaxify attribute. I guess some variation of that approach could be used.
@Mgk for an invalid form I returned JSON with errors on one of my projects, and used it to display messages on the client side. It works nicely on full-Ajax websites.
Matt De Leon
on 16 Dec 13@Feng @Maksim I think you might be missing part of the point by using a solution like data-ajaxify. If you have a page that needs to update several DOM containers on form submission, you might consider rethinking the design. Not saying that every page must have only one updatable part, but it’s a potential design “smell” if there are more.
You’ll be happy if you find simple designs that require simple server-side updates.