Using URLs instead of ID references in your APIs is a nice idea. You should do that. It makes it marginally more convenient when writing a client wrapper because you don’t have to embed URL templates. So you can do client.get(response[:person][:url]) instead of client.get("/people/#{response[:person][:id]}"). But that’s about it.
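To make the contrast concrete, here’s a minimal sketch; the example.com host, the get_json helper, and the response shape are all made up for illustration:

    require "net/http"
    require "json"
    require "uri"

    # Hypothetical helper wrapping a JSON GET against a made-up host.
    def get_json(path)
      JSON.parse(Net::HTTP.get(URI("https://example.com" + path)), symbolize_names: true)
    end

    response = get_json("/people/1")

    # ID-based: the URL template lives in the client.
    person = get_json("/people/#{response[:person][:id]}")

    # URL-based: the server hands back a ready-made URL (here a path).
    person = get_json(response[:person][:url])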
The recurrent hoopla over hypermedia APIs is completely overblown. Embedding URLs instead of IDs is not going to guard you from breakage, it’s not going to do anything materially useful for standardizing API clients, and it doesn’t do much for discoverability.
Preventing breakage
According to hypermedia lore, you will be able to willy-nilly change your URLs without needing to update any clients. But that’s based on the huge assumption that every API call is going to go through the front door every time and navigate its way to the resource it needs. That’s just not how things work.
If I want to request a message off a project in Basecamp, I would have to do something like this: GET /projects, GET /projects/1, GET /projects/1/messages, GET /projects/1/messages/2. That’s great for the first fumbling-in-the-dark discovery, but it doesn’t work as soon as I bookmark that last URL because I want to send comments to it later.
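Spelled out with the get_json sketch from above (the link attribute names in the responses are again assumptions):

    projects = get_json("/projects")
    project  = get_json(projects[:projects].first[:url])  # GET /projects/1
    messages = get_json(project[:messages_url])           # GET /projects/1/messages
    message  = get_json(messages[:messages].first[:url])  # GET /projects/1/messages/2

    # The moment that last URL is stored for later use, the client is
    # coupled to it, exactly like a browser bookmark:
    bookmark = message[:url]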
Just like bookmarks in the browser break if you change the URL, so will any client that’s stored a URL for later use.
Because breaking URLs is such a bad idea, people tend not to do it. If you look at the successful APIs on the web, they’ve stayed remarkably stable because that’s the best way to prevent breakage. Like the W3C says: Cool URIs don’t change. Which means this isn’t much of a problem in the wild, and even if it was, hypermedia APIs would still have a big hole: direct links break anyway.
Enabling discoverability
Good API docs explain what all the possible attributes of a resource are. They explain the possible values of those attributes, the options available, and so forth. Thinking that we can meaningfully derive all that by just telling people to GET / and then fumble around to discover all the options on their own just doesn’t gel with me.
How do I know that the data I happen to be stumbling across includes everything that’s possible to do? What if the project I request doesn’t have any documents attached to it? How do I know how to find those and how to add new ones?
Standardizing API clients
The idea that you can write one client to access multiple different APIs in any meaningful way disregards the idea that different apps do different things. Just because there’s a standard way to follow a resource to a sub-resource doesn’t mean that you can just write one generic client that then automatically knows how to work with any API.
Any generic API client will not know what to do with the things it gets. It won’t know the difference between Flickr photos and Basecamp projects. The assistance of being able to follow a link to go from photo to comments and from project to messages is nice, but as explained in the beginning, not exactly earth shattering.
Every single application is still going to need a custom API client. That client needs to know what attributes are available on each resource and what to do with them.
In summary, here’s a low-fi solution to these three problems that doesn’t require a spec or the involvement of the IETF:
- Don’t change your API URLs.
- Document your API.
- Provide a custom client wrapper (you’ll have to write one anyway).
On top of that, be a chap and use URLs instead of IDs because convenience is nice and it doesn’t take too much effort if you’re using something like jbuilder anyway. But don’t go thinking that you’ve magically solved all these problems just because you did.
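For instance, a hedged jbuilder sketch, assuming a vanilla Rails app with a Person model and the standard route helpers:

    # app/views/people/show.json.jbuilder (hypothetical view)
    json.person do
      json.id   @person.id
      json.url  person_url(@person)  # the URL rides along with the ID
      json.name @person.name
    end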
We’ve been down this path of over-standardization of APIs before. It led to the construction of the WS-deathstar. Let’s not repeat the same mistakes twice. Sometimes fewer standards and less ceremony is exactly what’s called for.
Mike Kelly
on 20 Dec 12
JSON doesn’t have links. Establishing some basic conventions for that makes complete sense. Defining those conventions is called a spec. Giving a payload that follows conventions a discernible name also makes sense. Establishing that is called registering a media type identifier.
It makes no sense to keep reinventing the linking wheel in every API. Pretending like a very minimal media type like hal+json is akin to WS-* is incredibly disingenuous and/or stupid.
Establishing a standard media type like hal with a bunch of conventions allows us to build generic tooling that can help with both serving and consuming payloads that contain links.
Being pragmatic is great, but misrepresenting a genuine effort to improve the status quo and improve the API ecosystem in a reasonable, non-complicated fashion as ‘hand-waving’ is kinda dumb, dude.
Paul Cox
on 20 Dec 12
The range of benefits depends on the type of API. We’ve started using hypermedia recently and it enables us to model the state machine of ‘creating an order’, so that our clients don’t have to each model client-wide business logic, e.g. if product x, then an installation appointment is needed… if a link to set installation is present, then you need to set a date; if it isn’t, then you don’t.
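A minimal sketch of the pattern Paul describes, with a made-up payload shape and rel name; the client keys off the link’s presence instead of re-implementing the business rule:

    order = {
      product: "x",
      links: [{ rel: "set-installation", href: "/orders/1/installation" }]
    }

    # Presence-driven logic: only schedule when the server offers the link.
    link = order[:links].find { |l| l[:rel] == "set-installation" }
    puts "need to set an installation date via #{link[:href]}" if link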
DHH
on 20 Dec 12
Paul, how is that any clearer than explicitly listing the state? Making the connection that “if installation url present, then don’t set a date” seems no clearer to me than “if state = installation, then don’t set a date”. Both are business rules you’re going to have to explain in your API docs. Neither are self-explanatory or discoverable.
Glenn Block
on 20 Dec 12
Hypermedia and bookmarks are two different things and not mutually exclusive. You can have a hypermedia API that returns links to resources which can be bookmarked for later use. The point is that the link is discovered by the client rather than the client being hardcoded based on out-of-band knowledge.
I agree with Mike, the comparison to WS deathstar is really weak. Hypermedia is simply saying servers can be designed to return links to clients to figure out where they can go next. It removes that hard coding of urls and the hard coding of state logic in terms of which transitions are valid. That is it.
There is no WSDL, no consortium of vendors, no scheme requirements. It is just links.
DHH
on 20 Dec 12
Glenn, “it’s just links” is exactly my point. Going down the standardization route for something so trivial is what I’m up in arms against. Arguably, the complexity:standardization ratio for this is even worse than it was for WS-* (at least that band of vikings had some serious astronautic ambitions that needed standardization).
Glenn block
on 20 Dec 12
DHH
The server logic can change now, as all the client cares about is whether the link is present, not WHY it is present. There is a big difference. If the logic changes and it is encoded in the client, then every client has to change.
Simon
on 20 Dec 12
Yep, in the end you can’t escape explaining the API to the user. It should be as light as possible, but you can’t avoid a minimum level of info.
Since that minimum level of info is required anyway, you can include the URL <=> ID relationship in it.
Glenn Block
on 20 Dec 12
DHH
By standardization, you mean publishing standardized media types that support links? i.e. JSON Schema, HAL, etc.?
Mike Kelly
on 20 Dec 12
A link is not just a URL. What you are talking about is the link target. A link is actually the context (which can vary), the relation (which can be a URL and contain the documentation), and the target (the URL you are talking about).
Having conventions for this stuff is useful because it means we can write tools against them. Writing these conventions down is a good idea (a spec). Giving the conventions a name so we can talk about them is also a good idea (a media type identifier).
@dhh – please could you explain which parts of the document describing hal’s conventions are “too complex”. It is about as simple as it could possibly get.
Glenn Block
on 20 Dec 12
Simon
Hypermedia APIs don’t preclude documentation; that is a central part. The documentation centers around the link rels and payload, not the URI structure.
Simon
on 20 Dec 12
Why would you duplicate the URL in each ID when you can define it centrally (in a client / documentation)?
URL IDs can be useful, for example inside an enterprise which would define a set of rules for the sake of discoverability.
But in the general context of the Web, I just don’t see the interest.
Simon
on 20 Dec 12
To complement my previous post:
URL IDs can be interesting in a company because there you can define general rules. You can state URL IDs for everyone, and then you get great discoverability for all applications in the company.
This is not possible on the Web. You always need to explain the API to the user.
Since you need to explain your rules to the user, you can define the best rules possible. And the best possible rules will probably not be URL IDs.
Simon Pantzare
on 20 Dec 12
HAL+JSON and similar specs let us build reusable tools to handle things like relations and embedded resources. Every API has these. You could learn HAL in five minutes and save everybody consuming your API 5x that time if they can reuse finished HAL plugins for Backbone or whatnot.
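For reference, a small HAL-style payload looks roughly like this (_links and _embedded are the spec’s reserved properties; the values are made up):

    require "json"

    doc = JSON.parse(<<~HAL)
      {
        "_links": {
          "self":     { "href": "/orders/123" },
          "customer": { "href": "/customers/7" }
        },
        "_embedded": {
          "items": [{ "sku": "ABC", "_links": { "self": { "href": "/items/9" } } }]
        },
        "total": 30.0
      }
    HAL

    # Generic tooling codes against rels, not against URL structure:
    customer_href = doc.dig("_links", "customer", "href")  # => "/customers/7"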
Mike Amundsen
on 20 Dec 12
“hypermedia lore”, “willy-nilly”, “huge assumption”, “stumbling”, “fumbling”, “magically solved”
Phew! Such a rant we have today!
I imagine there is some thought behind this post, but it’s tough to find the pony in this room.
Are you annoyed at the concept of changing URLs? Redirects not working anymore for you? You probably never use them, right?
Are you angry at the IETF, W3C, etc.? HTML simply a waste? HTTP worthless? TCP/IP a bad idea?
Do you actually want all implementations on the Web to be “snowflakes” that require a new bespoke client for every API, every app? Web browsers using HTML a silly idea? Call control hardware using VoiceXML not profitable enough for you?
Oh, yeah – hypermedia. That’s the problem. It’s over-standardizing, isn’t it. Links and Forms are so over-standardized today. And that damn @rel property – woah!
There, how’s that for a shallow, sarcastic rant? Close?
BTW – if you want to have a discussion about any of this, feel free to write me (mamund AT yahoo DOT com). Or post a blog entry that has more questions than claims.
Cheers.
Mark Baker
on 20 Dec 12
“The idea that you can write one client to access multiple different APIs in any meaningful way disregards the idea that different apps do different things”
On the contrary, it’s an axiom of Web architecture; what you get when interface and implementation are properly separated. I can understand how one could be under the impression that there’s no qualitative difference between hypermedia and more typical “APIs”, but once this point is understood the value of a uniform interface becomes clear. It’s a short hop from there to understanding hypermedia’s role as part of that interface.
I’ve written about this before, though in the context of bashing WS-*:
http://www.infoq.com/news/2006/12/separation-of-concerns
[email protected]
on 20 Dec 12
Wow. Very surprising (in a bad way).
Basic Web architecture: 1. expose resources, 2. resources have names (URL), 3. allow basic actions on these (GET, DELETE, PUT, POST) as needed. 4. Include URLs (as links/forms) in representations.
OK, now build your API. Please do NOT start w/the API and work back to basic web architecture. Servers should always provide URLs, NOT the client (by way of snowflakey construction algorithm).
Paul Evans
on 20 Dec 12
If a client bookmarks a URI, and you, as the service, want to move the associated resource, you can have the current URI return a 301, with the new URI in the ‘Location’ header.
If the client is a good client, and honors the semantics of HTTP, the client should be able to update its bookmark for future use.
There is no magic with hypermedia APIs; but they do allow the benefit of looser coupling (the server controls its URI space).
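A sketch of that client behavior; the URL and bookmark shape are made up:

    require "net/http"
    require "uri"

    # Honor a 301 by updating the stored bookmark before retrying.
    def fetch_updating(bookmark)
      res = Net::HTTP.get_response(URI(bookmark[:url]))
      if res.is_a?(Net::HTTPMovedPermanently)
        bookmark[:url] = res["Location"]  # the resource moved; remember its new home
        res = Net::HTTP.get_response(URI(bookmark[:url]))
      end
      res
    end

    bookmark = { url: "https://example.com/projects/1/messages/2" }
    fetch_updating(bookmark)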
One more thing - hypermedia can do more than just encode an endpoint. A hypermedia control can be augmented to contain additional metadata that could be useful to the client. For example, a hypermedia link can encode additional attributes beyond just an href, rel and type; a link can encode any metadata you can dream up that might be useful to the client. You might decide to add a “avgRespTime” attribute to each link. This could be used by the client to draw some progress bar when invoking the link. Or, maybe you attach a “is-degraded” attribute to the link indicating if the feature the link represents is experiencing a degraded-mode, and that clients can use this knowledge to display the GUI with a grayed-out button with a warning message or something (as opposed to the client discovering this after attempting to invoke its hard-coded known URI template).
In my humble opinion, the ESSENCE of hypermedia is a means for conveying rich metadata to clients. The most fundamental piece of metadata is the presence of the link itself in communicating what actions can be taken on the enclosing resource; lack of a link means an action is not allowed. With some imagination, additional metadata can be attached to the hypermedia to provide a richer experience for the client.
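Concretely, such an augmented link might look like this, with avgRespTime and is-degraded being the made-up, app-specific attributes described above:

    link = {
      "rel"         => "comments",
      "href"        => "/projects/1/messages/2/comments",
      "type"        => "application/json",
      "avgRespTime" => 120,   # ms; could drive a progress indicator
      "is-degraded" => true   # could gray out the corresponding button
    }

    puts "show a warning, feature degraded" if link["is-degraded"]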
Glenn Block
on 20 Dec 12
Mike’s ‘Miss Grant’s Controller’ example (https://github.com/mamund/miss-grants-controlller) offers another insight into the benefits, which goes beyond just the binary presence of a thing or not. It allows the system to guide the client toward a goal it is trying to achieve. Today the path to the goal is statically encoded in the client, which makes workflows themselves very hard to evolve. In a hypermedia system the path is split between client and server. The server offers up the options and the clients choose them. The client is still making decisions and doesn’t have to be dumb, but it relies on the server to tell it what is available rather than hardcoding. This allows the workflow to evolve yet existing clients don’t break.
H. Sam
on 20 Dec 12
DHH
Totally agree. imo Hypermedia APIs are just going to complicate things to solve minor problems that aren’t worth it. I mean, how many times will ppl tend to change their API in the first place? If ppl change their API more than two times, then the real problem in the first place lies in the bad initial specification and implementation of their API. The Hypermedia API is just a workaround to solve the problem of badly specified APIs, but it will not solve the real cause and forces you to write more complex client code.
Stefan Tilkov
on 20 Dec 12
First of all, I find it quite ironic that you accuse the REST folks of trying to follow the WS-* path. None of what is being suggested now has ever been part of that mess, or has any similarity to it. On the contrary: Links are part of what we know as the web, always have been, always will be, and predate SOAP/WSDL/WS-* by several years.
A few years ago, you could have said that nobody uses PUT and DELETE, so let’s ignore them; nobody cares about the distinction between POST and GET, so let’s continue to use them as if they were synonyms; you could have ignored caching and conditional requests because those just overcomplicate stuff. You didn’t, because you (or rather Rails) followed the architecture of the Web, and thereby helped make (part of) REST mainstream. Hypermedia is the one aspect that was always missing.
This REST stuff is most definitely not something that’s a reaction to anything in the WS-* community, and not even remotely comparable. In fact I’m quite sure that everyone who has invested years arguing against WS-*, as I have (and still do), will be really pissed that you used that argument. Which may have been your intention :-)
Hypermedia is what allows your browser to talk to any Web server, not just the one it knows, whether it’s through links or forms. It’s what allows you to change your mind on the server side, maybe not in terms of changing the sequence of path segments in your URI, but maybe when you split a system into two separate ones, host some resources in an external service, or link in something from elsewhere. Via a form-based approach, you can have a client send information in a subsequent call that it doesn’t even understand (just as a hidden field in a form does in HTML).
The use of hypermedia for machine-to-machine communication is still quite new, and isn’t widely adopted yet. Having decent, ideally standardized support for hypermedia APIs in Rails would have the potential to change that. I’m not even sure it’s time for that yet, but I think you vastly underestimate the potential here.
Glenn Block
on 20 Dec 12
H. Sam
Tell that to your browser. Navigating through links is what the web was built on, i.e. the hypertext in HTML and HTTP.
It has nothing to do with poor API design. It has to do with systems that grow and expand over time, and requirements changing as that happens.
EJ
on 20 Dec 12
Glenn, machines don’t navigate links, humans do, so the browser analogy doesn’t quite work.
Glenn block
on 20 Dec 12
Machines can and do navigate links. Here is a very recent example c/o @sklabnik. http://developer.github.com/v3/pulls/
Is github not serious enough?
Glenn Block
on 20 Dec 12
Oops, meant @steveklabnik.
Alex Bunardzic
on 20 Dec 12
Whoa, I had to pinch myself and double-check the URL of this post. Is this really possible? Or is it just a prank post in the spirit of ‘damn the torpedoes, tomorrow is doomsday anyway (December 21), we’re going down with the ship!’ type of reckless writeup?
The type of reasoning demonstrated in this post is what you’d expect from entrenched Ivory Tower old-school technologists who are brandishing a top-down control hard-on. Basically, the mainframe crowd, where for every possible interaction the purpose of the conversation needs to be established beforehand. You know, as in a military situation, where you’re not allowed to engage in any kind of exploratory conversation.
The real beauty and the true strength of the world wide web is that the purpose of the conversation does not have to be established beforehand. Anyone can join the conversation; the web is all-inclusive. Not only that, but anyone can join the conversation AT ANY TIME, and should be able to replay whatever transpired before they joined in, and in that fashion learn more about what’s going on.
The most important part of this all-inclusivity is that no one should be expected to read up on anything before jumping in and joining the fray. Things change, things grow, things decay; it’s a never-ending learning process. The mainframe Ivory Tower mentality cannot stomach that, but the rest of us are cherishing it.
If DHH’s writeup is not a prank post, then I’m sadly forced to say it takes the cake as the stupidest post in 2012!
M. Schweizer
on 20 Dec 12
What we agree on is that APIs ‘should’ not change. But if one does change, there should be a reasonable way to recover. An easy way is – as described by Paul Evans – that a 301 is returned with the new URI.
If you want to construct the URIs on a client with a template because of easier navigation (which is a reasonable requirement), the above approach could be extended to return a 301 with a new template. This would have some restrictions, and it does not mean that you can change your API by any means (then I’d call it not an API anyway).
If I had to store references in a fairly simple, flat resource world with short and simple access paths, I would go the above-mentioned first, most clean way and pass around URIs. If I had a more complex world, I’d design and describe the API well and would not design with discoverability in mind in the first place – because that’s simply not the way a consumer of my API would connect to my system/service.
Aliostad
on 20 Dec 12
Hypermedia as a semantic web of interrelated resources is “useful”, but taking that any further is the road to nowhere.
The one issue I am surprised to see ignored by the REST community is that for a client, understanding (and implementing) the URL templating is far easier than understanding (and reacting to) the semantic axis of the hypermedia. As such, we are not going to save the clients any sweat by providing hypermedia.
Another point to consider is that hypermedia, along with the resource URI, is part of the server’s public domain, and as such the client has the right to take a dependency upon it.
Mike Amundsen
on 20 Dec 12
Aliostad
...for a client, understanding (and implementing) the URL templating is far easier than understanding (and reacting to) the semantic axis of the hypermedia.
If those were your only two choices, yes, pick client-side URI templating (ala HTML.FORM or URI Templates) over “semantic axis of hypermedia” (cool phrase BTW, that yours?).
But, of course, those are not the only options.
The assumption that this one word (“hypermedia”) by requirement introduces complexity is a great way to defend ignoring the concept, but there is nothing I know that offers much proof of this assertion.
Yes, you can make clients that consume responses containing links and forms more complex than clients consuming responses without them, but this is not a MUST nor a REQUIRED approach. I’ve coded quite complex clients consuming RPC-style Web APIs, too. That doesn’t mean RPC-style APIs required me to do so, nor does it prove that RPC-style APIs are more complex, can add more “sweat” to clients, etc., etc.
Cheers.
Glenn Block
on 20 Dec 12
@Aliostad, in what way do you think it is easier? Using templates requires the client to have logic that looks for the template and then does the substitution. Are you saying it’s easier because the server is feeding the client what data it must provide the server?
Maarten O.
on 20 Dec 12
I also don’t get why this needs standardization. If you need to be that flexible, build a web-app. Adding new features doesn’t mean your clients magically know what to do with them or how to display new kinds of resources. Are we also going to send over templates? Sounds a lot like CSS to me then… The idea of sending relational resource links/urls with an object doesn’t sound so special to me… I think all the ‘standards’ for building a great API are already in place. Doesn’t seem worth all the hype indeed. Disclaimer: I’m just a newbie here, and might not even know what I’m talking about. But man… some people here are quite easily offended.
Aditya Rustgi
on 20 Dec 12
This discussion made me dig in and fetch one of my favorite quotes:
“In theory, theory and practice are the same. In practice, they are not.” ― Albert Einstein
Maarten O
on 20 Dec 12
I think you are thinking of different templates. Aliostad means URI templates (http://tools.ietf.org/html/rfc6570), which are simply a skeleton for a URI with placeholders, not CSS / style templates.
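For example, a sketch of RFC 6570 expansion using the third-party addressable gem, one of several Ruby implementations:

    require "addressable/template"  # gem install addressable

    template = Addressable::Template.new("/projects/{project_id}/messages/{id}")
    template.expand("project_id" => "1", "id" => "2").to_s
    # => "/projects/1/messages/2"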
As to adding new features, I agree clients don’t magically know what to do with them. Clients will have some hardcoding. Adding new links means newer clients know about it and older clients continue to work. The difference is they are coding against “rels” which identify the types of links they should care about rather than hardcoding logic for building URIs on the client.
As far as standardization, “rel” and “profile” already exist in HTML. Links already exist in ATOM. IANA (http://www.iana.org/assignments/media-types/index.html) is designed to allow new media types to be published and new standards to get created as we learn more.
The proposal here is to take an already existing concept which has worked well for simple browser rendering and take it to other formats which are used for machines / code interacting with those apis. If there’s no standardization, then everyone is going to do their own thing. Not all standardization is bad: HTML is standardized, ATOM is standardized.
Standardization can be a bad thing when it’s abused, cough WS-*.
lutter
on 21 Dec 12
The post kinda misses the boat on discoverability, partly because DHH assumes that there’s only ever one incarnation of any API.
Jon Moore
on 21 Dec 12
Cool URIs don’t change, but sometimes you need them to do so. Ever moved images from one CDN to another? If you’ve got the “here’s how to find the URL for the {x} by {y} pixel version of this image” template baked into your mobile app, you’re up a creek. Redirects are really painful to chase when you’re across a WAN from the server. Is it that hard to understand that this problem might generalize beyond images to other parts of your API?
Dave Giunta
on 21 Dec 12
I think what’s “hand-wavy” about this discussion is the fact that there’s not a lot of good, simple examples of well-designed hypermedia APIs that aren’t buried in whitepapers and academic theory. I hear lots of conjecture about how this will/should work, but not a lot of real-world examples. As such, I am excited to dig through the github pull requests api that was linked earlier in the thread. That’s literally the first example of a hypermedia API I’ve seen referenced that was real. I haven’t dug into that yet, so you’ll forgive my ignorance for the rest of this comment.
That said, my general impression of the hypermedia idea is that it seems to assume that machines can be as intelligent as humans. Previous commenters making references to how links and forms are the basis of the web are correct, of course… but they are the basis for the web that’s being browsed by human beings who can look at something being spat back at them, comprehend it (often compensating for badly designed UIs), and eventually find what they need in order to get to the content they are after.
I can’t imagine making a program that “knows” how to be quite that intelligent. I mean, this is a bad example, I’m sure, but we’ve all had to scrape some html pages for content in the past. And that’s a lot like what the hypermedia api concept sounds like to me. But that’s also kind of a nightmare to deal with. Those scripts become coupled to the sequence that the data is doled out by the server. As such, it’s difficult to make those scripts tolerant of changes in that sequence.
Can someone explain to me how hypermedia apis are better than this, and how you could possibly write a client that could be tolerant of changes in the api, or intelligent enough to “know” how to navigate the links and forms that are provided by a server?
Mike Amundsen
on 21 Dec 12
Dave Giunta
I think what’s “hand-wavy” about this discussion is the fact that there’s not a lot of good, simple examples of well-designed hypermedia APIs that aren’t buried in whitepapers and academic theory.
Are you saying that if X number of examples were identifiable “in the wild” then all hand-waving would go away? I doubt it, but tell me the exact criteria under which “enough” (quantify please) examples (specify an acceptable example) appear (indicate accepted locations) such as to end this part of the protest.
I suspect that most will argue this point in unspecified terms thereby making it impossible to get past this straw-man. But I leave it to you to put me straight on that one.
my general impression of the hypermedia idea is that it seems to assume that machines can be as intelligent as humans.
While I do not doubt this is your impression, it is not a valid one to carry. In fact, hypermedia is primarily used for human interfaces, not machines. But even so (as already indicated in comments above by Glenn Block), there are some machine examples worth exploring (anyone code a spider bot? Google maybe? huh).
But, even more to the point, this “assumption” is really just a handy way to tar the concept of hypermedia w/ a specious claim that no-one actively using hypermedia makes. If you want to speak directly to those who make the claim that “machines can be as intelligent as humans” do it, but that’s not going on here. And, BTW, when you find them punch them for me, eh?
we’ve all had to scrape some html pages for content in the past. And that’s a lot like what the hypermedia api concept sounds like to me
Seriously? You think hypermedia === screen scraping? huh.
Can someone explain to me how hypermedia apis are better than this, and how you could possibly write a client that could be tolerant of changes in the api, or intelligent enough to “know” how to navigate the links and forms that are provided by a server?
See, now those are actual questions. I like that.
how hypermedia apis are better than this
Sorry, I have no idea what “this” or “better” is here. You still on screen scraping? or something else?
how you could possibly write a client that could be tolerant of changes in the api
Tolerant is easy. Don’t change the API, just add new features. No need for hypermedia at all.
How about changing URIs w/o breaking a client? Easy, reduce your “promised” URIs to a minimum (1 if possible, but a few is fine). Guarantee to always respond to them, even if it means redirecting. Guarantee to use a pre-defined set of link identifiers (HTML.A@rel, for example) for all links. Tell clients to “code for the identifiers, not the URI” and now only bad coders have clients fail when the URIs change.
There are more things like 1) using standardized templates for payloads and URIs (HTML.FORM, URI Templates, etc.) – those kinds of silly things have worked for twenty years so far. And 2) defining problem domains in terms of vocabulary and state transitions and promising to include them in responses such that clients can recognize and activate them. And so forth (this is only a comment, right?).
intelligent enough to “know” how to navigate the links and forms that are provided by a server?
Heh, you got me with that one – a second time, too! Again, hypermedia doesn’t promise that and doesn’t need that. You may have some pals who talk that way but, as I said above, go talk to them – no one here that actually uses hypermedia in a meaningful way says that kind of stupid stuff.
And, seriously, when you find those people talking stupid punch them for me!
Cheers.
Glenn Block
on 21 Dec 12
Dave, I don’t believe in fairy pixie dust clients that somehow magically click their heels saying “There’s no place like hypermedia” and everything just works.
There are people that do see that as one of the hypermedia benefits, but those positions are not what I (or I believe others) are trying to promote.
Clients have knowledge of what to expect. In the real world the average hypermedia client knows about certain types of links that may come back. That is all the rel is: an identifier that the client can use to match up against links it cares about. If new links come back that it doesn’t know about, well, it will ignore those links. Although the client knows about a set of rels, it does not know whether they will or will not be present. The advantage is that this logic sits with the server, where it can, and often does, at some point change. The change might be due to several reasons, including scale, as the server can tell the client to go get that resource over here rather than where it got it last time. The client also doesn’t know or care about the url; all it knows is: if the rel is this, and that is a resource I want to access, follow that url.
As to the new links that were returned which that client ignored: newer clients can come along that are coded to understand those rels. They know what to do with them, so they follow them.
In both cases I mentioned there’s no magic and there’s still hard coding of some logic. It’s just that the type of logic is different than it is today. Instead of hardcoding uris, and the logic of whether or not those resources can be called, the logic is looking for the presence of rels and deciding which one to access. And often the decision of what to do is still made by a human being, or it may be a combination where the machine first looks at the available set and, if its logic allows it to proceed, it does; otherwise it gets interaction from a human.
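In code, that rel-matching logic is as mundane as this sketch (link shape and rel names are made up):

    # Clients match the rels they know; unknown rels fall through harmlessly.
    links = [
      { rel: "self",     href: "/orders/123" },
      { rel: "payment",  href: "/orders/123/payment" },
      { rel: "tracking", href: "/orders/123/tracking" }  # newer rel, ignored below
    ]

    payment = links.find { |l| l[:rel] == "payment" }
    puts "would follow #{payment[:href]}" if payment  # follow the href, whatever it is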
EJ
on 21 Dec 12
Mike and Glenn, why is all the sarcasm and condescension necessary, eh? Pointing to more real-life examples like the github one would be much more helpful than belittling folks trying to have an honest debate.
Glenn Block
on 21 Dec 12
EJ
My pixie dust comment was not meant to be belittling to Dave. It was being sarcastic about the view that some have that hypermedia clients can somehow just magically find their way and learn new things. Forgive me, Dave, if it came off as against you; it really was not.
Glenn Block
on 21 Dec 12
ATOM is probably one of the most salient examples, though often people discard it because they think a feed reader is trivial compared to a complex business system. I can say that I know of at least one case where a very large travel agency in Europe utilized ATOM and hypermedia, and it was not for a simple feed reader.
In this case there were large amounts of data transactions generated each day, with archive servers housing everything but the most current data. There are distributed clients all over Europe which all need to synchronize with the data. Whenever the clients connect they go to the front of the feed and work their way back until they find the last entry that they had locally. As they are navigating through the feed they follow “previous” links to get to the next page. Each time they follow “previous” they can be directed either to the same server or to one of the servers in the archive. Granted, this example doesn’t apply to all the systems being built, but it is an early real-world example.
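A sketch of that sync loop, with the feed fetching stubbed out and the hash shape assumed:

    # FEEDS stands in for an HTTP GET plus Atom parsing; each hop could
    # resolve to a different archive server.
    FEEDS = {
      "/feed"        => { entries: [{ id: 3 }, { id: 2 }], prev_url: "/feed?page=2" },
      "/feed?page=2" => { entries: [{ id: 1 }], prev_url: nil }
    }

    def fetch_feed(url)
      FEEDS.fetch(url)
    end

    # Walk front-to-back via "previous" links until we hit the last entry
    # we already have locally.
    def sync(front_url, last_seen_id)
      new_entries = []
      url = front_url
      while url
        feed = fetch_feed(url)
        feed[:entries].each do |entry|
          return new_entries if entry[:id] == last_seen_id
          new_entries << entry
        end
        url = feed[:prev_url]  # nil at the oldest page
      end
      new_entries
    end

    sync("/feed", 1)  # => [{ id: 3 }, { id: 2 }]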
Aside from that, I know of several very big real-world companies that are using hypermedia, like VERY big, but I am not allowed to speak about them because I worked with those companies confidentially.
I don’t think we have a ton of really good public examples yet; that doesn’t mean it’s not being used, as many that are using it use it internally or don’t publish their specifications with IANA.
Glenn Block
on 21 Dec 12
EJ, on the condescension, I did say this:
“There are people that do see that as one of the hypermedia benefits, but those positions are not what I (or I believe others) are trying to promote.”
To make it clear that my comment was in regard to that perspective…
Mike Amundsen
on 21 Dec 12
EJ
Pointing to more real life examples like the github one would be much more helpful than belittling folks trying to have an honest debate.
Good point. I’ll drop that character now. I’m going to be straight w/ you and ask you some simple questions:
1. What specifically do you find “lacking” in the idea of using hypermedia in responses?
2. Please provide real-world use-cases where non-hypermedia styles system design was superior to hypermedia-style design.
3. Can you provide an example where your use of hypermedia-style design was especially unsuccessful? For example, you had difficulty writing the client or implementing the server, or found it to be particularly buggy or unreliable at run time?
4. Can you point to clear examples where using hypermedia failed to provide the ability to modify or evolve the interface as you were promised by those who designed the system?
What I’ve tried to cover here are the common assertions of those who claim hypermedia-style designs fail to match up to RPC-style designs. I’ve also added a general “I find this concept wanting” assertion. Maybe you have other tangible examples and assertions I’ve missed here. Feel free to add them to the list.
I’ve avoided making claims for or against RPC or Hypermedia designs. Feel free to do that, if you like. I’ve tried to couch the questions such that you are not faced w/ straw-man examples, or forced to try to prove there is not a teapot circling the globe on the opposite side of the world, etc.
Finally, if you’d rather not flood David’s comment section, feel free to write me directly (mamund AT yahoo DOT com) or post a blog entry of your own with answers to these questions.
Cheers.
Dave Giunta
on 22 Dec 12
First off, thanks guys for adding a bit more clarity to the discussion.
I guess when I was asking for more examples, I was asking for something more like a scenario that shows a client with a specific objective, that accesses an API designed using Hypermedia to achieve that objective. I know that sounds stupid basic, but in the few examples I’ve seen (excluding the github one for the moment) they’ve stuck to theoretical concepts, rather than showing how a client would actually navigate links and forms embedded in the API responses. If there’s a good demo of this on the web somewhere, I’ve obviously missed it and would appreciate the linkup.
I’m genuinely not trying to lambast the idea of Hypermedia concepts… I’m just kinda ignorant of them, and am interested in actually understanding how you guys who are obvious proponents of this idea believe they work well, and in comparison to Rails-flavor REST (which I hesitate to even say because I know how much baggage there is attached to that phrase).
hypermedia is primarily used for human interfaces, not machines.
This is curious to me because I thought we were having a discussion about machine-consumable APIs. But that’s good to know, because that’s a large part of the thing I find confusing about Hypermedia.
This: there are some machine examples worth exploring (anyone code a spider bot? Google mabye? huh)
Followed closely by: Seriously? you think hypermdia === screen scraping? huh.
is a little confusing to me. I mean, on the one hand, it sounds like you’re suggesting that Google (which, aside from using some additional metadata and whatnot, to my knowledge largely looks at the contents of web pages to discern the bits it cares about) would qualify, as I understand it, as a machine utilizing a hypermedia api (i.e. the web)... But isn’t that kinda by definition screen scraping?
Moreover, I would hardly call the amount of effort that Google must have (and still, presumably) gone through to write their spider bot client trivial. Certainly much less trivial than one would hope writing a prototypical hypermedia-compliant client would be.
As for your list of questions, honestly, I was trying to ask those exact same questions of the people in this thread who are pro-hypermedia APIs. Specifically, trying to get them to explain to me how they believe hypermedia APIs are a better design than non-hypermedia APIs. Sadly, I couldn’t come up with a good list like you did, which would’ve probably been much more constructive than my earlier ramblings.
I would love to answer your questions, too, but honestly, I don’t feel as though I understand hypermedia api design well enough to try to compare these two. What I know is what I’ve heard and seen in presentations and whatnot, which have largely stuck to academic descriptions of hypermedia api design concepts, but have (at least what I’ve seen) left out practical, demonstrable, real-world examples. Which is frustrating for someone who doesn’t have a CS degree trying to understand what this is all about.
Thanks again.
Caleb Wright
on 22 Dec 12
How would you handle URLs with the API version included? Do the URLs skip /v1, or would they have the API version requested? (i.e. a request for /v1 always gives /v1 while /v2 always gives /v2)
pb
on 22 Dec 12
David’s right, the whole HATEOAS thing is not really worth it. The two Mikes don’t make any compelling arguments whatsoever; just name calling. The only time providing URLs in the response is helpful is if the client would actually want to use them immediately. Otherwise documented APIs are far preferable. This is pretty obvious.
Stefan Tilkov
on 22 Dec 12
@pb, seriously? Documenting the API creates metadata about your URIs, something you’ll have to maintain in sync with your API’s implementation. All of your clients will become dependent on the URI structure you chose at one point in time. Including links means clients will adapt dynamically to changes to that structure. Including forms (or similar) means they can adapt dynamically to changing data requirements on the server side. What you’d document would be the places where URIs appear in responses, as opposed to the characters that make up the URIs. How is it ‘obvious’ that not doing this is ‘preferable’?
Aliostad
on 23 Dec 12
@mamundsen @gblock yes, I meant UR“I” templating. Apologies for not sticking to the canonical name ;) And yes, I have not taken “semantic axis” from someone else, although considering the RDF mentality where I come from, it is a natural word and might have been coined before.
I am actually not saying hypermedia will make implementation more complex, although it will surely add complexity on the server side. I am saying it does not solve any real problem for the client and its effectiveness is limited.
This is like when I buy something (let’s say a fridge, an appliance): I get all the telephone numbers I need in case there is a problem: one for returns, one for servicing, etc. This is really useful since even if I buy a similar item, these numbers could have changed (although never break a permalink) and I can call those numbers instead of assuming the numbers are the same. However, if there is a fault, having the numbers does not save me from actually talking to them, arranging for the engineer to come, staying home on the day and then paying them. In fact, I could just pick up a yellow pages and find the numbers, so not a great deal of help.
With regards to HATEOAS, I suppose I have made it clear before that it breaks the CSDS constraint and hence I do not agree with it.
Glenn Block
on 23 Dec 12
Aliostad
We don’t agree, but let’s agree to disagree.
pb
on 24 Dec 12
Why are the hypermedia apologists such a-holes? APIs that conform to their world view are pretty much non-existent, and yet they speak as though everyone else is the idiot. DHH makes a fairly reasonable argument and gets ridiculed.
Mike, we cannot answer your questions because hypermedia APIs are essentially non-existent.
The hypermedia apologists note the benefits of hypermedia in a web browser but fail to articulate how such a thing would be beneficial with software clients (which sure as heck do not “bookmark” things).
Stefan Tilkov
on 24 Dec 12
@pb, if your response to my last comment is calling me an “asshole”, I admit I’m not up to arguing with you.
Karolis
on 25 Dec 12
Seriously though. Can someone give a real example of where a hypermedia API is a good idea, like so many people have been asking?
pb
on 25 Dec 12
@stefan, more the two Mikes and Alex. But your first comment misrepresented the original reference to WS-deathstar over-standardization.
John Teague
on 26 Dec 12
The best (though certainly not the only) scenario where hypermedia is valuable is where you have multiple clients consuming the api and you don’t have the resources to update all clients when new features are available. Mobile development is a good example, where you have multiple clients (iPhone, Android, Windows, etc.). Using the hypermedia approach, you can add features (connected by links and represented as a resource) and your clients should at a minimum not break when the new resource is present. You can then update all of your clients as time and resources allow.
We are using hypermedia in an enterprise environment for the same reasons (but not mobile). We have a distributed environment where each application must interact with the others. We will have dozens of different web clients and back-end processes consuming these apis. When we add features, we will update the primary client and update other clients as necessary and as time permits. Not all clients will need to handle the new features.
So we’ve decoupled the clients from the server and each can change as needed.