- From: Ryan J. McDonough <ryan@damnhandy.com>
- Date: Sat, 15 Feb 2014 11:49:29 -0500
- To: Markus Lanthaler <markus.lanthaler@gmx.net>
- Cc: Mark Baker <distobj@acm.org>, public-hydra@w3.org
- Message-Id: <B7729AC8-58EA-4E00-B925-3B29B9702377@damnhandy.com>
Mark, thanks for taking the time to elaborate on this. I didn’t quite see it until you gave that edge case. I have actually witnessed something similar :)

On Feb 13, 2014, at 3:14 PM, Markus Lanthaler <markus.lanthaler@gmx.net> wrote:

> On Wednesday, February 12, 2014 4:00 PM, Mark Baker wrote:
>> On Thu, Feb 6, 2014 at 12:29 PM, Markus Lanthaler wrote:
>>>> On Wed, Feb 5, 2014 at 2:06 PM, Markus Lanthaler wrote:
>>>>> Great. So, before I close this issue I would like to get Mark's
>>>>> opinion (that's why I CCed you). Mark, do you agree with Ryan on
>>>>> this or do you think Operations do violate REST's uniform
>>>>> interface constraint?
>>>>
>>>> I believe they violate the constraint, yes.
>>>
>>> Let me ask a provocative question: Why do you think Operations
>>> violate the constraint whereas link relations, which further
>>> describe a potential GET request, don't?

After seeing Mark's response, I see where he's going. In the search link example we have:

Link: <http://example.com/search>; rel="search"; title="Simple Search"

In Hydra, the equivalent could be:

<http://example.com/foo> hydra:search <http://example.com/search> .

What's not happening here is asserting anything about how the HTTP interaction works. With Link headers and hydra:link, the assumption is that we'll dereference the search resource using an HTTP GET request, and we don't suggest anything else about how the HTTP interaction model works. We're not even implying that hydra:search extends HTTP GET in any way.

>> That's a really interesting question, not provocative in the
>> slightest. But it's actually a very subtle difference between the two
>> approaches, as in theory, each could be used/misused in the same way.
>> The difference is in our expectations as developers and, concretely,
>> how we write our clients to react to the different kinds of
>> information available at design time and at run time. That would take
>> a lot of time to describe in a compelling way, I expect.
>> Luckily though, I can at least illustrate *part* of the difference
>> with a simple edge case that demonstrates a distinct loss of
>> visibility in the message exchange (commonly, a sign of a violation of
>> the uniform interface constraint).
>>
>> Say we have a service which we describe with a custom hydra:Operation
>> called "Clear", which uses the HTTP DELETE method, but also expresses
>> that the target resource will create a new "empty" resource (of *its*
>> choosing, so PUT isn't appropriate). This might be used on a wiki,
>> where you wouldn't expect that a resource return 404 or 410 after a
>> DELETE, but instead just show a blank page. A semantic wiki could do
>> something similar, returning an empty graph in its RDF representations
>> instead of 404/410.
>
> OK.. so the server just deletes the content of a representation or replaces
> it with something else; something the server chooses, not the client. Like an
> empty template, e.g.
>
>> So a client consuming a Hydra description and recognizing the "Clear"
>> operation would know things that, for example, an HTTP proxy wouldn't
>> know by only understanding the DELETE operation.
>
> Yeah, even though here the knowledge would be quite limited (it knows that
> he *likely* won't get a 404 if he GETs the resource immediately after)

What Mark is getting at here is that HTTP only sees the HTTP methods and the message body; it can't see the utility offered by Hydra that may be visible to Hydra-enabled clients. A caching proxy is going to do what HTTP says, and that may not be what the application thinks it means.

I can give a real-life example: a few years back, we had a team building a social API for mobile devices. As an efficiency, they wanted to use a patch format to express deltas from the device to the API. That's all well and good, but for some reason they were insistent on using PUT instead of PATCH or POST.
Somehow, I couldn't convince this team that a patch format being issued over PUT was just fundamentally flawed from the get-go. They had to learn on their own. They also didn't realize that they were behind a caching proxy (a lousy one at that; it might have been homegrown too) and other intermediaries from various mobile carriers. The hard lesson learned was that the caching proxy interpreted the PUT body as the new state of the resource rather than applying deltas to the resource. Now clearly there were other issues here as well, such as tool selection, configuration, and messed-up Cache-Control headers, but the point here is that there are certain semantics to HTTP methods that are fixed and can't be extended. More importantly, intermediaries are going to be very literal in how they interpret HTTP.

>> It might choose, for
>> example, to dispense with any error checking for 4xx since it knows
>> that after a successful DELETE/Clear, that it will get back a blank
>> page.
>
> Here you lose me. Why should it stop 4xx-error checking after a successful
> DELETE/Clear? To make this crystal clear, are you saying
>
> - the client invokes the DELETE/Clear

The subtlety is right there: DELETE != Clear and Clear != DELETE. Clear is not a concept in HTTP, and DELETE is not a concept in the context of this application's function.

> - checks the status code of the response
>
> and *then* stops to check for 4xx errors? Why should it do so? What would it
> do then if it tries to dereference such a resource and gets a 404? Would it
> break in a way it wouldn't otherwise?

I might do this if I wanted to guarantee the state of the resource after executing the DELETE. I work in finance, so I'm a little more paranoid about those things. But yes, if I issue a DELETE and I get back a 200 (OK), I know that the operation succeeded, but I may want to double-check and see if I get a 404 or 410, because that's what I'd expect from an HTTP DELETE.
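To make the caching-proxy lesson above concrete, here's a rough sketch of what went wrong. None of this is the actual system from that project; the names and shapes are invented for illustration. The proxy follows HTTP's definition of PUT (the body *is* the new state of the resource) with no knowledge of the patch format the application layered on top:

```python
def apply_patch(state: dict, delta: dict) -> dict:
    """What the origin server did: merge a delta into the existing state."""
    return {**state, **delta}

class NaiveCachingProxy:
    """Caches by HTTP semantics alone: a PUT body replaces the resource."""
    def __init__(self):
        self.cache = {}

    def put(self, uri: str, body: dict):
        self.cache[uri] = body  # full replacement, per HTTP's definition of PUT

    def get(self, uri: str) -> dict:
        return self.cache[uri]

# The origin holds the full resource...
origin_state = {"name": "Alice", "status": "online", "friends": 42}

# ...but the client sends only a delta, over PUT.
delta = {"status": "offline"}

proxy = NaiveCachingProxy()
proxy.put("/users/alice", delta)

# The origin merged the delta; the proxy cached the delta *as* the
# whole resource. GETs served from the cache now return a truncated
# representation.
print(apply_patch(origin_state, delta))  # the full, correct state
print(proxy.get("/users/alice"))         # just {'status': 'offline'}
```

The proxy isn't buggy here; it's being exactly as literal as HTTP allows it to be, which is the point.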
But yeah, typically, I might trust that a DELETE with a 200 (OK) response would have removed the resource. But if the removal of the resource is somewhat complicated and takes time, I might get back a 202 (Accepted) indicating that the server got the request to delete the resource and will do it later. In which case, I may poll the resource until I get a 404 or 410. If I'm still getting a 200 (OK), I might be a bit more concerned and have to inspect the response body to see what's up.

>> That is the loss of visibility I mentioned, and results from
>> extending the contract between client and server, rather than reusing
>> the existing one.
>
> I understand in principle what you mean (the proxy doesn't know as much as
> the client does) but can't see how that's a problem as long as you don't
> violate the semantics of HTTP operations.

The subtlety here is that it's not about describing the HTTP "operation" but the message you are sending from the client to the origin server. This is what Roy was talking about in "REST APIs must be hypertext-driven" [1]. Hydra Operations are on the verge of conflicting with the 2nd bullet point.

> On the other hand, since all
> messages are self-descriptive and the ApiDocumentation is machine-readable,
> even intermediaries are in the position to parse that "knowledge"--something
> which is impossible with HTML forms e.g.

It is a possibility, but I wouldn't approach Hydra under the assumption that this would happen.

>> FWIW, using a predicate based approach, this could be done with a link
>> annotated with atom:edit or similar. There'd be no loss of visibility
>> because the client would have no additional expectations based on
>> previous interactions.
>
> Why wouldn't it? I really see no difference between an xyz:clear link
> relation and a Clear operation. If you talk about link relations, the client
> *always* has "expectations based on previous interactions", namely the one
> in which he got that link.
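Going back to the 202 (Accepted) case for a moment: the poll-until-gone behavior I described could be sketched as follows. The `fake_server` function is just a stand-in I made up so the sketch is self-contained; it pretends the deferred deletion completes after two polls:

```python
def fake_server(poll_count: int) -> int:
    """Stand-in for GETs against the resource: the status code a GET
    would see after `poll_count` polls. Deletion lands on the third."""
    return 200 if poll_count < 2 else 410

def confirm_deletion(max_polls: int = 5) -> bool:
    """After DELETE returned 202 (Accepted), poll until 404/410."""
    for i in range(max_polls):
        status = fake_server(i)
        if status in (404, 410):
            return True   # deletion confirmed by plain HTTP semantics
        # still 200 (OK): the DELETE was accepted but not yet applied
    return False          # polls exhausted; time to inspect the body

print(confirm_deletion())  # True
```

Note that nothing here needs any Hydra-specific knowledge; 404/410 after a DELETE is exactly what plain HTTP leads a client to expect.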
The HTTP methods are fixed and already have very generic, but well-defined, semantics that cannot be changed until the HTTP spec changes them. They cannot be subclassed or extended. Once a message leaves the client, it is a JSON-LD request sent over POST, and that's all anyone sees while it's in flight. At present, Hydra suggests that we can subclass the HTTP methods with additional semantics. Take the following example:

{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://example.com/search",
  "hydra:operation": {
    "@type": "Search",
    "method": "POST",
    "expects": "#SearchRequest"
  }
}

To the uninitiated, this might suggest that we are subclassing POST and adding additional semantics to it, especially if we include return types and status codes. Functionally (not semantically), we get the same thing from this:

{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://example.com/search",
  "hydra:operation": {
    "method": "POST",
    "expects": "#SearchRequest"
  }
}

And we could also do something like this:

{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://example.com/search",
  "searchOperation": {
    "method": "POST",
    "expects": "#SearchRequest"
  }
}

The point Mark is trying to make is that the effort needs to be spent on the messages themselves (i.e. #SearchRequest) rather than the HTTP method. In reality, the network only sees POST; the operation's type is only visible to a Hydra-enabled client.

I guess another way to look at this could be through the lens of cURL. I can take any HTML form and submit to it from cURL, or any other HTTP client. Now granted, I have to piece an application/x-www-form-urlencoded message together by looking at the HTML form and forging the same request for cURL.
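The "network only sees POST" point can be demonstrated with a few lines of code. This is an illustrative sketch, not Hydra client code; `build_request` and the operation dicts are assumptions of mine. Two operation descriptions, one carrying a semantic `@type` and one without, serialize to byte-for-byte identical HTTP requests:

```python
import json

def build_request(operation: dict, body: dict) -> tuple:
    """Reduce an operation invocation to what actually goes on the wire:
    (method, headers, serialized body)."""
    return (
        operation["method"],                      # e.g. "POST"
        {"Content-Type": "application/ld+json"},  # headers
        json.dumps(body, sort_keys=True),         # message body
    )

body = {"@type": "#SearchRequest", "q": "cat breading"}

typed_op = {"@type": "Search", "method": "POST", "expects": "#SearchRequest"}
plain_op = {"method": "POST", "expects": "#SearchRequest"}

# The "@type" annotation never reaches the wire; only a client that
# read the Hydra description can tell these two apart.
print(build_request(typed_op, body) == build_request(plain_op, body))  # True
```

Which is exactly why the effort belongs in the message (#SearchRequest) rather than in decorating the method.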
But let's take my search example from my initial response to Mark:

{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://example.com/search",
  "operations": [
    {
      "@type": "NonIdempotentSearchOperation",
      "method": "POST",
      "expects": {
        "@type": "hydra:Class",
        "supportedProperty": [
          {
            "property": {
              "@id": "q",
              "@type": "rdf:Property",
              "range": "http://www.w3.org/2001/XMLSchema#string"
            },
            "required": true,
            "readonly": false,
            "writeonly": false
          },
          {
            "property": {
              "@id": "count",
              "@type": "rdf:Property",
              "range": "http://www.w3.org/2001/XMLSchema#integer"
            },
            "required": false,
            "readonly": false,
            "writeonly": false
          },
          {
            "property": {
              "@id": "start",
              "@type": "rdf:Property",
              "range": "http://www.w3.org/2001/XMLSchema#integer"
            },
            "required": false,
            "readonly": false,
            "writeonly": false
          }
        ]
      }
    }
  ]
}

Activation of this operation could generate the following request message:

{
  "@context": "http://example.com/context/search",
  "@type": "#SearchRequest",
  "q": "cat breading",
  "count": "30",
  "start": "0"
}

I could use cURL to test this out using the following:

curl -X POST -H "Content-Type: application/ld+json" -d '{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@type": "#SearchRequest",
  "q": "cat breading",
  "count": "30",
  "start": "0"
}' http://example.com/search

No doubt, cURL is not a Hydra-enabled client, but I can certainly test out aspects of a Hydra-enabled API with cURL. But note that with cURL, the semantics of "NonIdempotentSearchOperation" are completely invisible. This is true of just about every single HTTP client and intermediary on the planet. But I can still issue this POST request and my API will likely respond to it.

The reason why I have been citing HTML forms as an example is that their only role is to collect and format data into an application/x-www-form-urlencoded message body and send it over POST or GET. HTML forms don't make any assumptions about the server they're calling, the responses they may get, or the role of the form.
They are simply a means to format and produce a message and send it over an HTTP method. In my view, Hydra needs to offer the same utility. But we can do a lot more in terms of how we instruct clients to format and produce messages to the server.

> Taking AtomPub as example, a client has an expectation that a POST to a
> collection URL results in a (media) resource being created. He has that
> expectation due to a previous interaction (retrieval of the service
> document). An intermediary lacks that knowledge. Is that also a violation of
> the constraint then?

What AtomPub does is in line with HTTP and REST in that it says: "format a message this way and send it to me over this HTTP method with this Content-Type header." AtomPub does not attempt to decorate the HTTP methods with additional semantics. HTML forms, OpenSearch, and XForms also exhibit these same traits. But Atom and AtomPub don't have the equivalent of Forms or Operations either, so it's not quite an apples-to-apples comparison.

> So, whether I have a
>
> <collection href="http://example.org/col" >
> ...
> </collection>
>
> in an application/atomsvc+xml document or a
>
> "@id": "http://example.org/col",
> "operation": {
>   "@type": "CreateResourceOperation",
>   "method": "POST"
> }
>
> in an application/ld+json document doesn't make any difference IMO.

It gives the appearance that it's subclassing HTTP POST, which is not actually possible. If you're correctly working within the constraints of REST, you don't have a "CreateResourceOperation" that requires some type-specific format and is sent over POST. Ideally you'd have a "CreateResourceMessage" that is sent over POST. Describe the thing you want to do in the message body and how to send it, not a layer over the HTTP method.

> I'm sorry, I'm really trying to understand your concerns but apparently I
> still don't.
I think I might understand this now, and hopefully I've parsed Mark's response correctly and didn't throw more fuel on the fire of confusion :)

> --
> Markus Lanthaler
> @markuslanthaler

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

+-----------------------------------------------+
Ryan J. McDonough
http://damnhandy.com
http://twitter.com/damnhandy
Received on Saturday, 15 February 2014 16:50:01 UTC