Re: Reminder: Call for Proposals - HTTP/2.0 and HTTP Authentication

On Thu, Apr 26, 2012 at 11:54 PM, Mark Nottingham <mnot@mnot.net> wrote:
> Hi James,
>
> Thanks. Some quick responses below.
>
>
> On 27/04/2012, at 4:43 PM, James M Snell wrote:
>
>> Great to see this work getting underway. I don't have any particularly
>> firm proposals at the fundamental HTTP/2.0 messaging and semantics
>> level, but I do have a few items on my wishlist, from an HTTP-based API
>> developer's point of view, that I would like to see addressed in 2.0.
>>
>> Requirements to Consider for HTTP/2.0 from an API Developer's Point of View
>>
>> 1. It needs to remain as simple as possible. Right now, when showing
>> someone how to utilize an API, I can simply type:
>>
>>  POST /a/uri HTTP/1.1
>>  Host: example.org
>>  Content-Type: text/plain
>>
>>  Hello World
>>
>>  And give them all the information they need. Whatever the actual
>> transport ends up being, at some level we have to make sure we don't
>> lose this kind of "View Source" visibility.
>
> I think there's a growing feeling that this is Nice To Have, but not required. See recent discussion.
>

So long as the basic operations can still be represented semantically
this way when describing the overall structure of a request, it's fine
if the actual bytes on the wire look different.

>
>> 2. Allow the Request-URI to be a Request-IRI so no conversion is
>> necessary. E.g. it should be possible to do this at the request level
>> and have it just work...
>>
>>  POST /a/üri HTTP/2.0
>>  Host: éxample.org
>
> That would be interesting, but the effects of such a change would have to be carefully considered.
>

Agreed. It could certainly be quite a disruptive change, and it carries
a number of potential additional security risks.

>
>>  For that matter, can we allow extended characters in all the headers
>> and use UTF-8 as the default encoding?
>
> It's not clear that we're going to be able to do that, because it requires knowledge of the headers to translate between the different encodings. The benefit would be relatively small for a LOT of work.
>

It may be because it's after midnight and I really should be getting
some sleep, but I don't quite follow what you're saying about requiring
knowledge of the headers to translate between the different encodings.

>
>> 3. Use ISO-8601/RFC3339 Timestamps for more precise date/time handling
>>
>>  Date: 2012-12-12T12:12:12Z
>>  Last-Modified: 2012-12-12T12:12:12.012Z
>>  Expires: 2012-12-12T12:12:12.123Z
>>  // or even
>>
>>  Expires: P2DT3H    (using an ISO-8601 Duration)
>
> The encoding of such values has already been discussed quite a bit, but the focus so far has been on efficiency.
>

So long as it makes it in there "eventually", I'm happy :-)

>
>> 4. It would be helpful to have a "standard" means of signing requests
>> and responses. SSL/TLS is good, but it doesn't always meet the
>> requirement (see OpenSocial Signed Fetch as an example).
>
> That's an interesting topic. So far, our discussions on security have been about TLS vs no TLS, but I'd welcome discussion that had a finer gradation.
>
> Personally, I find signing responses quite interesting, as it would allow clients to verify that responses haven't been tampered with (ads inserted, etc.), while giving intermediaries (e.g., virus-sniffing firewalls, caches) some visibility.
>

Precisely. Fortunately, this is one of those things that can be worked
up independently of the more immediate efficiency issues (although
generating a signature over the request certainly can impact the
efficiency of the request). Just thinking off the top of my head,
assuming we ultimately end up with something based on SPDY, we could
possibly achieve this by introducing a new SIGNATURE frame following
the final DATA frame (or the last SYN_REPLY/SYN_STREAM if there is no
payload). Not sure whether that's workable, but it's something that
should at least be explored.
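
To make the idea a bit more concrete, here is a purely hypothetical
back-of-the-napkin sketch of what such a frame might carry if it
followed the general shape of a SPDY control frame. The frame type,
field names, and contents below are made up for illustration only, not
a proposal:

  [ control bit | version (15 bits) | type (16 bits) = SIGNATURE ]
  [ flags (8 bits) | length (24 bits) ]
  [ stream ID (31 bits) ]
  [ algorithm identifier, key hint, and signature bytes covering the
    preceding frames on the stream ]

Presumably the receiver would verify the signature over the frames it
has already read on that stream, so the sender could stream the body
and append the signature at the end rather than buffering the whole
payload up front.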

>
>> 5. Batched requests keep popping up, and implementors keep coming up
>> with proprietary ways of handling them (e.g. Facebook, Google,
>> OpenSocial... and others). The primary reason given is efficiency:
>> doing more stuff in a single request. It would be helpful for HTTP/2.0
>> to definitively address this so we don't keep ending up with a bunch
>> of relatively half-baked, vendor-specific batching models that attempt
>> to bundle HTTP message semantics inside message payloads.
>
> I think that's addressed by multiplexing, which is part of most proposals we've discussed so far.
>

I certainly hope so... I also hope the mechanism is straightforward
enough to use that it convinces folks that batching of requests truly
isn't necessary. Right now, there are just way too many people who
think it's perfectly fine to bundle method names, HTTP headers, and
ETags into a JSON document, in a kind of bastardized
pseudo-http-within-http-because-the-real-http-is-kinda-sorta-not-adequate-in-some-way
model.
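
For illustration only, here is the general shape of the pattern I mean.
This is not any particular vendor's format, just a made-up example of
the kind of tunneling that goes on today:

  POST /batch HTTP/1.1
  Host: api.example.org
  Content-Type: application/json

  {
    "requests": [
      { "method": "GET",   "url": "/users/1",
        "headers": { "If-None-Match": "\"abc\"" } },
      { "method": "PATCH", "url": "/users/2",
        "body": { "name": "Joe" } }
    ]
  }

Every one of those entries is really just an HTTP request wearing a
JSON costume, which is exactly the duplication that good multiplexing
ought to make unnecessary.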

>
>> 6. Please consider incorporating the MAC and Bearer token
>> authentication mechanisms as standard HTTP authentication schemes.
>
> We need proposals to do this.
>
>
>> 7. Please consider incorporating the PATCH method into the core set of
>> HTTP 2.0 Methods
>
> It already is... see the registry.
>
>
>> 8. Please consider incorporating the Prefer header into the core set
>> of HTTP 2.0 request headers.
>
> It's on a separate track. Note our charter; we're barred from introducing new features, in most cases.
>
>
>> 9. The X-HTTP-Method-Override header has emerged as the de facto
>> standard way of getting around intermediaries that inadvertently block
>> extension HTTP methods (like PATCH). It would be helpful for HTTP/2.0
>> to offer some prescriptive solution so that this kind of
>> tunneling-through-POST hack isn't necessary any more.
>
> So that misguided implementers can break that as well? Perhaps then we'll have an X-X-I-Really-Mean-It flag?
>

Sadly, I share the same concern with regard to the whole multiplexing
and batching discussion. Regardless, it would be helpful to at least
explore what, if anything, can be done to make it easier to bootstrap
the use of new methods.
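
For reference, the tunneling-through-POST hack being worked around
typically looks something like this today (the resource and payload are
illustrative):

  POST /a/resource HTTP/1.1
  Host: example.org
  X-HTTP-Method-Override: PATCH

  ... patch document here ...

The intermediary sees an ordinary POST and lets it through, and the
origin server is expected to treat the request as a PATCH.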

>
>> 10. Currently within HTTP/1.1, the 202 Accepted response says "The
>> representation returned with this response SHOULD include an
>> indication of the request's current status and either a pointer to a
>> status monitor or some estimate of when the user can expect the
>> request to be fulfilled", but otherwise does not provide a standardized
>> means of referencing the location of the status monitor or determining
>> whether the asynchronous operation is complete. A variety of means
>> have been proposed but it would be helpful for 2.0 to flesh this out
>> in detail. For instance, a Location header in the 202 response can be
>> used to reference the status monitor; when a user-agent then does a
>> GET on that URL, a 202 response can indicate that the request is still
>> being processed. e.g.
>>
>> // post a long running request //
>> POST /some/resource HTTP/2.0
>> Host: example.org
>>
>> {.. some data to process ..}
>>
>> // get back an asynchronous response //
>> HTTP/2.0 202 Accepted
>> Location: http://.../status-monitor/1
>> Retry-After: 120
>>
>> // check the status 120 seconds later //
>> GET /status-monitor/1 HTTP/2.0
>> Host: ...
>>
>> // response is not yet completed
>> HTTP/2.0 202 Accepted
>> Location: http://.../status-monitor/1
>> Retry-After: 120
>>
>> // check the status 120 seconds later //
>> GET /status-monitor/1 HTTP/2.0
>> Host: ...
>>
>> // processing is complete, server returns a redirect to the actual resource
>> HTTP/2.0 302 Found
>> Location: http://.../the/resource
>>
>> The use of a 2xx status code rather than a 3xx avoids the potential
>> for an endless redirect loop with user-agents that blindly follow
>> unknown/unrecognized redirection codes. Also, if you consider the
>> nature of the status monitor, a notice to the client that the
>> processing is not yet complete is a valid success response.
>>
>> Spelling this kind of behavior out in detail would allow asynchronous
>> operations to be deployed in an interoperable and reliable way.
>
>
> That's out of scope for this work, I think.

Very well. Perhaps it would be worthwhile for me to write something up
on this independently then.

>
> Cheers,
>
> --
> Mark Nottingham   http://www.mnot.net/
>
>
>
