
Re: [google-gears-eng] Re: Deploying new expectation-extensions

From: Julian Reschke <julian.reschke@gmx.de>
Date: Sat, 13 Sep 2008 10:09:13 +0200
Message-ID: <48CB7529.7060006@gmx.de>
To: Brian McBarron <bpm@google.com>
CC: Charles Fry <fry@google.com>, gears-eng@googlegroups.com, Mark Nottingham <mnot@yahoo-inc.com>, Alex Rousskov <rousskov@measurement-factory.com>, HTTP Working Group <ietf-http-wg@w3.org>

Brian McBarron wrote:
> I want to call out one point here.  In our proposal, the server is in 
> full control of the upload URI and the presence/absence of an ETag. 
>  This has several advantages:
> 
> 1) The server can decide whether it is most convenient (from an 
> implementation point of view) to identify operations via a unique URI 
> and/or an ETag.
> 2) Of the two potential mechanisms, a server only needs to use one.  The 
> server never has to "implement all of them".
> 3) The client or intermediary will always have a URI to deal with, 
> regardless of whether it is unique or not.  Indeed, the client doesn't 
> have enough information to know if a URI is unique.
> 4) Thus, the only complexity added to the system by supporting 
> either/both methods is that a client or intermediary must correctly 
> forward an ETag when it _optionally_ appears.

OK, so a server can just choose one approach, and for the client it 
doesn't matter.

So the only overhead is in the specification, which, I think, is still 
sub-optimal. I'd really prefer a single way to do this, as long as it 
doesn't conflict with another goal.

> To me, the overhead of (4) is very minor considering the power and 
> flexibility of (1).

Correctly using ETags is something we really should require from parties 
using this protocol (so I wouldn't even consider that an "overhead").
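
Just to make (4) concrete: as I read it, the only client-side logic is 
something like the sketch below. The helper name and the use of If-Match 
to echo the ETag are purely my illustration, not something the proposal 
pins down here:

  from typing import Optional

  def build_resume_request(upload_uri: str, etag: Optional[str]):
      """Prepare a follow-up request against the server-chosen upload URI.

      Both values come straight from the server's interim response. The
      client never needs to know whether the URI is unique; it just uses
      it. The ETag, when present, is echoed back unchanged (If-Match is
      only an illustrative choice of header).
      """
      headers = {}
      if etag is not None:
          headers["If-Match"] = etag
      return upload_uri, headers

In other words, the only extra branch on the client is "was there an ETag 
or not"; everything else is identical for both server strategies.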

> On another note, I think it's very much worth designing a protocol that 
> is compatible with our original intent, which I summarize below.  While 
> this isn't feasible as long as intermediaries are not 1.1 compliant, we 
> can hope that will change eventually, enabling this extension to really 
> shine, IMO.
> 
> Assume that a client is conducting requests as normal, with no regard 
> for whether the server supports this extension.  At some point, it makes 
> a non-idempotent request to the server, and based on the 
> characteristics of the request, the server realizes the operation has a 
> probability of failure due to network disconnect.  This could be due to 
> a particularly long running upload, a lossy network connection, or a 
> back-end delay in processing the request (such as a credit card 
> transaction).  Dynamically, the server decides to enable resumability 
> for the request by pushing down a 103 intermediate response.  A 
> compliant client will either ignore the 103 (if it doesn't support 
> resumability) or properly enable support for continuing the 
> request in case of network failure.  In the case of an upload, it means 
> we can continue where we left off.  In the case of a long-running 
> credit-card transaction, it means we can safely poll for the final 
> response code, without the risk of double billing the user.  The benefit 
> is that no additional planning had to be done between the client and 
> server ahead of time, so there is no overhead in regular traffic.

Good summary.
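
To check that I read the flow the same way, here is a rough client-side 
sketch. The callable names and the use of ConnectionError as a stand-in 
for "network failure" are mine, purely for illustration; nothing here is 
meant to fix the actual wire format:

  def run_with_resumability(start_request, resume_or_poll):
      """Client-side shape of the flow described above (sketch only).

      start_request(on_interim) issues the original non-idempotent
      request and returns the final response; if a 103 interim response
      arrives first, it calls on_interim() with whatever resumability
      data the server chose to include (upload URI, optional ETag, ...).
      resume_or_poll(checkpoint) then either continues an upload or
      polls for the final status against that server-supplied URI.
      """
      checkpoint = None

      def on_interim(info):
          nonlocal checkpoint
          checkpoint = info

      try:
          return start_request(on_interim)
      except ConnectionError:
          if checkpoint is None:
              # The server never offered resumability, so we are no
              # better or worse off than today: the outcome is unknown.
              raise
          # Safe path: continue the upload, or poll for the final
          # response, without risking a duplicate side effect.
          return resume_or_poll(checkpoint)

The nice property is exactly the one you point out: if no 103 ever shows 
up, the failure path degenerates to what clients do today, so nothing has 
to be negotiated up front.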

It seems to me that if we can rely on HTTP/1.1 throughout, this is 
relatively easy to achieve. My recommendation is to define the simplest 
possible extension based on HTTP/1.1 (optimal amount of optional stuff: 
zero :-), and only then consider extensions/workarounds to get it to work 
with HTTP/1.0. -- Speaking of which, is there anything problematic about 
HTTP/1.0 besides the missing 1xx status codes?

Best regards, Julian
Received on Saturday, 13 September 2008 08:10:00 GMT
