
Re: [google-gears-eng] Re: Deploying new expectation-extensions

From: Brian McBarron <bpm@google.com>
Date: Fri, 12 Sep 2008 10:53:43 -0400
Message-ID: <f478254d0809120753r7d3be634l88993f86bdf71f6f@mail.gmail.com>
To: "Julian Reschke" <julian.reschke@gmx.de>
Cc: "Charles Fry" <fry@google.com>, gears-eng@googlegroups.com, "Mark Nottingham" <mnot@yahoo-inc.com>, "Alex Rousskov" <rousskov@measurement-factory.com>, "HTTP Working Group" <ietf-http-wg@w3.org>
On Tue, Jul 22, 2008 at 2:47 PM, Julian Reschke <julian.reschke@gmx.de> wrote:
>
> ETags are good. No problem with that.
>
> That being said, I liked Roy's proposal, in which the resumable request gets
> a separate URI assigned, exposed in the Location header. Maybe I read
> something the wrong way, or maybe I'm confusing what the two documents
> propose, but wouldn't it be best to always use that design?
>
> The problem in offering widely differing ways to do the same thing is that
> at least in the beginning, clients and servers would need to implement all
> of them in order to achieve some kind of interoperability.  It would be
> great if this could be avoided.


I want to call out one point here.  In our proposal, the server is in full
control of the upload URI and the presence/absence of an ETag.  This has
several advantages:

1) The server can decide whether it is most convenient (from an
implementation point of view) to identify operations via a unique URI and/or
an ETag.
2) Although there are two potential mechanisms, a server only needs to use
one.  The server never has to "implement all of them".
3) The client or intermediary will always have a URI to deal with,
regardless of whether it is unique or not.  Indeed, the client doesn't have
enough information to know if a URI is unique.
4) Thus, the only added complexity of supporting either or both methods is
that a client or intermediary must correctly forward an ETag when it
_optionally_ appears.

To me, the overhead of (4) is very minor considering the power and
flexibility of (1).
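The client-side rule in point (4) is small enough to sketch.  This is a
hypothetical illustration, not part of either draft: the function name and
the use of If-Match to carry the forwarded ETag are my assumptions; only
the "always use the returned URI, forward the ETag only when present" rule
comes from the points above.

```python
def build_resume_request(upload_uri, etag=None):
    """Hypothetical sketch: prepare to resume an interrupted request.

    The client always targets whatever URI the server handed back,
    whether or not that URI is unique (the client can't tell).  The
    ETag is forwarded (here via If-Match, an assumption) only when
    the server chose to issue one.
    """
    headers = {}
    if etag is not None:  # optional: the server may not use ETags at all
        headers["If-Match"] = etag
    return upload_uri, headers
```

Either way the server decided to identify the operation, the client's job
is the same, which is the point of (4).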

On another note, I think it's very much worth designing a protocol that is
compatible with our original intent, which I summarize below.  While this
isn't feasible as long as intermediaries are not 1.1 compliant, we can hope
that will change eventually, enabling this extension to really shine, IMO.

Assume that a client is conducting requests as normal, with no regard for
whether the server supports this extension.  At some point, it makes a
non-idempotent request to the server, and based on the characteristics of
the request, the server realizes the operation has a real probability of
failure due to network disconnect.  This could be due to a particularly
long-running upload, a lossy network connection, or a back-end delay in
processing the request (such as a credit card transaction).  The server
dynamically decides to enable resumability for the request by pushing down
a 103 intermediate response.  A compliant client will either ignore the 103
(if it doesn't support resumability) or properly enable support for
continuing the request in case of network failure.  In the case of an
upload, that means we can continue where we left off.  In the case of a
long-running credit-card transaction, it means we can safely poll for the
final response code without the risk of double billing the user.  The
benefit is that no additional planning has to be done between the client
and server ahead of time, so there is no overhead on regular traffic.
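The client side of that flow can be sketched as a small state machine.  To
be clear about what is assumed here: the 103 intermediate response, the
resume URI (exposed in Location, per Roy's proposal), and polling for the
final status come from the discussion above; the function name, the event
representation, and the return values are invented for illustration.

```python
def handle_response_stream(events):
    """Hypothetical sketch: process (status, headers) events for one request.

    A client that doesn't understand 103 simply skips it.  One that does
    remembers the resume URI, so a dropped connection can be recovered by
    polling that URI instead of blindly re-sending the non-idempotent
    request (avoiding, e.g., a double credit-card charge).
    """
    resume_uri = None
    for status, headers in events:
        if status == 103:
            # Intermediate response: server has enabled resumability.
            resume_uri = headers.get("Location", resume_uri)
        elif status == "disconnect":
            # Network failure mid-request.
            if resume_uri is not None:
                return ("poll", resume_uri)     # safe to query final status
            return ("retry-unsafe", None)       # must not blindly re-send
        else:
            return ("done", status)             # final response received
    return ("incomplete", resume_uri)
```

Note that when no 103 ever arrived, the client is in exactly the position
it is in today, which is why there is no cost when the extension is unused.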
Received on Friday, 12 September 2008 14:54:28 GMT