Re: [google-gears-eng] Re: Deploying new expectation-extensions

From: Jamie Lokier <jamie@shareable.org>
Date: Mon, 7 Apr 2008 17:25:43 +0100
To: Henrik Nordstrom <henrik@henriknordstrom.net>
Cc: Adrien de Croy <adrien@qbik.com>, Charles Fry <fry@google.com>, Julian Reschke <julian.reschke@gmx.de>, Brian McBarron <bpm@google.com>, google-gears-eng@googlegroups.com, Mark Nottingham <mnot@yahoo-inc.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20080407162543.GA2220@shareable.org>

Henrik Nordstrom wrote:
> > > I think until we adopt proper handling of uploads (i.e. pre-authorised / 
> > > negotiated etc) we'll have problems - esp with large uploads and auth.  
> > > But there I go flogging that poor dead horse again...
> 
> 100 Continue + chunked encoding accomplishes this quite well, allowing
> for any length of negotiation before the actual upload is sent. It's not
> the spec's fault these features haven't been properly adopted.
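
[For reference, the exchange Henrik describes can be sketched as below: the
client sends its headers with "Expect: 100-continue" and "Transfer-Encoding:
chunked", waits for the server's 100 (Continue) interim response (or a final
4xx refusal), and only then streams the chunked body. This is a minimal
illustration, not from any library; the helper names are mine.]

```python
def build_request(host: str, path: str) -> bytes:
    """Headers for a chunked upload that asks permission before sending the body."""
    return (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Expect: 100-continue\r\n"        # client pauses here for 100 or a final status
        "Transfer-Encoding: chunked\r\n"  # body length need not be known up front
        "\r\n"
    ).encode("ascii")

def chunk_encode(*parts: bytes) -> bytes:
    """Encode body parts as an HTTP/1.1 chunked message body."""
    out = b""
    for part in parts:
        if part:  # a zero-length chunk would terminate the body early
            out += b"%x\r\n%s\r\n" % (len(part), part)  # hex size, CRLF, data, CRLF
    return out + b"0\r\n\r\n"  # last-chunk plus final CRLF ends the body
```

[Because the body is deferred until after the 100 (Continue), any amount of
authentication or other negotiation can happen first without the client
wasting the upload, e.g.
`sock.sendall(build_request("example.com", "/upload"))`, read the interim
response, then `sock.sendall(chunk_encode(b"hello"))`.]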

Breakage with HTTP/1.0 proxies and servers is quite a good reason not
to use chunked requests for general-purpose HTTP over "the internet".

I don't buy the argument that once you've seen an HTTP/1.1 response
from a domain, you can assume it's an HTTP/1.1 server and proxy chain
for all future requests to that domain.  It's very likely, but not
reliable.  Proxy routes change, reverse proxies route requests to
different servers depending on URL, etc.

As a result, there has been no perceived need for chunked requests to
servers, and no real testing of them; even today, some otherwise good
HTTP/1.1 servers don't support chunked requests.

-- Jamie
Received on Monday, 7 April 2008 16:27:04 GMT