
RE: estimated Content-Length with chunked encoding

From: Robert Brewer <fumanchu@aminus.org>
Date: Thu, 13 Nov 2008 23:34:33 -0800
Message-ID: <F1962646D3B64642B7C9A06068EE1E64059BC86B@ex10.hostedexchange.local>
To: "Daniel Stenberg" <daniel@haxx.se>, <ietf-http-wg@w3.org>

Daniel Stenberg wrote:
> On Fri, 14 Nov 2008, Jamie Lokier wrote:
> 
> > What about sending "Expect: 100-continue" in the request headers, and
> > waiting for a "100 Continue" response.  If you get one, you _ought_ to
> > be able to assume it's a chunked-request-capable HTTP/1.1 server or
> > proxy, and if you don't, you time out, abort that connection (because
> > you don't know if it will interpret Transfer-Encoding), and try again
> > with a non-chunked request.
> >
> > Does that work in principle, disregarding broken implementations?
> >
> > Does it work in practice?
> 
> I think this is a case that would work if things worked the way
> we understand them from reading the RFC, but in practice I believe
> the 100-continue support in the wild is not implemented this
> cleverly. I think a vast number of 100-continue responses
> involve no checks at all; servers simply respond 100 Continue
> without consideration. And then there's the opposite - servers
> that don't like 100-continue at all but would support a chunked
> request.
> 
> I say "think" here because this is just my gut feeling; I haven't
> actually tested the theory.
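For what it's worth, the probe Jamie describes could be sketched roughly
like this (a hand-rolled sketch over a raw socket; the function name,
timeout, and fallback policy are my own choices, not anything from a real
client implementation):

```python
import socket

def probe_100_continue(sock, host, path="/", timeout=3.0):
    """Send POST headers with Expect: 100-continue plus chunked
    Transfer-Encoding, then wait for an interim response.  Returns
    True if the server answered '100 Continue' within the timeout;
    False means the caller should abort this connection and retry
    with a non-chunked request (with a Content-Length)."""
    request = (
        "POST {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Expect: 100-continue\r\n"
        "Transfer-Encoding: chunked\r\n"
        "\r\n"
    ).format(path, host).encode("ascii")
    sock.sendall(request)
    sock.settimeout(timeout)
    try:
        reply = sock.recv(4096)
    except socket.timeout:
        # No interim response before the deadline: per Jamie's scheme,
        # assume the server may not grok chunked requests.
        return False
    return reply.startswith(b"HTTP/1.1 100")
```

Of course this only tests the optimistic path; per Daniel's point, a
server that blindly emits 100 Continue would pass the probe without
actually supporting chunked request bodies.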

Of the half-dozen servers with data in Mark's http-implementations
Google doc, only one said 'no' in the 'chunked bodies' column.

While we're on the subject, why are the highest-profile servers, like
Apache, lighttpd, and nginx, still unrepresented? Doesn't *anyone* know
how they work anymore?


Robert Brewer
fumanchu@aminus.org
Received on Friday, 14 November 2008 07:32:45 GMT
