RE: estimated Content-Length with chunked encoding

Daniel Stenberg wrote:
> On Fri, 14 Nov 2008, Jamie Lokier wrote:
> 
> > What about sending "Expect: 100-continue" in the request headers,
> > and waiting for a "100 Continue" response?  If you get one, you
> > _ought_ to be able to assume it's a chunked-request-capable HTTP/1.1
> > server or proxy, and if you don't, you time out, abort that
> > connection (because you don't know if it will interpret
> > Transfer-Encoding), and try again with a non-chunked request.
> >
> > Does that work in principle, disregarding broken implementations?
> >
> > Does it work in practice?
> 
> I think this is a case that would work if things worked the way
> we understand them from reading the RFC, but in practice I believe
> the 100-continue support in the wild is not implemented this
> cleverly. I suspect a vast number of servers do no checks at all
> and simply respond with 100 Continue without consideration. And
> then there's the opposite: servers that don't like 100-continue
> at all but would accept a chunked request.
> 
> I say "think" here because this is just my gut feeling, I haven't
> actually tested the theory.
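
For what it's worth, the probe Jamie describes would look roughly like
this (an untested Python sketch; the "PUT /upload" target is made up,
and a real client would also have to cope with servers that skip the
interim 100 and send a final status such as 417 straight away):

    import socket

    def probe_100_continue(host, port=80, timeout=3.0):
        # Send a request with "Expect: 100-continue" and a chunked
        # Transfer-Encoding, then wait briefly for the interim reply.
        # "PUT /upload" is a made-up target for illustration only.
        request = (
            "PUT /upload HTTP/1.1\r\n"
            "Host: {0}\r\n"
            "Expect: 100-continue\r\n"
            "Transfer-Encoding: chunked\r\n"
            "\r\n".format(host)
        )
        sock = socket.create_connection((host, port), timeout=timeout)
        try:
            sock.sendall(request.encode("ascii"))
            try:
                reply = sock.recv(1024).decode("iso-8859-1")
            except socket.timeout:
                # No interim response in time: assume the server (or an
                # intervening proxy) may not grok chunked requests, and
                # retry with Content-Length on a fresh connection.
                return False
            # Anything other than "100 Continue" (e.g. an immediate
            # 417) also counts as "no" in this crude sketch.
            return reply.startswith("HTTP/1.1 100")
        finally:
            # A real client would keep the connection open and send the
            # chunked body on a "yes"; the probe just closes it.
            sock.close()

On a "yes" the client would go on to send the chunked body; on a "no"
it retries with a Content-Length, which is exactly where Daniel's
doubts about deployed 100-continue behaviour bite.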

Of the half-dozen servers with data in Mark's http-implementations
Google doc, only one said 'no' in the 'chunked bodies' column.

While we're on the subject, why are the highest-profile servers, like
Apache, lighttpd, and nginx, still unrepresented? Doesn't *anyone* know
how they work anymore?


Robert Brewer
fumanchu@aminus.org
