W3C home > Mailing lists > Public > ietf-http-wg@w3.org > October to December 2008

Re: estimated Content-Length with chunked encoding

From: <stefan.eissing@greenbytes.de>
Date: Fri, 14 Nov 2008 08:57:12 +0100 (CET)
Message-ID: <48195.212.93.4.61.1226649432.squirrel@www.greenbytes.de>
To: "Daniel Stenberg" <daniel@haxx.se>
Cc: ietf-http-wg@w3.org

>> Does it work in practice?
>
> I think this is a case that would work if things worked the way we
> understand them and read the RFC, but in practice I believe the
> 100-continue support in the wild is not implemented this cleverly. I
> think a vast number of 100-continue responses are just not doing any
> checks at all but simply respond OK-continue without consideration.
> And then there's the opposite - servers that don't like 100-continue
> at all but would support a chunked request.
>
> I say "think" here because this is just my gut feeling, I haven't
> actually tested the theory.

In my experience, chunked requests are nowadays much better supported than
proper 100-continue behaviour. So it is probably safe to deduce support for
the former from the latter, but one misses out on some servers that handle
chunked requests yet not 100-continue.

The reason for this is that chunked requests can often be implemented in
quite isolated server code, while proper 100-continue support needs to
reach up through all the API layers (until it crashes into the Servlet
API, for example, which does not support it).

I say "proper" because some servers just send 100-continue whenever they
see an Expect header, without consulting authentication or even the
application layer itself.
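To make the "improper" behaviour concrete, here is a minimal sketch (all
names and the toy server are my own invention, not from any real
implementation): a server that blindly answers 100 Continue on any Expect
header without asking the application layer, talking to a client that
sends a chunked request body.

```python
# Sketch of the improper 100-continue behaviour described above:
# the toy server acknowledges Expect: 100-continue unconditionally,
# never consulting authentication or the application layer.
import socket
import threading

def naive_server(listener):
    conn, _ = listener.accept()
    with conn:
        data = b""
        while b"\r\n\r\n" not in data:          # read request headers
            data += conn.recv(4096)
        headers, body = data.split(b"\r\n\r\n", 1)
        if b"Expect: 100-continue" in headers:
            # Blindly acknowledge - no checks whatsoever
            conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
        while not body.endswith(b"0\r\n\r\n"):  # read chunks to terminator
            body += conn.recv(4096)
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=naive_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"POST /upload HTTP/1.1\r\n"
               b"Host: example\r\n"
               b"Transfer-Encoding: chunked\r\n"
               b"Expect: 100-continue\r\n\r\n")
interim = client.recv(4096)                  # the unconditional 100 Continue
client.sendall(b"5\r\nhello\r\n0\r\n\r\n")   # one 5-byte chunk, then terminator
final = client.recv(4096)
print(interim.split(b"\r\n")[0].decode())    # HTTP/1.1 100 Continue
print(final.split(b"\r\n")[0].decode())      # HTTP/1.1 200 OK
client.close()
```

The client here gets its 100 Continue even though nothing on the server
side ever looked at the request - which is exactly why receiving it tells
you so little.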

Cheers, Stefan

> --
>
>   / daniel.haxx.se
>
>
Received on Friday, 14 November 2008 07:56:28 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 27 April 2012 06:50:57 GMT