
Re: estimated Content-Length with chunked encoding

From: William A. Rowe, Jr. <wrowe@rowe-clan.net>
Date: Mon, 20 Oct 2008 19:36:49 -0500
Message-ID: <48FD2421.6080502@rowe-clan.net>
To: Jamie Lokier <jamie@shareable.org>
CC: Henrik Nordstrom <henrik@henriknordstrom.net>, Greg Dean <dean.greg@gmail.com>, ietf-http-wg@w3.org

Jamie Lokier wrote:
> Henrik Nordstrom wrote:
>> On Mon, 2008-10-20 at 13:52 -0700, Greg Dean wrote:
>>> Transfer-Encoding: chunked
>>> Estimated-Content-Length: 300000
>>> This would allow the recipient of such a message to prepare for a
>>> message of a certain size.
>> Looks useful to me, for the reasons you outlined and a bunch more. See
>> for example the NTLM auth chunked discussion a year ago or so..
>> For responses it's meaningful even without chunked.
>> I don't see anything wrong with it, especially not with the length being
>> an estimate and not the exact expected length.
> In some circumstances you may be able to refine the estimate as the
> message is being transmitted.
> Chunk extensions ("chunk-extension") would suit that:
>     1000;estimated-remaining=299000
>     (1000 bytes)
>     1000;estimated-remaining=298000
>     (1000 bytes)
> I don't know if chunk extensions break in the real world, though.

Or, permit

     (1000 bytes)
     (1000 bytes)

My only concern about implementing an 'estimated' extension is what your
pain tolerance is, in terms of deviation.  Would a min or max be a better
fit to the problem set?
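
For instance, a min/max pair might look like this on the wire (header
names purely hypothetical, just to make the shape concrete):

     Transfer-Encoding: chunked
     Min-Content-Length: 250000
     Max-Content-Length: 350000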

Given that precomputing the result size is often suboptimal for today's
assembled responses, yet there is a desire to have something to share with
the user, the % notation might work well.  I'm concerned that the estimate
will become an expectation, one that is also costly to precompute but
rarely useful in practice.
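
If the estimate does end up driving a user-visible progress figure, the
client presumably wants to clamp it, roughly along these lines (Python
sketch; the header name is the one proposed in this thread, and the
estimate is treated strictly as a hint):

    def progress_percent(received_bytes, estimated_total):
        # estimated_total comes from Estimated-Content-Length (or a
        # refined estimated-remaining value); it is a hint, not a promise.
        if not estimated_total:
            return None                  # no estimate, nothing to show
        # Clamp so a low estimate never reports completion before the
        # final chunk actually arrives.
        return min(99, (100 * received_bytes) // estimated_total)
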
Received on Tuesday, 21 October 2008 00:37:38 UTC
