Re: estimated Content-Length with chunked encoding

David Morris wrote:
> 
> Either % completed or estimated remaining requires computing an estimate
> of the total data to be transferred. I don't see a difference in the impact
> of either choice on a server. I think there are many cases where a
> generated result can be estimated to a 95%+ accuracy, but not to the exact
> size needed for content-length. There is no incentive to do it today, but
> if it could be utilized to improve the user's experience, many web
> application developers would be happy to do so. In addition, a jsp/php/asp
> engine could even watch the actual size generated for each request and
> recognize some pages with a small standard deviation in generated size.
> Use that value automatically.
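
The tracking idea above could be sketched roughly as follows (a minimal
sketch, not a real engine feature; the class name, the 5% relative-stddev
policy, and the use of Welford's online algorithm are all my assumptions):

```python
class SizeEstimator:
    """Track the generated sizes of one page and offer an estimated total
    only when past sizes have a small relative standard deviation."""

    def __init__(self, max_rel_stddev=0.05):  # 5% threshold is arbitrary
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's algorithm)
        self.max_rel_stddev = max_rel_stddev

    def record(self, size):
        """Fold one observed response size into the running statistics."""
        self.n += 1
        delta = size - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (size - self.mean)

    def estimate(self):
        """Return an estimated size, or None if the history is too noisy."""
        if self.n < 2 or self.mean == 0:
            return None
        stddev = (self.m2 / (self.n - 1)) ** 0.5
        if stddev / self.mean > self.max_rel_stddev:
            return None
        return int(round(self.mean))
```

A page whose generated sizes cluster tightly (say 990-1010 bytes) would
yield an estimate; one that swings between 100 and 10000 bytes would not,
and the server would simply send no hint.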

The example I had in my head was more oriented to the work required.  E.g. if
this is a database access or map/reduce problem, the server could report what
% of the records has been searched (with the understanding that the number of
bytes associated with individual records, the actual amount of work to be
performed per record, and the number of record matches that remain can all
vary widely).  Still, humans seem to want some measure of how much might
still remain to be completed.

> I'd rather see raw sizes, as from a recipient's perspective I might
> be able to manage resources better. A percentage is only useful for end
> user presentation without interpolating from the amount of data already
> received. Raw numbers make computation of a percentage trivial while more
> easily supporting other use cases.
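
The asymmetry being argued is that the percentage is derivable from raw
sizes but not the reverse; a minimal sketch (function name is mine):

```python
def percent_complete(received_bytes, estimated_total_bytes):
    """Derive a progress percentage from raw byte counts.

    Returns None when no usable estimate exists; clamps at 100 because
    an estimate may undershoot the actual transfer size.
    """
    if estimated_total_bytes <= 0:
        return None
    return min(100.0, 100.0 * received_bytes / estimated_total_bytes)
```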

My point is that managing local resources by imaginary numbers is probably
an exercise in futility.  Again, what is your threshold for pain?  A 10x
allocation that you never consume?  Or underestimating the data requirements
by a factor of 10?  These trade-offs have to be understood before a
space-oriented false advertisement is announced to the client.

Received on Tuesday, 21 October 2008 01:49:16 UTC