Martin,
I just pointed this use case out as an argument *against* requiring a new header providing the uncompressed length. HTTP/2.0 *does* effectively require support for message bodies of indeterminate length, just as HTTP/1.1 does with chunked encoding, so any interoperability issues need to be fixed by the implementors; we should not require a client or a server to provide a content length, period.

A declared length is useful information when provided, but the onus is on the receiver to deal with content that exceeds its capabilities or configured limits, not on the protocol/transport to declare that length up front.
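To make the receiver side concrete, here is a minimal sketch of that "onus on the receiver" behavior using Go's standard net/http package; the /print path and the 64 MB limit are made up for illustration:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    const maxBody = 64 << 20 // hypothetical 64 MB limit; tune to local policy

    func handler(w http.ResponseWriter, r *http.Request) {
        // Enforce the limit whether or not the client sent Content-Length:
        // the wrapped reader fails once the streamed body exceeds maxBody.
        body := http.MaxBytesReader(w, r.Body, maxBody)
        n, err := io.Copy(io.Discard, body) // stand-in for real processing
        if err != nil {
            http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
            return
        }
        log.Printf("accepted %d bytes", n)
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        http.HandleFunc("/print", handler) // hypothetical endpoint
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The limit trips at the same byte count whether or not the sender declared a length, which is exactly the guard a receiver needs anyway.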
On Mar 14, 2014, at 5:21 PM, Martin Thomson <martin.thomson@gmail.com> wrote:
> On 14 March 2014 14:15, Michael Sweet <msweet@apple.com> wrote:
>> The client is generally rasterizing pages of content for the printer at some
>> agreed-upon resolution, bit depth, and color space. This raster data is
>> typically already compressed with a simple algorithm such as PackBits
>> (run-length encoding) and is thus already variable-length per page with no
>> way to know ahead of time how large it will be. Add gzip to the mix and you
>> *really* don't know what the final length will be.
>
> This seems like a case of "I know the server capabilities well enough
> to do this". I'm not sure that we could safely do the same thing for
> every HTTP client in existence.
>
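And to illustrate why the raster case above can't know its length up front, a toy sketch (Go again; the sendPage helper and the single-page framing are my invention) of a client that gzips on the fly; the compressed size simply doesn't exist until the final byte is written:

    package raster

    import (
        "compress/gzip"
        "io"
        "net/http"
    )

    // sendPage streams one page of already RLE-compressed raster data,
    // gzipping it on the fly; no Content-Length is ever computed.
    func sendPage(url string, raster io.Reader) error {
        pr, pw := io.Pipe()
        go func() {
            gz := gzip.NewWriter(pw)
            _, err := io.Copy(gz, raster) // final size unknown until EOF
            if err == nil {
                err = gz.Close() // flushes the trailing gzip block
            }
            pw.CloseWithError(err)
        }()
        req, err := http.NewRequest("POST", url, pr)
        if err != nil {
            return err
        }
        req.Header.Set("Content-Encoding", "gzip")
        // req.ContentLength is left unset, so the transport frames the
        // body itself (chunked on HTTP/1.1, DATA frames on HTTP/2).
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }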
_________________________________________________________
Michael Sweet, Senior Printing System Engineer, PWG Chair