Re: Support for gzip at the server #424

The sender doesn't know the uncompressed size before it transmits?

That seems... odd.

Right now we have a world where nobody can *ever* use one of the features
of the protocol (compression in the upload path) because there is no
interop bridge between the two worlds.
Given how cheap it is to decompress gzip, this seems strange.
-=R
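
As a minimal sketch of how cheap that decompression can be (illustrative
only, not from this thread; the wrapper name and wiring are assumptions),
a Go net/http middleware can inflate a gzip request body on the fly so the
handler behind it never sees the compressed form:

    package main

    import (
        "compress/gzip"
        "io"
        "log"
        "net/http"
    )

    // withGzipRequests inflates gzip-encoded request bodies transparently.
    func withGzipRequests(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Header.Get("Content-Encoding") == "gzip" {
                zr, err := gzip.NewReader(r.Body)
                if err != nil {
                    http.Error(w, "malformed gzip body", http.StatusBadRequest)
                    return
                }
                defer zr.Close()
                r.Body = io.NopCloser(zr)
                r.Header.Del("Content-Encoding")
                r.Header.Del("Content-Length")
                // The uncompressed length is unknown until the stream ends,
                // which is exactly the interop problem discussed in this thread.
                r.ContentLength = -1
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        echo := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            io.Copy(w, r.Body) // sees plain, already-inflated bytes
        })
        log.Fatal(http.ListenAndServe(":8080", withGzipRequests(echo)))
    }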


On Fri, Mar 14, 2014 at 1:49 PM, Michael Sweet <msweet@apple.com> wrote:

> Roberto,
>
> That doesn't work in a lot of situations.  Particularly with IPP, we are
> often dealing with gigabytes of print data and buffering that on the client
> prior to sending it is generally not feasible and causes a poor user
> experience (mainly delayed printing...)
>
>
> On Mar 14, 2014, at 1:42 PM, Roberto Peon <grmocg@gmail.com> wrote:
>
> Implementation experience here says that many servers barf if they don't
> get a Content-Length for an upload.
>
> IMHO, the simple solution is to mandate the presence of a header that
> indicates the uncompressed content length when sending compressed data on
> HTTP/2. This is generically useful in many applications, in both directions.
> -=R
>
>
> On Fri, Mar 14, 2014 at 10:06 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
>
>> Receiving chunked requests is not supported with high enough certainty
>> that anyone yet wants to send them in a generic web context (apps that
>> know their server's capabilities a priori are a different story). HTTP/2
>> negotiation at first seems to make that easier, but supporting easy
>> gatewaying back to HTTP/1.0 makes it hard again :(
>>
>> As much as I would really like this feature, I think it's reasonable not
>> to include it.
>>
>>
>> On Fri, Mar 14, 2014 at 5:52 PM, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
>>
>>> * Martin Thomson wrote:
>>> >On 14 March 2014 02:20, Roland Zink <roland@zinks.de> wrote:
>>> >> ISIZE is in the footer and not the header.
>>> >
>>> >So that leads me back to the original conclusion.  Since
>>> >intermediation from HTTP/2 to HTTP/1.1 will require the Content-Length,
>>> >and extracting that from a gzip'd body would require buffering an entire
>>> >request, I'm inclined to say that this is too hard.
>>>
>>> It would help if you explained why you think Content-Length is needed in
>>> this scenario. `Transfer-Encoding: chunked` is supported by servers and
>>> intermediaries. Likely not perfectly, but if that is the concern, we
>>> should make that very explicit.
>>> --
>>> Björn Höhrmann · mailto:bjoern@hoehrmann.de · http://bjoern.hoehrmann.de
>>> Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
>>> 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/
>>>
>>>
>>
>
> _________________________________________________________
> Michael Sweet, Senior Printing System Engineer, PWG Chair
>
>
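
On the ISIZE point above: per RFC 1952 the uncompressed size is stored in
the last four bytes of the gzip trailer (little-endian, modulo 2^32), so an
intermediary can only read it once it has the entire body in hand. A minimal
Go sketch of that, illustrative only and not part of the thread:

    package main

    import (
        "bytes"
        "compress/gzip"
        "encoding/binary"
        "errors"
        "fmt"
        "log"
    )

    // uncompressedSize reads ISIZE from a complete gzip member: the last four
    // bytes of the trailer, i.e. the uncompressed size modulo 2^32.
    func uncompressedSize(gz []byte) (uint32, error) {
        if len(gz) < 8 { // trailer = CRC32 (4 bytes) + ISIZE (4 bytes)
            return 0, errors.New("truncated gzip stream")
        }
        return binary.LittleEndian.Uint32(gz[len(gz)-4:]), nil
    }

    func main() {
        // Build a small gzip body in memory to stand in for a fully
        // buffered upload.
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        if _, err := zw.Write(bytes.Repeat([]byte("print data "), 1000)); err != nil {
            log.Fatal(err)
        }
        zw.Close()

        n, err := uncompressedSize(buf.Bytes())
        if err != nil {
            log.Fatal(err)
        }
        // Only knowable once the whole stream has arrived; it also wraps for
        // bodies larger than 4 GiB, so it is not a reliable Content-Length.
        fmt.Println("ISIZE (uncompressed bytes):", n)
    }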

Received on Friday, 14 March 2014 21:08:58 UTC