Re: New Version Notification for draft-vkrasnov-h2-compression-dictionaries-01.txt

2016-11-03 10:23 GMT+09:00 Vlad Krasnov <vlad@cloudflare.com>:
>
>> On Nov 2, 2016, at 5:06 PM, Martin Thomson <martin.thomson@gmail.com> wrote:
>>
>>>
>>> What?
>>
>> HTTP/2 - at the layer Vlad is talking about - only knows about the
>> bytes that comprise a request or response.  What Vlad proposes
>> requires that the HTTP/2 layer look at the Content-Encoding header
>> field, determine that it *ends* with a compatible value and only then
>> do things to feed data to the compressor.  That's a massive
>> architectural upheaval.
>
> I know that it is not entirely "canonical"; however, even if implemented entirely at the protocol level, HTTP/2 still has to know that no other content-encoding was used at the application layer.

I agree with Martin that using an HTTP header for this purpose is not
a good idea.

One practical issue I am afraid of with using Content-Encoding for
this purpose is that the checksum of the transferred file becomes
different from the original, since there is no deterministic way to
reconstruct the compressed form of a gzip stream. The draft alters
part of the response (e.g. the first 32KB, after decompression), so it
is impossible for a client to reconstruct how the response looked on
the origin server in its compressed form.

I think this would cause interoperability issues, because the checksum
(or a signature) of a transmitted archive is sometimes distributed
separately.
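
To illustrate the non-determinism, here is a minimal Python sketch
(the payload and settings are made up for illustration) showing that
two perfectly valid gzip encodings of the same body yield different
checksums:

    import gzip, hashlib

    # Hypothetical response body, just for illustration.
    body = b"example response body " * 1024

    # Two valid gzip encodings of the same bytes; different settings (or,
    # in practice, different gzip implementations) generally produce
    # different byte streams, and therefore different checksums...
    a = gzip.compress(body, compresslevel=9)
    b = gzip.compress(body, compresslevel=1)
    print(hashlib.sha256(a).hexdigest())
    print(hashlib.sha256(b).hexdigest())

    # ...even though both decompress to the identical original payload.
    assert gzip.decompress(a) == gzip.decompress(b) == body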

Using a Transfer-Encoding header would not cause this issue, but I
think things would become simpler if the identifier of the compression
method being applied were transmitted as an HTTP/2 frame rather than
an HTTP header.
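
As a purely hypothetical sketch (the frame type 0xf0 and the one-byte
payload below are invented for illustration and are not taken from the
draft or RFC 7540), signalling the method in its own frame could be as
small as:

    import struct

    # Invented frame type; only the outer 9-byte framing follows RFC 7540
    # section 4.1 (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id).
    HYPOTHETICAL_FRAME_TYPE = 0xf0

    def build_frame(stream_id, method_id):
        """Serialize a frame whose one-byte payload identifies the
        compression method applied to the DATA frames of the stream."""
        payload = bytes([method_id])
        header = struct.pack("!I", len(payload))[1:]         # 24-bit length
        header += bytes([HYPOTHETICAL_FRAME_TYPE, 0x00])     # type, flags
        header += struct.pack("!I", stream_id & 0x7fffffff)  # R bit + stream id
        return header + payload

    print(build_frame(stream_id=5, method_id=0x01).hex())
    # 000001 f0 00 00000005 01

The idea being that the wire-level compression would stay below the
HTTP layer, rather than leaking into Content-Encoding.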

Personally, I wonder if the approach could be adjusted to use a shared
compression context across the DATA frames of multiple streams. Done
that way, the draft would become more attractive to other use cases
(e.g. JSON APIs in microservices) where many tiny responses are sent
very frequently. Even with such a change, a server could still use the
trailing part of a pre-compressed file, by locking the response being
transmitted down to that particular one.
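
As a rough illustration of the gain I have in mind (the tiny JSON
responses below are made up), reusing one zlib context across many
small messages, with a sync flush at each message boundary, compresses
far better than compressing each message on its own:

    import json, zlib

    # Hypothetical tiny API responses, just for illustration.
    responses = [json.dumps({"id": i, "status": "ok", "items": []}).encode()
                 for i in range(1000)]

    # Independent compression: every response pays the full header and
    # "cold window" cost on its own.
    independent = sum(len(zlib.compress(r)) for r in responses)

    # Shared context: one compressor reused across responses; Z_SYNC_FLUSH
    # emits each response as a decodable unit while keeping the history
    # window warm for the next one.
    ctx = zlib.compressobj()
    shared = 0
    for r in responses:
        shared += len(ctx.compress(r))
        shared += len(ctx.flush(zlib.Z_SYNC_FLUSH))

    print("independent:", independent, "bytes")
    print("shared context:", shared, "bytes")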

Anyway, I am looking forward to discussing the I-D at IETF, since I
think there would be value in solving the issue that the I-D tries to
solve.



> At least from the nginx viewpoint this is very simple, but I did not really consider how difficult it might be to implement in FF.



-- 
Kazuho Oku
