
Re: Inter-Stream Compression and Delta Encodings

From: Martin Thomson <martin.thomson@gmail.com>
Date: Tue, 25 Apr 2017 11:43:45 +1000
Message-ID: <CABkgnnUGGBHZE-OgkMtTeLPpKftQpH07D9ws7qWst7M_UZVcOw@mail.gmail.com>
To: Patrick McManus <mcmanus@ducksong.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>

On 25 April 2017 at 11:07, Patrick McManus <mcmanus@ducksong.com> wrote:
>  Pro: saves lots of bytes and serialization time (vlad had data on that you
> can find in the ietf 97 meeting materials). arguably also fixes a regression
> from h1 where h2 discourages inlining which results in less efficient
> content-encodings.

This is overstating the case.  The point is that h2 doesn't do enough
to encourage servers to split resources.  This proposal adds more
incentive to do so, by removing the compression cost that splitting
incurs.

We also have to consider the alternative designs in this space (SDCH
primarily), which have some different trade-offs.

Leaving aside the security concerns (which I think still need more
work), I find the current design as proposed by Vlad to be a little
too complex.

The proposal layers orthogonal version negotiation and compressor
negotiation on top of the feature negotiation already carried by the
setting.  Version negotiation can be implied by the setting, and a
single compressor (brotli seems like a reasonable choice) is much
easier to manage.
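A minimal sketch of the saving being chased here, using zlib's preset-dictionary support as a stand-in for the brotli custom dictionaries the proposal would use (the JSON payloads and stream names are invented for illustration): bytes already delivered on one stream seed the compressor for a similar later stream.

```python
import zlib

# Invented example payloads: two similar responses on different streams.
stream_a = b'{"user": "alice", "role": "admin", "active": true}'
stream_b = b'{"user": "bob", "role": "admin", "active": true}'

# Baseline: compress stream B on its own.
plain = zlib.compress(stream_b)

# Cross-stream: compress stream B with stream A's bytes as dictionary.
comp = zlib.compressobj(zdict=stream_a)
shared = comp.compress(stream_b) + comp.flush()

# The receiver must hold exactly the same dictionary to decompress.
decomp = zlib.decompressobj(zdict=stream_a)
assert decomp.decompress(shared) == stream_b

print(len(plain), len(shared))  # the shared-dictionary output is smaller
```

The point of the sketch is only the mechanism: the saving comes entirely from the receiver already holding the other stream's bytes.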

The HTTP/1.1 mapping is complicated by the need to add a header field
with the same sort of complexity as the h2 setting.  In comparison,
SDCH just uses Accept-Encoding.  SDCH also performs better for
short-lived connections by virtue of relying on the cache.
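To make the comparison concrete (the second header name and its syntax are purely hypothetical, not from the draft): SDCH rides on the existing content-coding negotiation, whereas the proposal would need a new HTTP/1.1 field carrying everything the h2 setting carries.

```python
# SDCH reuses the existing Accept-Encoding machinery.
sdch_style = {"Accept-Encoding": "gzip, sdch"}

# Hypothetical equivalent for this proposal: version, compressor, and
# feature support all packed into one new field (name invented here).
proposal_style = {"Compression-Dictionaries": "version=1; compressor=br"}

print(sdch_style, proposal_style)
```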

The size attribute on SET_DICTIONARY adds the need to truncate data
(but which data?  Measured from the start of the stream, or from the
point at which the frame is received?).
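The two readings give different dictionaries.  A toy sketch (both functions are illustrative; neither interpretation is confirmed by the draft):

```python
def truncate_from_stream_start(stream_bytes: bytes, size: int) -> bytes:
    # Reading 1: keep the first `size` bytes ever sent on the stream.
    return stream_bytes[:size]

def truncate_from_frame_receipt(stream_bytes: bytes, size: int) -> bytes:
    # Reading 2: keep `size` bytes counted from the point at which
    # SET_DICTIONARY arrived (here modelled as the buffer's tail).
    return stream_bytes[-size:]

data = b"0123456789"
print(truncate_from_stream_start(data, 4))   # b"0123"
print(truncate_from_frame_receipt(data, 4))  # b"6789"
```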

Why does the client send SET_COMPRESSION_CONTEXT?  I mean, how can it
know that different resources are compressible together without
actually knowing what they contain?

I see why you might want to specifically suppress compression on a
given request, but why provide two different ways to do this?  I
understand that one prohibits compression and the other prohibits both
compression and reference, but it's hard to understand why you would
benefit from not compressing a stream that is later used as a
reference.

Ordering is underspecified here.  If two concurrent streams reference
the same dictionary, how are interleaved DATA frames accumulated into
the dictionary?
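The ordering problem is not cosmetic: a shared dictionary is a byte sequence, so compressor and decompressor must agree on the exact interleaving.  A sketch with zlib (frame payloads invented; zlib's preset dictionary stands in for the extension's accumulated one):

```python
import zlib

# Invented DATA-frame payloads from two concurrent streams.
frames_stream1 = [b"<html><head>", b"<body>hello"]
frames_stream3 = [b"<svg>", b"</svg>"]

# Two plausible interleavings accumulate into different dictionaries.
order_a = frames_stream1[0] + frames_stream3[0] + frames_stream1[1] + frames_stream3[1]
order_b = frames_stream3[0] + frames_stream1[0] + frames_stream3[1] + frames_stream1[1]

payload = b"<html><head><body>hello world"
c = zlib.compressobj(zdict=order_a)
compressed = c.compress(payload) + c.flush()

# Decompressing against a dictionary built in a different interleaving fails.
d = zlib.decompressobj(zdict=order_b)
try:
    d.decompress(compressed)
except zlib.error:
    print("dictionary mismatch")
```

Without a specified accumulation order, the two endpoints can end up in exactly this state.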

This extension is a great example of something in h2 that wouldn't
work in QUIC.  Cross-stream ordering guarantees are non-existent in
QUIC.  It would be nice to understand what a design for QUIC would
look like.

Received on Tuesday, 25 April 2017 01:44:20 UTC
