Re: Choosing a header compression algorithm

Hiya,

I've a question. It might be silly or maybe just one to
punt on for later, but just in case...

If HTTP/2.0 does header compression, and if some form of
header authentication (e.g. a DKIM-like thing, as recently
proposed for iSchedule) were to be standardised, should
the authentication cover the compressed or uncompressed
headers?

The former would seem to be bad when considering APIs,
but the latter might mean that canonicalisation needs to
be considered when picking a compression algorithm.

The kind of canonicalisation requirement might be that
one needs to be able to define a reasonable c14n function
such that c14n(X) = c14n(uncompress(compress(X))).
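
To make that concrete, here's a toy sketch of the property
(nothing below is from either draft; c14n, compress and
uncompress are all made-up stand-ins):

import zlib

def c14n(headers):
    # Hypothetical canonical form: lowercase names, strip outer
    # whitespace from values, sort by (name, value).
    return sorted((n.lower(), v.strip()) for n, v in headers)

def compress(headers):
    # Stand-in for whatever header compressor gets picked
    # (Delta2, HeaderDiff, ...).
    wire = "\n".join(f"{n}: {v}" for n, v in headers)
    return zlib.compress(wire.encode("utf-8"))

def uncompress(blob):
    lines = zlib.decompress(blob).decode("utf-8").split("\n")
    return [tuple(line.split(": ", 1)) for line in lines]

X = [("Host", "example.com"), (":method", "GET"),
     ("Accept", " text/html ")]

# Signing c14n(X) should still verify after the headers have
# been through compress/uncompress on the wire:
assert c14n(X) == c14n(uncompress(compress(X)))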

Mark's item 3 below triggered this. I guess one could
argue that it might be a requirement for compression
that it not break higher-level canonicalisation, which
isn't quite the same as being able to reconstitute the
semantics. (For example, with timestamps that specify
a zero TZ offset, or with list ordering maybe.)
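
To be a bit more concrete about that (again purely
hypothetical; I'm not claiming either draft normalises
anything like this): a codec that rewrote a zero offset as
"GMT", or re-ordered list members, would preserve the
semantics but not the bytes, so the c14n function would
have to absorb exactly those differences:

original = [
    ("Date", "Thu, 21 Mar 2013 15:41:07 +0000"),
    ("Cache-Control", "no-store, no-cache"),
]

# What a (hypothetical) normalising codec might hand back:
# the same instant with the zero offset written as "GMT",
# and the list members in a different order.
reconstituted = [
    ("Date", "Thu, 21 Mar 2013 15:41:07 GMT"),
    ("Cache-Control", "no-cache, no-store"),
]

def c14n(headers):
    # Made-up rules: a c14n that wants to survive this has to
    # normalise exactly the things the codec is allowed to touch.
    out = []
    for name, value in headers:
        name = name.lower()
        if name == "date":
            value = value.replace("GMT", "+0000")
        elif name == "cache-control":
            members = sorted(v.strip() for v in value.split(","))
            value = ", ".join(members)
        out.append((name, value))
    return sorted(out)

assert original != reconstituted              # the bytes differ
assert c14n(original) == c14n(reconstituted)  # but c14n agrees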

Ta,
S.

PS: Apologies if this is all obvious when one reads
the algorithm descriptions, which I've not;-)


On 03/21/2013 07:11 AM, Mark Nottingham wrote:
> Previously, we've talked about starting with just a delta-encoding approach for our first implementation draft. In Orlando, we focused primarily on two proposals:
> 
> * Delta2
>   Draft: http://tools.ietf.org/html/draft-rpeon-httpbis-header-compression-03
>   Python Implementation: https://github.com/http2/compression-test/tree/master/compressor/delta2
> 
> * HeaderDiff
>   Draft: http://tools.ietf.org/html/draft-ruellan-headerdiff-00
>   Python Implementation: https://github.com/http2/compression-test/tree/master/compressor/headerdiff
> 
> As I understand it, Herve et al want to work on making HeaderDiff more resistant to CRIME, and hopefully we'll see the results of that in the very near future. 
> 
> In the meantime, I'd like everyone to become familiar with both drafts and the characteristics of their implementations, so that we can have an informed discussion of them.
> 
> I'd like to see a few things happen while we do this:
> 
> 1) We need to do an apples-to-apples comparison of these compressors to see how they behave under a range of constraints (especially memory).
> 
> 2) I'd like us to verify that they are respecting those constraints, and that they're implemented in an equivalent way (this is likely to be manual).
> 
> 3) It would be very good to have a test suite that verifies that they correctly reconstitute the semantically significant parts of the headers; in particular, large/unusual values, ordering where appropriate, etc. Our current header corpus undoubtedly has holes in this regard.
> 
> If you make any progress along these lines (dare I ask for volunteers?), please share with the list.
> 
> Looking at our issues list, this is one of the major items preventing us from getting to a first implementation draft, so I'd like to choose a way forward soon -- especially since we're choosing a starting point, and the approach we take can evolve or, if necessary, be replaced.
> 
> Regards,
> 
> --
> Mark Nottingham   http://www.mnot.net/
> 

Received on Thursday, 21 March 2013 15:41:07 UTC