Re: A proposal for Shared Dictionary Compression over HTTP

Hi,

quoting from page 2:

> Existing techniques compress each response in isolation, and so cannot take advantage of
> cross-payload redundancy. For example, retrieving a set of HTML pages with the same
> header, footer, inlined JavaScript and CSS requires the retransmission of the same data
> multiple times. This paper proposes a compression technique that leverages this
> cross-payload redundancy.

That is true, but wouldn't a *much* simpler approach be not to inline these
data chunks in the first place? If they are served as separate resources, they
can be individually cached and need not be retrieved multiple times...
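To illustrate (URLs and cache lifetime made up for the example): if the shared
chunks are referenced as, say, /common/site.css rather than inlined, a response
such as

    HTTP/1.1 200 OK
    Cache-Control: max-age=86400
    Content-Type: text/css

    ...shared CSS...

lets any conforming cache reuse those bytes for every page that references
them, so the redundant data crosses the wire only once, without any new
negotiation mechanism.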

Another concern is that, as currently specified, HTTP responses lose the 
property of being completely self-contained; one way to fix that would be 
for the server to always return the set of dictionaries that were applied.
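For instance (header name and dictionary identifier purely illustrative, not
taken from the draft), the server could say something like

    HTTP/1.1 200 OK
    Content-Encoding: sdch
    Applied-Dictionaries: /dict/common_v1

so that anyone looking at the stored response alone can tell which
dictionaries are needed to decode the body, without having to reconstruct
the request that produced it.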

Finally, there's a concern with the IPR status of VCDIFF; see 
<https://datatracker.ietf.org/ipr/search/?option=rfc_search&rfc_search=3284> 
and 
<http://lists.w3.org/Archives/Public/ietf-http-wg/2004AprJun/0086.html>, 
in which Roy said:

> Servers SHOULD NOT support the VCDIFF format until its IP restrictions
> are clarified and made available royalty-free for all uses of HTTP,
> at a minimum, and not just use within HTTP/1.1 as defined in 2001.

BR, Julian
