Re: A proposal for Shared Dictionary Compression over HTTP

Indeed; e.g.,
   http://www.mnot.net/javascript/hinclude/
I use this extensively on my own site, and AFAICT several others are
using it as well.
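
For anyone who hasn't looked at it, the core of the trick is tiny. A
minimal modern sketch (using fetch and a hypothetical data-include-src
attribute in place of hinclude's actual hx:include markup):

    // Replace each placeholder element with the body of the resource
    // it points at; each fragment is then cacheable on its own.
    async function processIncludes(): Promise<void> {
      const els = document.querySelectorAll<HTMLElement>("[data-include-src]");
      for (const el of Array.from(els)) {
        const src = el.getAttribute("data-include-src");
        if (!src) continue;
        try {
          const resp = await fetch(src);
          if (resp.ok) el.innerHTML = await resp.text();
        } catch {
          // on failure, keep the placeholder's fallback content
        }
      }
    }

    document.addEventListener("DOMContentLoaded", () => {
      void processIncludes();
    });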

WRT doing it at the transport level -
   http://www.cs.washington.edu/homes/djw/papers/spring-sigcomm00.pdf

Schemes like this are already implemented in a number of "WAN  
optimisation" devices (e.g., Riverbed, Silver Peak, AcceleNet).
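
The rough shape of those schemes, for anyone who hasn't read the paper:
split the byte stream into content-defined chunks, and replace any chunk
the other end has already seen with a short reference. A toy sketch in
that spirit (the hash and parameters here are stand-ins, not the paper's
Rabin fingerprints):

    type Token = { ref: number } | { data: Uint8Array };

    const WINDOW = 16;
    const MASK = 0x1fff; // ~8 KiB expected chunk size

    // Cut the buffer wherever a cheap rolling-style hash hits a magic
    // value, so boundaries survive upstream insertions and deletions.
    function chunkBoundaries(buf: Uint8Array): number[] {
      const cuts: number[] = [];
      let hash = 0;
      for (let i = 0; i < buf.length; i++) {
        hash = ((hash << 1) + buf[i]) >>> 0;
        if (i >= WINDOW && (hash & MASK) === MASK) {
          cuts.push(i + 1);
          hash = 0;
        }
      }
      if (cuts[cuts.length - 1] !== buf.length) cuts.push(buf.length);
      return cuts;
    }

    // Emit a short reference for chunks both ends already hold,
    // and the raw bytes otherwise.
    function encode(buf: Uint8Array, seen: Map<string, number>): Token[] {
      const out: Token[] = [];
      let start = 0;
      for (const end of chunkBoundaries(buf)) {
        const chunk = buf.subarray(start, end);
        const key = Buffer.from(chunk).toString("base64"); // stand-in fingerprint
        const ref = seen.get(key);
        if (ref !== undefined) {
          out.push({ ref });
        } else {
          seen.set(key, seen.size);
          out.push({ data: chunk });
        }
        start = end;
      }
      return out;
    }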

Cheers,


On 09/09/2008, at 10:55 AM, Brian Smith wrote:

>
> Ian Hickson wrote:
>> On Mon, 8 Sep 2008, Wei-Hsin Lee wrote:
>>>
>>> We have a paper that we wrote to describe this idea, which
>>> I have put online here: http://groups.google.com/group/SDCH
>>
>> For those of you not used to Google Groups, the paper is here:
>>
>> http://sdch.googlegroups.com/web/Shared_Dictionary_Compression_over_HTTP.pdf
>
> It seems to me that AJAX can be used to solve this problem in a simpler
> manner. Take Gmail for example--it downloads the whole UI once and then
> uses AJAX to get the state-specific data. The example from the PPT
> showed a 40% reduction in the number of bytes transmitted when using
> SDCH (beyond what GZIP provided) for Google SERPs. I bet you could do
> about that well just by AJAXifying the SERPs (making them more like
> Gmail) + using regular HTTP cache controls + using a compact,
> application-specific data format for the dynamic parts of the page +
> GZIP. Maybe Google's AJAX Search API already does that? In fact, you
> might not even need AJAX for this; maybe IFRAMEs are enough.
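
Concretely, the approach Brian sketches here looks something like the
following; the /results endpoint and its JSON shape are invented for
illustration:

    // The UI shell is a static, long-cacheable page; per query, only a
    // compact JSON payload travels (gzipped by the server as usual).
    interface Result { title: string; url: string; snippet: string; }

    async function runSearch(query: string): Promise<void> {
      const resp = await fetch("/results?q=" + encodeURIComponent(query), {
        headers: { Accept: "application/json" },
      });
      const results: Result[] = await resp.json();
      const list = document.getElementById("results")!;
      list.innerHTML = "";
      for (const r of results) {
        const li = document.createElement("li");
        const a = document.createElement("a");
        a.href = r.url;
        a.textContent = r.title;
        li.append(a, document.createTextNode(" - " + r.snippet));
        list.append(li);
      }
    }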
>
> I also noticed that this proposal makes the request and response HTTP
> headers larger in an effort to make entity bodies smaller. It seems
> that over time there is a trend of increasingly large HTTP headers as
> applications stuff more and more metadata into them; it is not all
> that unusual for a GET request to require more than one packet now,
> especially when longish URI-encoded IRIs are used in the message
> header. Firefox cut down on the request headers it sends [2]
> specifically to increase the chances that GET requests are small
> enough to fit in one packet. Since HTTP headers are naturally *highly*
> repetitive (especially for resources from the same server), a
> mechanism that could compress them would be ideal. Perhaps this could
> be recast as transport-level compression so that it could be deployed
> as a TLS/IPv6/IPsec compression scheme.
>
> Regards,
> Brian
>
> [1] http://www.whatwg.org/specs/web-apps/current-work/#offline
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=309438
>    https://bugzilla.mozilla.org/show_bug.cgi?id=125682
>

--
Mark Nottingham       mnot@yahoo-inc.com

Received on Friday, 12 September 2008 03:48:31 UTC