- From: Yves Lafon <ylafon@w3.org>
- Date: Wed, 10 Sep 2008 12:59:55 -0400 (EDT)
- To: Jon Butler <jkbutler@google.com>
- cc: Wei-Hsin Lee <weihsinl@google.com>, ietf-http-wg@w3.org
On Wed, 10 Sep 2008, Jon Butler wrote:

> Yves--
>
> You raise an interesting point. SDCH was designed to be applied to
> the message body before gzip (since cross-payload redundancy is much
> harder to detect after gzipping the payloads). One of the differences
> between the two sets of headers is that transfer encodings must be
> applied after and removed before content encodings, since transfer
> encodings are properties of the message and content encodings are a
> property of the entity inside the message. So, we have a choice:
> either we indicate both SDCH and gzip in the Content-Encoding header,
> or both in the Transfer-Encoding header. Since the prior art for gzip
> is to indicate it in the Content-Encoding header (a holdover from the
> HTTP/1.0 standard, as I understand it), we proposed putting sdch there
> as well.

Hum, there is no reason to keep gzip only at the Content-Encoding level. In fact, at least one browser uses it. It can have a huge impact in front of a proxy handling TE: gzip, especially on slow networks (like mobile). The fact that gzip is used in Content-Encoding rather than Transfer-Encoding is mostly due to implementations, and implementations may evolve, especially when new products enter the market, so it's better if the specification allows both (and even better if implementations allow both).

> From my reading of the standard, it would be more in keeping with the
> HTTP/1.1 standard to put both encodings (gzip and sdch) in the
> TE/Transfer-Encoding headers, but it is not clear that it would be
> more practical.
>
> We'd be happy to hear others' opinions on this.
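For illustration, a sketch of the two alternatives under discussion. The encodings are listed in the order they are applied to the body (sdch first, then gzip, per the description above); the exact `sdch` token is an assumption, since SDCH was only a proposal at this point.

```
Alternative 1: both encodings as content codings (properties of the entity)

  HTTP/1.1 200 OK
  Content-Type: text/html
  Content-Encoding: sdch, gzip

Alternative 2: both encodings as transfer codings (properties of the message,
negotiated via the TE request header; chunked may also be applied)

  HTTP/1.1 200 OK
  Content-Type: text/html
  Transfer-Encoding: sdch, gzip, chunked
```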
> Jonathan
>
> On Wed, Sep 10, 2008 at 4:17 AM, Yves Lafon <ylafon@w3.org> wrote:
>>
>> On Mon, 8 Sep 2008, Wei-Hsin Lee wrote:
>>
>>> Hi,
>>>
>>> Over the last few weeks we've been experimenting with a way to get better
>>> compression for HTTP streams using a dictionary-based compression scheme,
>>> where a user agent obtains a site-specific dictionary that then allows
>>> pages on the site that have many common elements to be transmitted much
>>> more quickly.
>>
>> One question: why use Accept-Encoding/Content-Encoding instead of
>> TE/Transfer-Encoding?
>>
>> Cheers,
>>
>> --
>> Baroula que barouleras, au tiéu toujou t'entourneras.
>>
>> ~~Yves

--
Baroula que barouleras, au tiéu toujou t'entourneras.

~~Yves
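The shared-dictionary idea described above can be sketched with zlib's preset-dictionary feature. This is not the actual SDCH/VCDIFF algorithm, just a minimal illustration of the principle: if client and server share a site-specific dictionary, each page that repeats the dictionary's contents compresses far better. The sample dictionary and page bytes are invented for the example.

```python
import zlib

# Hypothetical site-specific dictionary: boilerplate shared across pages.
dictionary = b"<html><head><title>Example Site</title></head><body>"
# A page that repeats the boilerplate, as many pages on one site do.
page = b"<html><head><title>Example Site</title></head><body>Hi</body></html>"

# Baseline: compress without any shared dictionary.
plain = zlib.compress(page)

# Compress with the dictionary preset (server side).
co = zlib.compressobj(zdict=dictionary)
with_dict = co.compress(page) + co.flush()

# Decompress on the client, which holds the same dictionary.
do = zlib.decompressobj(zdict=dictionary)
restored = do.decompress(with_dict)
assert restored == page

# The dictionary-backed stream should be noticeably smaller,
# since the boilerplate prefix becomes a back-reference.
print(len(plain), len(with_dict))
```

The same intuition drives the ordering question in the thread: the dictionary step must see the raw payload, so it runs before gzip, whichever header family carries the two codings.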
Received on Wednesday, 10 September 2008 17:00:33 UTC