- From: Jamie Lokier <jamie@shareable.org>
- Date: Wed, 14 Nov 2007 19:24:56 +0000
- To: Adrien de Croy <adrien@qbik.com>
- Cc: Bjoern Hoehrmann <derhoermi@gmx.net>, "'HTTP Working Group'" <ietf-http-wg@w3.org>
Adrien de Croy wrote:
> This makes it (very unfortunately) a bad idea for a proxy to decide to
> try and save bandwidth by inserting an Accept-Encoding: gzip header as
> mentioned above, since that invalidates the ETag that the client may
> have provided. I guess that's why the spec says proxies must not touch
> end-to-end headers.

A proxy shouldn't do that, for the reason you give. But a proxy _is_
allowed to use the TE header instead, for the same effect. In that case,
the same ETag _should_ be used for the compressed (with
Transfer-Encoding) and uncompressed representations of the same entity.
This is allowed because TE and Transfer-Encoding are hop-by-hop headers,
while Accept-Encoding and Content-Encoding are end-to-end headers.

A server, or even another proxy in the chain, can still use stored (or
cached on the fly) compressed/uncompressed versions, for efficiency as
you describe. How a server (or proxy) keeps both versions in sync if it
stores both is an implementation detail, outside the scope of the
protocol, but it's not complicated or difficult.

So you can get the performance you want of not compressing on the fly,
while at the same time implementing proxies which auto-compress and
auto-decompress by inserting a hop-by-hop request header.

-- Jamie
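To make the hop-by-hop distinction concrete, here is a minimal sketch in
Python, assuming a toy proxy that models headers as plain dicts; the
helper names, entity body and ETag value are made up for illustration
and are not from the original message or any real proxy:

    # Sketch only: a hop-by-hop TE/Transfer-Encoding round trip leaves
    # the end-to-end ETag untouched, unlike rewriting Accept-Encoding.
    import gzip

    def forward_request(client_headers: dict) -> dict:
        """Build the headers the proxy sends upstream for this hop."""
        upstream = dict(client_headers)
        # TE is hop-by-hop, so the proxy may add it without changing the
        # end-to-end meaning of the request (unlike Accept-Encoding).
        upstream["TE"] = "gzip"
        upstream["Connection"] = "TE"
        return upstream

    def forward_response(upstream_headers: dict, body: bytes):
        """Strip the hop-by-hop transfer coding before the next hop."""
        downstream = dict(upstream_headers)
        if "gzip" in downstream.pop("Transfer-Encoding", ""):
            body = gzip.decompress(body)
        # The end-to-end ETag passes through untouched: only the transfer
        # coding changed on this hop, not the entity itself.
        return downstream, body

    # The origin can keep a single ETag for both forms of the entity.
    entity = b"<html>hello</html>"
    origin_headers = {"ETag": '"abc123"', "Transfer-Encoding": "gzip"}
    headers, plain = forward_response(origin_headers, gzip.compress(entity))
    assert headers["ETag"] == '"abc123"' and plain == entity

Nothing in this sketch requires the client to have asked for
compression: TE and Transfer-Encoding only ever concern the two ends of
a single connection, which is why any validator the client supplied
stays valid.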
Received on Wednesday, 14 November 2007 19:25:10 UTC