- From: Henrik Nordstrom <henrik@henriknordstrom.net>
- Date: Thu, 15 Nov 2007 02:47:46 +0100
- To: Adrien de Croy <adrien@qbik.com>
- Cc: Bjoern Hoehrmann <derhoermi@gmx.net>, "'HTTP Working Group'" <ietf-http-wg@w3.org>
- Message-Id: <1195091266.30372.63.camel@henriknordstrom.net>
On Wed, 2007-11-14 at 19:10 +1300, Adrien de Croy wrote:
> Until about 25 minutes ago, I was a proponent of different encodings of
> the same resource being treated as the same resource, on the grounds
> that the encoding should be lossless. However, I now believe this is
> impossible, for a reason I haven't seen discussed, so I thought I'd
> toss it out there.

It's been discussed several times already, and it is where at least one
of the major web servers goofs up. It's very clearly spelled out in the
RFC: an ETag is unique to the variant of the resource, not to the
resource itself. Responses with different Content-Encoding are different
variants.

> It's impossible to guarantee that changes in the original document can
> be made and propagated in step with the (possibly cached) compressed
> version.

Further, the two are certainly not range-compatible, which rules out
merging sub-ranges from different encodings.

> This makes it (very unfortunately) a bad idea for a proxy to decide to
> try and save bandwidth by inserting an Accept-Encoding: gzip header as
> mentioned above, since that invalidates the ETag that the client may
> have provided.

No, it doesn't. But it then becomes the proxy's responsibility to manage
ETag mappings properly, or to remove the ETag entirely. And yes, it's
generally a bad idea, because you break the object identity chain.

> I guess that's why the spec says proxies must not touch
> end-to-end headers.

Semantically transparent proxies MUST NOT. Semantically non-transparent
proxies MAY, but the specs do not really care much about those. It is
the implementer's obligation to verify that any semantic
non-transparency introduced by the proxy does not violate the
specifications.

Regards
Henrik
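[Editor's note: a minimal sketch of the proxy obligation Henrik describes. The function name and ETag-suffix convention are hypothetical, not from any real proxy; the point is only that a transforming proxy must either give each encoding variant its own ETag or drop the ETag it cannot map safely.]

```python
import gzip

def gzip_response(headers, body):
    """Hypothetical on-the-fly compression step in a non-transparent proxy.

    The gzipped body is a *different variant*, so the origin's ETag must
    not be forwarded unchanged. Here we derive a distinct ETag for the
    gzip variant (one possible mapping); if no safe mapping exists, we
    remove the ETag entirely -- the other option Henrik mentions.
    """
    headers = dict(headers)
    if headers.get("Content-Encoding", "identity") != "identity":
        return headers, body  # already encoded; leave it alone

    body = gzip.compress(body)
    headers["Content-Encoding"] = "gzip"
    headers["Content-Length"] = str(len(body))

    etag = headers.get("ETag")
    if etag and etag.startswith('"') and etag.endswith('"'):
        # Map "abc" -> "abc-gzip" so each variant keeps a unique ETag
        # (hypothetical scheme; any scheme works if it is consistent).
        headers["ETag"] = etag[:-1] + '-gzip"'
    else:
        headers.pop("ETag", None)  # cannot map safely; strip it

    return headers, body
```

A symmetric mapping is needed on the request path: an If-None-Match or If-Match carrying the derived ETag has to be translated back before it reaches the origin, which is exactly the identity-chain bookkeeping that makes this a bad idea in practice.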
Received on Thursday, 15 November 2007 01:48:08 UTC