Re: PROPOSAL: Weak Validator definition [i101]

Nothing comes to a "grinding halt" if we don't cache these responses. 
You say the likelihood is high that a same-second change is small; I 
say the likelihood is high that the change is significant.

If weak etags mean "it was the only feasible way to generate one", then 
I would advocate that clients stop using them. They just degrade cache 
transparency with no real benefit.

If I hit my browser's "reload" button five times, what should happen? 
Should there be room for the possibility of seeing the wrong page 
because the browser used a weak etag in a conditional request?
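
To make the risk concrete, here is a rough sketch in Python (all tag 
values are made up; the comparison follows the weak comparison function 
of RFC 2616, section 13.3.3):

    # The body changed within one second, so the server still hands
    # out the same weak tag. Under weak comparison the conditional
    # GET matches and the browser keeps its stale copy.

    def handle_conditional_get(if_none_match, current_etag):
        strip = lambda t: t[2:] if t.startswith('W/') else t
        if strip(if_none_match) == strip(current_etag):
            return '304 Not Modified'   # cached entry is reused
        return '200 OK'                 # fresh body is sent

    # Five reloads, five times the same answer:
    print(handle_conditional_get('W/"1205852545"', 'W/"1205852545"'))
    # -> 304 Not Modified, even though the stored page is wrong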


Robert


On Tue, Mar 18, 2008 at 03:12:25PM +0100, Henrik Nordstrom wrote:
> 
> On Sun, 2008-03-16 at 16:23 +0100, Robert Siemer wrote:
> 
> > That the static file serving code of common servers contains no 
> > useful weak etag implementations does not surprise me at all. How 
> > should they know about semantic equivalence?
> > 
> > I still don't know why this mechanism has to be an illusion. 
> 
> It's an illusion only because the meaning of rough semantic equivalence
> (or no significant change in semantics) isn't defined in technical
> terms, means different things to different people, and cannot be
> enforced by the protocol. 
> 
> But this does not make weak validators a useless feature by any means.
> They are a very interesting aspect of HTTP. It just means we need to
> get the language cleaned up so people do not get so confused about what
> the specs really mean.
> 
> In the spec, conditions using weak etags are treated pretty much the
> same as conditions based on Last-Modified, which carries no semantic
> guarantee at all, even if one MAY deduce some strength from the
> modification time once enough time has passed.
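
As a rough sketch of that "strength from time passed" idea, modeled on 
the 60-second rule of RFC 2616, section 13.3.3 (the header values below 
are invented):

    # A Last-Modified value may be treated as strong if it is at
    # least 60 seconds older than the Date of the response that
    # carried it; the resource can then not have changed again
    # within the same second without the timestamp moving.

    from email.utils import parsedate_to_datetime

    def last_modified_is_strong(last_modified, date):
        lm = parsedate_to_datetime(last_modified)
        dt = parsedate_to_datetime(date)
        return (dt - lm).total_seconds() >= 60

    print(last_modified_is_strong('Tue, 18 Mar 2008 15:10:00 GMT',
                                  'Tue, 18 Mar 2008 15:12:25 GMT'))
    # -> True: more than a minute passed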
> 
> Weak validators (and weak etags specifically) are a fuzzy feature of
> the protocol which can be used for many interesting applications. But
> the only "semantic equivalence" that can really be counted on is that a
> strong ETag guarantees equality down to the octet, while a weak etag
> (or a condition based only on Last-Modified) signals that the object
> most likely has not changed meaning in any significant manner.
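
The two comparison functions could be sketched like this (function 
names are mine, following the comparison rules of RFC 2616, section 
13.3.3):

    def strong_compare(a, b):
        # Only two strong etags can match strongly: octet equality.
        return not a.startswith('W/') and not b.startswith('W/') and a == b

    def weak_compare(a, b):
        # Weakness is ignored; only the opaque tags must agree.
        strip = lambda t: t[2:] if t.startswith('W/') else t
        return strip(a) == strip(b)

    assert weak_compare('W/"x"', '"x"')        # weak match
    assert not strong_compare('W/"x"', '"x"')  # no strong match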
> 
> The problem with "semantic equivalence" is a language problem: defining
> what "semantic equivalence" means. What the spec means to say is
> basically this (from "Entity tags"):
> 
>    A "weak entity tag," indicated by the "W/" prefix, MAY be shared by
>    two entities of a resource only if the entities are equivalent and
>    could be substituted for each other with no significant change in
>    semantics. A weak entity tag can only be used for weak comparison.
> 
> or this, from "Weak and strong validators":
> 
>    However, there might be cases when a server prefers to change the
>    validator only on semantically significant changes, and not when
>    insignificant aspects of the entity change. A validator that does not
>    always change when the resource changes is a "weak validator."
> 
> not the strict "semantic equivalence" it mistakenly speaks of in other
> sections or paragraphs.
> 
> What has got people upset about weak etags is the (in my opinion) valid
> assumption taken by Apache and a few other implementations that if an
> object changes twice in the same second, it's most likely not a
> significant change in semantics. That isn't a bad assumption, but it is
> also one which cannot be guaranteed 100%, and because it cannot be
> guaranteed it must not be used, right? (Sure... never heard of a weak
> condition which cannot be guaranteed, have you?)
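
A sketch of that heuristic (not Apache's actual code; the tag format 
here is invented) might look like:

    import os, time

    def file_etag(path):
        # Derive the tag from file metadata, as static file servers
        # typically do.
        st = os.stat(path)
        tag = '"%x-%x-%x"' % (st.st_ino, st.st_size, int(st.st_mtime))
        # While the mtime second is still "open" the file could change
        # again without the tag changing, so the tag is only weak.
        if int(time.time()) == int(st.st_mtime):
            return 'W/' + tag
        return tag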
> 
> The world is full of such things which fall into the "most likely"
> category, and if we did not make use of such assumptions where they
> make sense, then most things would grind to a halt, not only HTTP..
> 
> Regards
> Henrik
> 
> 
