
Re: Confusion over caching (was Re: Logic Bag concerns)

From: Roy T. Fielding <fielding@avron.ICS.UCI.EDU>
Date: Sun, 10 Dec 1995 17:34:43 -0800
To: Shel Kaphan <sjk@amazon.com>
Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <9512101734.aa20718@paris.ics.uci.edu>
>> A separate validator field would have
>> to be generated by all servers for all cachable resources, consisting
>> of an opaque value which is only usable for metadata comparison (i.e.,
>> it does nothing to ensure that the entity received is the same as that
>> sent by the origin server).  It requires that the server be capable and
>> willing to generate this opaque validator even when the entity is
>> not directly controlled by the server.
>>
> 
> I think it would be helpful if you would explain these claims rather
> than just claiming them.  Yes, the header would need to be present for
> any cachable resource (except for backwards compatibility with 1.0).

Which means that no 1.0 resource (or script designed for 1.0) can
generate something useful for cache validation.  Given the presence of
hierarchical caching, this is sufficient to reject the special-purpose
case as not fulfilling the requirements for HTTP/1.1.

> But why do you say it is only usable for metadata comparison?  If a
> part of a server is configured to use algorithm X to determine its own
> stated content-validator, then that part of the server must be able to
> respond to requests that use content-validators as generated by
> algorithm X, no?  And isn't it only the origin server that has to
> worry about generating these headers?  

No and no.  Only the recipient can test the integrity of the message
it received, and to do so it needs to know the algorithm used
to generate the validator.  If the validator is something useful, like
Content-MD5 or Content-SHA or Content-Checksum or even Content-Length,
then it can be used for both message integrity checks AND validation,
which means you don't duplicate information supplied for the special case.
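As a sketch of that dual use (the header name and digest encoding here
follow the Content-MD5 convention, i.e. the base64-encoded MD5 of the
entity body; the helper names are mine, not from any specification):

```python
import base64
import hashlib

def content_md5(entity_body: bytes) -> str:
    # Content-MD5 style value: base64 encoding of the 128-bit MD5 digest
    # computed over the entity body.
    return base64.b64encode(hashlib.md5(entity_body).digest()).decode("ascii")

def verify_integrity(received_body: bytes, header_value: str) -> bool:
    # Message integrity check: the recipient recomputes the digest over
    # what it actually received and compares it to the header value.
    return content_md5(received_body) == header_value

def still_valid(cached_value: str, current_value: str) -> bool:
    # Cache validation: the very same value, compared as an opaque string,
    # tells a cache whether its stored entity still matches the origin's.
    return cached_value == current_value

body = b"<html>example entity</html>"
tag = content_md5(body)
assert verify_integrity(body, tag)            # end-to-end integrity holds
assert not verify_integrity(body + b"x", tag) # corruption is detected
assert still_valid(tag, content_md5(body))    # and the same tag revalidates
```

The point being that one meaningful value does both jobs, whereas an
opaque-only validator would have to be sent in addition to any checksum.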

>> In contrast, IF does not make any assumptions or special requirements
>> on the information being compared.  If an opaque value is available,
>> then it can be compared.  If an MD5 is available, then it can be
>> used as both an MD5 checksum and for cache validation.  If any
>> useful metainformation (as judged by the client) is available, then
>> it can be used within a comparison.
>>
> 
> The point of the opaque validator is to remove the smarts from the
> client side.

No.  The point is to provide reliable validation.  There is no reason
why this cannot be done just as easily and just as reliably within an
extensible syntax, and with whatever validation-capable metainfo is
present in any given cachable entity, as it would be to do so for just
a special case.  Therefore, the special case loses.

> It really seems like there are multiple issues being
> discussed at once, which should be being discussed separately:
> 
> 1. What are the foreseeable "high-level" reasons for doing conditional
> requests, and how should those conditional requests be encoded in the
> protocol?  We have yet to see a plausible scenario that demonstrates
> this need.  Without stated requirements this seems like an exercise in
> futility.

I have already provided several.  As far as I am concerned, you must
prove that they are not plausible, since the solution provided does
satisfy the needs of opaque validation.  Your requirements are fulfilled
by a general syntax, my requirements are not fulfilled by a special-case
syntax, and therefore the only reasonable design is the general case.

> 2. Is there a requirement or benefit of having a general case solution
> to this that outweighs its complexity and the difficulty of specifying
> the semantics exactly?  General case solutions are nice, where there
> is a general case problem to be solved, but the added hair of having
> to put an expression parser in at this level seems quite questionable
> without a definite need.

I have already answered this question twice.  There is no semantic
ambiguity and no additional complexity if reasonable constraints are
placed on the set of required expressions.
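To illustrate, here is a minimal sketch of such a constraint: if the
required IF expressions are limited to, say, a comma-separated
conjunction of Field = "value" equality tests (a hypothetical syntax I
am assuming for illustration, not the draft's exact grammar), then no
general expression parser is needed at all:

```python
import re

# One clause: a header-style token, '=', and a quoted string.
_CLAUSE = re.compile(r'\s*([!#$%&\'*+.^_`|~0-9A-Za-z-]+)\s*=\s*"([^"]*)"\s*')

def parse_if(header: str):
    # A constrained IF value is just a comma-separated list of clauses;
    # anything else is rejected, so the "parser" is a single regex.
    clauses = []
    for part in header.split(","):
        m = _CLAUSE.fullmatch(part)
        if m is None:
            raise ValueError("malformed IF clause: %r" % part)
        clauses.append((m.group(1), m.group(2)))
    return clauses

def precondition_holds(header: str, metainfo: dict) -> bool:
    # Conjunction: every clause must match the entity's metainformation.
    return all(metainfo.get(name) == value
               for name, value in parse_if(header))

meta = {"Content-MD5": "Q2hlY2tzdW0=", "Content-Length": "1234"}
assert precondition_holds('Content-MD5 = "Q2hlY2tzdW0="', meta)
assert precondition_holds(
    'Content-MD5 = "Q2hlY2tzdW0=", Content-Length = "1234"', meta)
assert not precondition_holds('Content-Length = "99"', meta)
```

The syntax stays extensible (any field name can appear on the left)
while the required behavior stays as simple as the special case.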

> 3. Should we mix the high level mechanism with the low-level
> cache-integrity mechanisms?  What are the benefits/costs of that?

Irrelevant -- both represent the same semantics for interpreting
the request, and therefore are at the same level within HTTP.

>> Most importantly, we don't have to specify the interaction between
>> N types of preconditions if we only use one precondition field.
> 
> Doesn't backwards compatibility already imply that this is required?

No, it doesn't -- allowing additional expressions does not change
the semantics of IF.  Using separate header fields for every precondition
does change the semantics of interpreting the request for each additional
field.  I KNOW THIS to be true because I've written and rewritten the HTTP
specification over 60 times now and can see this effect every time a new
request header field is added.

 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92717-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/
Received on Sunday, 10 December 1995 17:52:01 EST
