
Re: Comments on cachedraft.txt

From: Roy T. Fielding <fielding@avron.ICS.UCI.EDU>
Date: Fri, 02 Feb 1996 02:56:22 -0800
To: http-caching@pa.dec.com
Message-Id: <9602020256.aa07930@paris.ics.uci.edu>
More comments -- sorry I can't make it to the meeting.  Could someone
please print it out for the attendees?

General comment:  The purpose of the subgroup is to produce a list
of proposed changes to the 1.1 draft, not to produce a separate draft.
It is not acceptable for required caching issues to be separate from the
main protocol specification (unlike content negotiation and AA, which could
become separate drafts).

>        The proposed design uses opaque cache validators and
>        explicit expiration values to allow the server to control
>        the tradeoff between cache performance and staleness of the
>        data presented to users.  The server may choose to ensure
>        that a user never unwittingly sees stale data, or to
>        minimize network traffic, or to compromise between these
>        two extremes.  The proposed design also allows the server
>        to control whether a client sees stale data after another
>        client performs an update.

This is an incorrect design for HTTP caching.  The cache does not exist
on behalf of the origin server, and therefore any requirements placed
by the origin server will always be secondary to those of the user.

>   Server-based control is also important because HTTP may be used for a
>   wide variety of ``applications.''  The design of a Web application
>   (for example, a stock-trading system) may be peculiar to the server,
>   while Web browsers are generic to all Web applications.  Because the
>   precise behavior of an application cannot be known to the implementor
>   of a browser, but can be controlled by the implementor of a server,
>   servers need to have the option of direct control over the caching
>   mechanism.  Because the world is not perfect, we also need to give
>   users and browsers some control over caching, but this is at best a
>   contingency plan.

This is an incorrect assumption.  The server is not capable of knowing
the needs of the user, and it is the needs of the user that take precedence
in the design of the WWW -- any other ordering results in systems that
purposely defy the design in order to satisfy the user's needs.
Therefore, the caching model MUST be defined according to the user's needs
and only allow the server to provide input into the decisions made to
satisfy those needs.  This allows the user to decide what is and is not
correct behavior.

>2.2. Definitions ...

>   valid           A cached entity is valid, with respect to a given
>                   request at a given time, if it is exactly what the
>                   origin server would return in response to that
>                   request at that time.
>   invalid         A cached entity is invalid, with respect to a given
>                   request at a given time, if it is not exactly what
>                   the origin server would return in response to that
>                   request at that time.

These definitions are incorrect.  The correct definitions are

    valid           A cached entity is valid, with respect to a given
                    request at a given time, if it corresponds to the
                    resource and method requested (i.e., shares the same
                    cache key) and
                      a) is what the user expects to receive from the
                         request as defined by the explicit requirements
                         placed by the user on cache behavior or within
                         a private cache's configuration; or,
                      b) barring conflicting requirements from the user,
                         fits within the scope of any explicit requirements
                         placed on the cached entity by the origin server
                         in order to control cache behavior; or,
                      c) barring conflicting requirements from the user
                         or origin server, fits within the scope of the
                         requirements placed on the cache by the cache
                         maintainer, including any requirements defined
                         by heuristic evaluation of the cached entity; or
                      d) is exactly what the origin server would return
                         in response to that request at that time.

    invalid         Not valid.
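
The precedence ordering in that definition -- user requirements first, then
the origin server's, then the cache maintainer's -- can be sketched as
follows (a hypothetical illustration; none of these names come from the
draft or from HTTP itself):

```python
def is_valid(entry, request, now):
    """Decide whether a cached entry may answer a request at time `now`,
    following the precedence in the definition above."""
    if entry["cache_key"] != request["cache_key"]:
        return False                              # must share the cache key
    user_req = request.get("user_requirement")
    if user_req is not None:
        return user_req(entry, now)               # (a) the user wins
    server_req = entry.get("server_requirement")
    if server_req is not None:
        return server_req(entry, now)             # (b) then the origin server
    cache_policy = entry.get("cache_policy")
    if cache_policy is not None:
        return cache_policy(entry, now)           # (c) then the maintainer
    return False       # (d) would require asking the origin server itself
```

Note that a user requirement, when present, is consulted before any
server-supplied requirement -- that is the whole point of the ordering.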

>   expiration time The time at which an entity should not longer be
>                   returned by a cache without first checking with the
>                   origin server to see if the cached object is valid.
>                   All cached objects have an expiration time, but this
>                   might be ``Forever'' (infinitely in the future),
>                   ``already'' (in the past), or ``undefined.''

    expiration time The time beyond which a cached entity should not be used
                    in response to a request unless the user has explicitly
                    set a requirement that the cache disregard its
                    expiration status or the cache is unable to check the
                    expired entity's validity with the origin server due
                    to service interruption.  Cached objects without an
                    expiration time assigned by the origin server should
                    be assigned one by the recipient cache; the assigned
                    expiration time may be a heuristic function of the
                    cached entity's characteristics and/or a configurable
                    parameter assigned by the cache maintainer.
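
One common heuristic for the assigned expiration time (an assumption on my
part -- the definition above only says it may be a function of the entity's
characteristics) is a fraction of the entity's age since Last-Modified,
falling back to a maintainer-configured default:

```python
def heuristic_expiration(date, last_modified=None,
                         default_ttl=3600, fraction=0.1):
    """Assign an expiration time (epoch seconds) to a cached entity that
    arrived without one.  If the entity carries a Last-Modified time, use
    a fraction of its age at the Date of the response; otherwise fall
    back to a default.  Both knobs would be set by the cache maintainer."""
    if last_modified is not None and last_modified < date:
        return date + fraction * (date - last_modified)
    return date + default_ttl
```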

>2.3 Goals
>   This depends on a clear definition of what it is to be ``correct.''
>   I will start with this definition:
>      A response returned by a cache in response to a request is
>      correct if and only if it includes exactly the same entity
>      headers and entity body that would be returned by the origin
>      server to the same client making the same request at the same
>      time.

That definition is just plain wrong.  A response returned by a cache
is correct if the response is valid as I defined above -- any cache
that disregards the requirements of the user in favor of the requirements
of the origin server is INCORRECT when those requirements differ.

For example, if I set up my user agent's cache such that it is in the
"never verify cache" mode, I am doing so for a reason.  In my
case, the reason is usually because I have preloaded the cache for use
during a presentation and it would ruin the timing of the presentation
if the cache did any validity checks.  Whether or not the response is
the same as what would be returned by the origin server at that time
is irrelevant to the correct operation of the cache.

Likewise, it is possible for a public cache maintainer to know ahead of
time when an often-used resource will be off-line for maintenance;
the cache maintainer must be able to set up appropriate behavior for
that situation without being in violation of the protocol.

>   Another way to look at correctness is that:
>      A user should never have to take explicit action to see exactly
>      the version of an entity that the server wants the user to see.
>      The caches are semantically transparent to the user, if both
>      the user and the server agree that they should be.

No -- that is the definition of what the default behavior of a cache
should be given no other conflicting instructions by the user.

>   Users should not have to use ``Reload'' buttons to simply assure
>   themselves that they are seeing valid copies.  One will still have to
>   use ``Reload'' to refresh the page currently on your screen, but only
>   to indicate to the user agent that you have finished reading it and
>   are willing to see a new copy, if any.

This is wrong and does not belong in the discussion.  The Reload semantics
are exactly that the user wishes to retrieve a firsthand copy from the
origin server -- the reason for doing so may have nothing to do with
normal cache behavior.

>2.5 Cache validation and the ``immutable cache'' concept
>   To restate this: a cache may not return a stale entry without first
>   checking its validity.

Wrong, as described above.

>   This differs from the language in section 10.19 of the draft HTTP/1.1
>   specification [1], which says ``Applications must not cache this
>   entity beyond the [Expires:] date given.''  That section should be
>   reworded to say ``Applications must not return this entity from their
>   cache after the date given, without first validating it with the
>   origin server.''  This changes the spec to make it clear when a cache
>   must use a conditional method before responding to a request.

No.  The act of validating an entity changes that entity; regardless of
the result of validation, the cache will not be returning the cached entity
unless there is a service interruption or the user has requested that it
ignore the expiration status.

>      Note: the current HTTP/1.1 draft says that there is no such
>      thing as a ``conditional HEAD'' request.  Do we want/need to
>      retain this restriction?  Or is there some possible value to a
>      conditional HEAD?

There is no value in it.  However, the spec may be changed to just say there
is no value in it and leave the implementation to the server.

>2.6 Opaque validators
>   A proposal was made in [1] to allow the client to do the
>   cache-validity check using a general predicate on all header fields
>   (the ``logic bag'') approach.  While this avoids some of the problems
>   with If-modified-since, it still does not give the server full
>   flexibility, since the choice of how to determine validity is
>   essentially left up to the client.  This also makes the
>   implementation of logic bags mandatory, and such an implementation
>   could be complex or slow.  While logic bags may be useful for other
>   purposes, they seem inappropriate for checking cache validity.

Nonsense -- there does not exist any design for checking cache validity
which can be more efficient than using logic bags, period.  The only
difference is whether the origin server can choose which attributes are
to be used in an equality check.  As I said before, this can be
accomplished by defining the order in which attributes are to be
preferred as the arguments to a precondition, or by having the origin
server explicitly state which one to use via the cache-control header, e.g.

     Cache-Control: public, validator=Content-ID

Creating a new header field for the sole purpose of cache validation
is just a waste of bytes.  If we are going to waste those bytes, then
I will insist that the header be Content-ID, since that will at least
provide some added value and already has well-defined semantics.
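
To make the proposal concrete, a cache could locate the server-nominated
validator like this (a hypothetical sketch -- the "validator=" directive is
the proposal above, not an adopted part of any header):

```python
def chosen_validator(headers):
    """Return (field-name, value) for the header field the origin server
    nominated via a 'validator=' Cache-Control directive, or None if no
    such directive is present."""
    cache_control = headers.get("Cache-Control", "")
    for directive in (d.strip() for d in cache_control.split(",")):
        if directive.lower().startswith("validator="):
            field = directive.split("=", 1)[1]
            return field, headers.get(field)
    return None
```

The cache then uses an equality check on the named field's value when
deciding whether its copy is still the same entity.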

>   The server can use any algorithm it chooses to decide if the
>   validator indicates that the cached object is still valid.  It need
>   not use simple equality comparison; for example, the server may
>   decide that although the cached entity is not exactly the same as the
>   current copy, it is close enough for any practical purpose.  In most
>   cases, however, an equality comparison should suffice.

Since the server is not necessarily the origin server, and the validator
is opaque to all but the origin server, an equality comparison must be
the only method of comparison.

>   If a server wants to defeat caching, it can return a cache-validator
>   value that it will never accept as valid.  This might not be the most
>   polite way to defeat caching, since it causes the cache to allocate
>   space for a cached object that will never be useful; see section 2.10
>   for an alternative.

That paragraph does not belong in any specification.

>   A client may use the null value in a request to ensure that a cache
>   does not return a cached response (but we also define an explicit
>   mechanism for this purpose in section 2.10).

NO -- there shall be only one new mechanism for this purpose, and it's called
Cache-Control.  Do not introduce irrelevant garbage when it isn't necessary.
It's bad enough that we have to send Pragma headers as well.  Just require
that the validator must not be null.

>2.8 Cache operation when receiving errors or incomplete responses
>   A cache that receives an incomplete response (for example, with few
>   bytes of data than specified in a Content-length: header) MUST not
>   cache the response or any part of it.
>      XXX We need to come up with rules for whether to cache
>      responses with statuses other than
>         - 200 OK
>         - 206 Partial content
>         - 304 Not Modified
>      which seem safe to cache.

Those rules are already given in the sections on status codes in the
HTTP/1.1 spec -- they can be summarized in the caching section.

The Age: header field and algorithms look fine, with the exception that
the max-age cache-control directive is being ignored.  We decided on the
main list last summer that the Expires header field would be phased-out
in favor of a relative expiration time identified by the max-age directive.
The reasoning for that decision still applies with this model.
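
The relationship argued here -- max-age as a relative replacement for the
absolute Expires date -- amounts to a freshness check like the following
(an illustrative sketch with all times in seconds; parameter names are
mine):

```python
def is_fresh(age, max_age=None, expires=None, date=None):
    """True if a cached response may still be used without validation.
    The relative max-age directive takes precedence over the absolute
    Expires/Date pair it is meant to replace."""
    if max_age is not None:
        return age < max_age
    if expires is not None and date is not None:
        return age < (expires - date)
    return False
```

The relative form avoids both clock-skew problems and the burden of
formatting a full HTTP-date for short lifetimes.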

>2.10 Explicit Cache-control
>   It is useful to provide an explicit mechanism to control certain
>   aspects of the caching mechanism.
>   The HTTP/1.1 draft includes a new Cache-Control: header, which can
>   carry four kinds of directive: ``cachable'', ``max-age'',
>   ``private'', and ``no-cache''.  This can be sent in either direction
>   (inbound or outbound), although the ``cachable'' and ``private''
>   directives are not allowed in requests.
>   We replace the proposed ``cachable'' directive with the more explicit
>   ``public'' directive, used on responses to indicate that a response
>   to a request which included an Authorization header field may be
>   returned from a cache entry.  This overrides restrictions stated
>   elsewhere in the specification on cache behavior for such requests,
>   allowing a server that uses authentication for purposes other than
>   limiting access to encourage caching.
>      XXX Action item: is ``public'' allowed on responses to any
>      method, or do we need to restrict it?

It is allowed on ANY response to ANY method -- the Authorization case
stated above is just one case in which the origin server may wish to override
the protocol's default for that response and/or method.

>   The ``no-cache'' directive on requests is also useful, because it
>   allows a user agent (or other client) to override a cache's belief
>   that a cached object is fresh.  It is likely that some servers will
>   assign expiration times that turn out to be optimistic, or that by
>   some accident an incorrect cache-validator may be stored with the
>   cached object, and clients need a way to force reloading.  However,
>   it seems unnecessarily economical to use the same directive for two
>   purposes, and might be less confusing to use the name ``reload''
>   here.

No it wouldn't, since those semantics are not those of a reload.  They
are the semantics of Pragma: no-cache, and it would be confusing to change
it now.

>   It also seems useful to provide a ``revalidate'' directive for
>   requests, to force a cache to do a conditional GET on a cached
>   object, but not necessarily to reload it.  This would be useful if
>   the expiration time is wrong, but the cache-validator is right.  An
>   over-optimistic expiration seems far more likely to happen than an
>   incorrect cache-validator.

No, that is simply a "no-cache" combined with an IMS or IF[-Valid] --
there is no need for an additional directive.

>   The ``private'' directive, which is like a ``no-cache'' response
>   except that it only applies to shared caches, is clearly useful.
>   The original proposal for ``no-cache'' and ``private'' allows them to
>   specify particular fields of the response, but it is not clear that
>   this is necessary or useful.  It means that a subsequent request to
>   the cache cannot use the entire cached entity.  Does this mean that
>   the cache should return only a partial entity (which seems relatively
>   useless)?

What is useless about it?  The requirement is for a cachable entity which
includes private information within particular header fields for the
purpose of maintaining state between non-cachable actions (e.g., Cookie).
Although that information is private, the rest of the entity is public
and cachable.  The directive allows these semantics to be described in
HTTP without pre-knowledge of the names of the header fields, and thus
provides for extensibility in ways that are not possible with HTTP/1.0.
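
The semantics I am defending can be sketched as: a shared cache stores the
entity after removing exactly the fields named by the directive, while a
private cache keeps everything (illustrative code; field names such as
Set-Cookie are examples, not requirements):

```python
def shared_cache_copy(headers, private_fields):
    """Return the header set a shared cache may store for an entity whose
    'private' directive named specific fields: everything except those
    fields.  A private (single-user) cache would store them all."""
    private = {name.lower() for name in private_fields}
    return {k: v for k, v in headers.items() if k.lower() not in private}
```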

>   The ``max-age'' directive on responses effectively duplicates the
>   expiration time, and (as currently specified) does not provide as
>   much protection against timing delay, so it is unnecessary.

That contradicts our discussion last summer in which max-age was determined
to be necessary due to the burden of calculating short expiration times
using the normal HTTP-date format.  I say it is still necessary.

>   The original purpose of the ``max-age'' directive on requests is to
>   tell a cache to revalidate the value if it has been in the cache more
>   than a specified number of seconds.  This is basically a way for the
>   client, rather than the server, to change the looseness bounds on a
>   cached entity.

It is the only way for a user to state their freshness requirements.

>   It may be a better idea to use ``max-age'' to tell a cache to
>   revalidate a response that has been cached in one or more caches for
>   more than the specified number of seconds.  We can use the same
>   age-computation algorithm here as is used with expiration values (see
>   section 2.9).  This would make it explicit that ``max-age'' refers to
>   the time since an entry was refreshed, not since it was first loaded.

That is already explicit in the definition of max-age -- it defines the
expiration time relative to the last firsthand update.

>   In addition to ``max-age'', we can use two new directives for
>   requests to allow the client to set looseness bounds.  The
>   ``fresh-min'' directive specifies the minimum number of seconds of
>   remaining freshness necessary.  For example, ``fresh-min 10'' means
>   that the client only wants to see a cached response if it has at
>   least 10 seconds of freshness left (measured by the cache).  This
>   allows the client to increase (make more strict) the looseness
>   bounds.

I do not see any use for that feature and it would unnecessarily
complicate the user agent configuration -- just imagine trying to explain
it to someone not in this group.

>   Similarly, the ``stale-max'' directive specifies a maximum number of
>   seconds past the expiration time that the client is willing to
>   accept.  For example, ``stale-max 5'' means that the client is
>   willing to accept a cached response that has already been stale for 5
>   seconds.  This allows the client to decrease (make less strict) the
>   looseness bounds.  If both fresh-min and stale-max are given in a
>   request, the stale-max is ignored.

I find it unlikely that the stale-max semantics would be useful -- the
concept of freshness on a request is normally measured relative to
the user's needs (a la max-age) and not a function relative to the
expiration date.  The reason is that the expiration times may differ
significantly between resources, and no fixed displacement can take
that into account.  What you really want to do is change each variable
of the expiration heuristic independently, but that is not possible via
the protocol.  Therefore, the user just sets a max-age which matches
their own needs for freshness.

>   It may be appropriate, in some circumstances, for a user to specify
>   that the response to a request should not be cached.  (This might be
>   because the user believes that an intervening cache may be insecure,
>   but has no choice but to use the cache.)  We include the
>   ``dont-cache-response'' directive for this purpose; caches SHOULD
>   obey it but the user MUST never rely on this.  We do not specify what
>   should happen if the cache already contains the response.

No, that is not appropriate.  The server that handles a response controls
whether or not that response is cachable -- the user only needs (and 
already has) control over their private cache.  If the cache is insecure,
then the only secure way to make the request is to not make it via the cache.

>   Warnings are assigned numbers from the same space as Status-codes,
>   and (because they are ``Informational'') are of the form ``1xx''.
>      XXX several people have suggested deleting ``from the same
>      space as Status-codes'' from this statement.

Yes, they should not be from the same code space.  In fact, they should
only be two digits in order to further separate the two semantics.
You can then start numbering from 01.

>   198 Caching may violate law
>                   SHOULD be sent with any response if the server is
>                   aware of any law that specifically prohibits the
>                   caching (including storage) of the response.  This
>                   may include copyright laws, confidential-records
>                   laws, etc.  Such responses SHOULD also be marked with
>                   an expiration value in the past, to prevent an
>                   unwitting cache from returning the value in a cached
>                   response.

UGH -- you cannot add that to the protocol.  The software cannot be
required to know applicable laws -- it just opens people to negligence
lawsuits.  The miscellaneous category is more than sufficient for what
you are trying to accomplish here.

>2.12.3 Compatibility between HTTP/1.1 and older proxies
>   Since a proxy is always acting as either a client or a server, the
>   rules for compatibility between clients and servers should suffice to
>   provide compatibility between caches and servers (cache acting as
>   client), and between caches and clients (cache acting as server).
>   XXX is this true?

I don't know, but it sure is confusing.  I don't think you need to say
anything about it.

>2.13 Update consistency
>   For example, suppose that an entity is cached by two different
>   caching proxies A and B, and client X of proxy A performs a DELETE
>   method via that proxy.  Proxy A will remove the entry from its cache,
>   but proxy B will know nothing about the interaction, and so will
>   continue to cache the deleted entity.  This means that a different
>   client Y, using proxy B, will have a view that is inconsistent with
>   client X.

Which is the same case that occurs with a single proxy if the update was 
made locally on the server.  In other words, there is no need to address
this problem in any way that is different from the normal cache validation

>   Regardless of the wording here, any requirements that caches must
>   delete or invalidate certain entries when doing certain operations
>   are at best ``prudent,'' and at worst ``ineffective.''  There is no
>   way for an origin server to force other caches to remove or
>   invalidate entries; in fact, there is no way for an origin server
>   even to discover the set of caches that might be holding an entry.
>   We must recognize this, and look for other solutions to the update
>   consistency problem.

The only reason it exists is because it is "prudent".  Arguing that it
is ineffective is a total waste of time.

>   One simple solution would be for origin servers to mark potentially
>   updatable resources as uncachable (using a past expiration value, or
>   Cache-control: no-cache).  If this is done conservatively, it solves
>   the update consistency problem, but it also eliminates the
>   performance benefits of caching for client requests between updates.

In other words, not a solution.  Can we avoid exploring non-solutions?
It would certainly make the discussion shorter.

>   Another solution would be for the server to refuse to perform update
>   methods (such as PUT or DELETE) as long as any existing cache entries
>   might be fresh.  This requires the server to record the latest
>   expiration value it has issued for a resource, and is similar to the
>   Leases mechanism of Gray and Cheriton [2].  This would solve the
>   update consistency problem without preventing caching, but it could
>   lead to confused users, whose update operations would fail at
>   unpredictable times.  (This includes local users or administrators of
>   the server, as well as HTTP clients.)  One might take the approach
>   that updates that conflict with existing expiration leases are
>   accepted but queued for later application; this avoids unpredictable
>   failures, but leads to mysterious effects.

In other words, another non-solution.

>   Another approach, inspired by some work done at CMU XXX find
>   appropriate reference XXX, uses the concept of a ``volume''
>   containing a number of individual resources, and allows the server to
>   invalidate an entire volume at once.  A cache finds out about a
>   volume invalidation whenever it contacts the origin server regarding
>   any member of the volume; this improves the chances that a cache will
>   notice that a resource has been updated before it provides an
>   inconsistent cache response to a client.  However, it cannot
>   eliminate inconsistency.  We give a specific proposal for volume
>   validation in section 2.14.

The problem is not significant enough to justify such a resource-intensive
solution.  A better solution is just to include the Warning in the UI and
let the users perform a Reload if they think they received an invalid
response.

Even if Volume validation is a good idea, it has never been tried and
doesn't belong anywhere near the proposed standard for HTTP/1.1.  Since
it is optional, it can just as easily (and more appropriately) be
introduced as a separate draft and later promoted as Experimental.

>2.15 Side effects of GET and HEAD
>   Section 14.2 (``Safe Methods'') of the draft HTTP/1.1
>   specification [1] implies that GET and HEAD methods should not have
>   any side effects that would prevent caching the results of these
>   methods, unless the origin server explicitly prohibits caching.  We
>   make this explicit:  unless the server explicitly disables caching,
>   GET and HEAD methods SHOULD NOT have side effects.

No, that is not the appropriate spin on what the spec says -- the semantics
of "no side-effects" are to define the user's intentions, not the behavior
of caching.

For caching, side effects are irrelevant.  What is important (and what can
be said) is that the server should include

    Cache-Control: no-cache

on any resource for which a GET response is not cachable, and

    Cache-Control: private

on any resource for which a GET response is only cachable by the user agent.
There is no reason to place any other requirement on the GET method.
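
As a sketch of that policy (the resource attributes here are purely
illustrative, not anything defined by the protocol):

```python
def cache_control_for(resource):
    """Choose the Cache-Control header for a GET response: 'no-cache'
    when the response must not be cached at all, 'private' when only
    the user agent may cache it, and nothing otherwise."""
    if not resource.get("cachable", True):
        return "Cache-Control: no-cache"
    if resource.get("user_agent_only", False):
        return "Cache-Control: private"
    return None
```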

>   Apparently, some applications have used GETs and HEADs with query
>   URIs (those containing a ``?'' in the rel_path part) to perform
>   operations with significant side effects.  Therefore, caches MUST NOT
>   treat responses to such URIs as fresh unless the server provides an
>   explicit expiration time.

OR the server provides a Cache-Control directive which overrides this
requirement.

>   This specifically means that responses from HTTP/1.0 servers for such
>   URIs should not be taken from a cache, since an HTTP/1.0 server
>   provides no way to indicate that two query responses issued during
>   the same second are in fact different.

That is a reasonable heuristic for any server that does not provide an
explicit cache-control.
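
That heuristic is simple enough to state in code (a sketch; real URI
parsing is more involved than a substring test):

```python
def may_use_cached_query_response(request_uri, response_headers):
    """Responses to query URIs ('?' in the path) may only be served from
    a cache when the server gave explicit expiration or cache-control
    information; plain HTTP/1.0 responses to such URIs never qualify."""
    explicit = ("Expires" in response_headers or
                "Cache-Control" in response_headers)
    if "?" in request_uri and not explicit:
        return False
    return True
```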

>3.1 Cache-validation
>   Validators are strings of printing ASCII characters, not including
>   white space.

Why not?  If you want it to be opaque, wrap it in double-quotes.

>      XXX do we need to specify a maximum length for these strings,
>      or does the HTTP/1.1 spec already do this for us?

HTTP does not specify any maximum lengths other than within each specific
field definition.

>   Servers return cache-validator values using
>       Cache-validator: opaque-value
>   Clients use
>       If-valid: opaque-value
>   for conditional retrievals.
>   Clients use
>       Modify-if-valid: opaque-value
>   for conditional updates.

Why?  What do you gain by making these separate?  Moreover, demonstrate
to me that defining 5 (I think -- lost count in all the proposals) new
header fields for preconditions is necessary when a single precondition
syntax is more extensible, easier to implement, and easier to define.

>3.2 Expiration values
>   Expiration values are absolute dates in HTTP-date format.  Although
>   previous HTTP specifications have allowed several formats, including
>   one with only two digits to represent the date, we need to be able to
>   unambiguously represent expiration times in the future.  Therefore,
>   origin servers MUST NOT use the RFC-850 form, and SHOULD use the
>   RFC-822 form.

That is already required for HTTP/1.1.

>   An HTTP/1.1 server MUST always send a Date: header if it sends an
>   Expires: header.  The two headers should use the same date form.

An HTTP/1.1 server MUST always send a Date: header, period.

>3.3 Age values
>   Caches transmit age values using:
>       Age: non-negative-integer 1#[, non-negative-integer]

Use "delta-seconds" -- it is already defined for HTTP.

>3.4 Cache-control
>   The Cache-control: header can include one or more of the following
>   directives on requests:
>       stale-max <integer>

No, follow the current BNF grammar for cache-control.  That means

        stale-max "=" delta-seconds

if stale-max is to be defined at all.
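
Following the existing grammar, a request-side parser would treat
value-bearing directives as delta-seconds (a simplified sketch; quoted
strings and other edge cases of the real grammar are omitted):

```python
def parse_cache_control(value):
    """Parse a Cache-Control header value into a dict: bare directives
    map to True, 'name=delta-seconds' directives to an integer."""
    directives = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, arg = part.partition("=")
            directives[name.strip().lower()] = int(arg)
        else:
            directives[part.lower()] = True
    return directives
```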

>3.5 Warnings
>   Warning headers are sent with responses using:
>       Warning: SP <Status-Code> SP <Reason-Phrase>

        "Warning" ":" 2DIGIT SP *TEXT

 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92717-3425    fax:+1(714)824-4056
Received on Friday, 2 February 1996 12:01:13 UTC
