HTTP Caching Design

I was hoping that someone else would say this, since what I am about
to explain is fundamental to how the Web works and should be understood
by anyone attempting to implement Web applications.  Moreover, it should
already be known by anyone attempting to redesign HTTP.  

The Web is designed to be used by *and created by* people.  People are a
necessary part of the evolving system which is the Web.  The Web is
a type of system commonly referred to by the term "groupware".  There are
certain design and implementation factors which are known to cause
groupware systems (or projects creating such systems) to fail.  Although
not strictly WWW-related, the following paper:

    Grudin, Jonathan. "Groupware and Social Dynamics: Eight Challenges
       for Developers". Communications of the ACM, 37(1), January 1994.

is instructive and presents more background cases.  Web protocol
designers need to be aware of these factors and design with them in mind.
I know that TimBL is aware of them, and that I am, but for too long I have
also been assuming that the HTTP WG members are aware of them as well
[or, at the very least, would understand why I was making design decisions
based upon those factors].

The first (and, I think, foremost) design factor for groupware systems
is that disparity in work and benefit leads to failed systems.  In other
words, people who benefit from a feature should be the ones doing the
work necessary to enable it, while at the same time people who do not
benefit from a feature (or simply do not *perceive* a benefit) should not
be required to perform extra work in order to enable that feature for others.

Different people have different uses for the Web.  HTTP was designed
to be simple (presenting a low entry-barrier) and extensible in order
to encourage new applications to be built upon the Web while still
providing the underlying requirements for interoperability and
communication.  To the extent that HTTP is optimized, that optimization
is designed so that the most likely behavior is the default, and thus
no extra work is required to obtain the most likely behavior.

I designed the HTTP/1.1 draft 00 caching features according to the
above principles.  In particular:

   1) By default (i.e., given no indication otherwise), any 200, 300, 301,
      or 410 response to a GET or HEAD request is cachable.  The length
      of time between "freshness" checks on the origin server is determined
      by the cache according to its own set of heuristics/needs.

      Rationale: An overwhelming majority of resources available
      via HTTP are cachable, even when the content provider (the actual
      person responsible for making the resource available) is not aware
      of caching and/or has no control over the server's functionality.
      In contrast, non-cachable resources are rare and, when they do occur,
      the content provider of that non-cachable resource *does* have
      control over the server's functionality. 

   2) By default, no other response code or request method is cachable.

      Rationale: Almost all other request methods and error or
      special-purpose responses are not cachable by their very nature --
      either they represent once-only responses or they are not worth the
      risk and/or storage requirements to justify caching.

   3) By default, user requests are made with the assumption that a
      cached response is preferred over a response from the origin
      server unless some indication otherwise is present in the user's
      request or the cached response.  Also, unless indicated otherwise,
      the cache manager is trusted to choose a "freshness" heuristic
      which optimizes cache performance (the balance between absolute
      correctness and absolute efficiency).  A rough sketch of how a
      cache might apply defaults (1)-(3) appears just after this list.

      Rationale: Caches are used for two reasons -- shortened response time
      and reduced network bandwidth usage.  In some cases, they are not just
      a convenience -- the cache(s) are what allow an organization to afford
      and maintain appropriate usage of a limited or costly connection to
      the Internet.
      Therefore, in the vast majority of cases, the user is using the cache
      because they want it to provide cached responses or because their
      common network connection (a shared resource) depends on the
      cachability of resources. 

      In both cases, the needs of the cache maintainer will always override
      the needs of the content provider, because only the maintainer has
      real control over the cache behavior.  The protocol may provide the
      ability to influence this behavior ONLY if the protocol actively
      (by design, as opposed to passively by specification) encourages 
      behavior which is good for the cache.  If the protocol discourages
      behavior which is good for the cache, then caches will disregard
      those aspects of the protocol and the ability to influence the
      caching behavior is lost.

   4) The user may override the default (or fine-tune the cache behavior)
      via the Cache-Control header field on requests.

      Rationale: A user may have an unusual need/purpose -- since HTTP is
      an enabling protocol, it should be capable of communicating such needs.
      The user "pays" for this feature by providing additional work in the
      form of the Cache-Control header field (and the user agent
      configuration that caused it to be generated).  If it isn't abused,
      caches will obey the Cache-Control header field because they will
      trust that the user has a special need or knows something special
      about the resource (e.g., that they just changed it out-of-band)
      which would not be reflected in the normal cache "freshness" algorithm.

   5) The content provider may override the default (or fine-tune the
      cache behavior) via the Cache-Control header field on responses.

      Rationale: Some content just isn't cachable.  Since the providers of
      cache-sensitive content are the primary beneficiaries of changes to
      the default cache behavior, it is reasonable to have them perform the
      extra work in the form of the Cache-Control header field (and the
      server configuration/script that caused it to be generated). 
      If it isn't abused, caches will obey the Cache-Control header field
      on responses because they will trust that the content provider knows
      something special about the resource (e.g., that it is changed every
      15 minutes, or is particular to each user, etc.) which would not be
      reflected in the normal cache "freshness" algorithm.  (The second
      sketch below shows how a cache might honor these overrides.)
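
To make the above concrete, here is a rough sketch (in Python) of how
a cache might apply defaults (1) and (2) and choose a "freshness"
lifetime under (3).  It is only an illustration -- the function names
and the particular 10%-of-Last-Modified-age heuristic are mine, not
something mandated by the protocol:

    from email.utils import parsedate_to_datetime
    from datetime import datetime, timedelta, timezone

    CACHABLE_STATUS  = {200, 300, 301, 410}   # default-cachable responses
    CACHABLE_METHODS = {"GET", "HEAD"}        # default-cachable methods

    def is_cachable_by_default(method, status):
        """Defaults (1) and (2): cache unless told otherwise."""
        return method in CACHABLE_METHODS and status in CACHABLE_STATUS

    def heuristic_freshness(response_headers, now=None):
        """Default (3): the cache chooses its own freshness lifetime.
        Here: 10% of the age implied by Last-Modified, capped at a day."""
        now = now or datetime.now(timezone.utc)
        last_modified = response_headers.get("Last-Modified")
        if last_modified is None:
            return timedelta(minutes=5)        # arbitrary fallback
        age = now - parsedate_to_datetime(last_modified)
        return min(age * 0.1, timedelta(days=1))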

BTW, "cachable" in HTTP means that the response may be reused as the response
for an equivalent future request -- it does not mean just "may be stored".
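
Continuing the sketch above, here is one way a cache might make that
reuse decision while honoring the per-request and per-response
overrides of points (4) and (5).  Again, this is only an illustration;
the directive names shown (no-cache, max-age) are examples, and all
lifetimes are expressed in seconds:

    def parse_cache_control(value):
        """Turn 'no-cache, max-age=60' into {'no-cache': None, 'max-age': 60}."""
        directives = {}
        for part in (value or "").split(","):
            part = part.strip().lower()
            if not part:
                continue
            name, _, arg = part.partition("=")
            directives[name] = int(arg) if arg.isdigit() else None
        return directives

    def may_reuse(request_cc, response_cc, heuristic_lifetime, age):
        """May a stored response answer an equivalent future request?
        The defaults apply unless either side has said otherwise."""
        req = parse_cache_control(request_cc)
        res = parse_cache_control(response_cc)
        if "no-cache" in req or "no-cache" in res:
            return False              # either side opted out of reuse
        # The most restrictive stated max-age wins; with no overrides,
        # the cache falls back on its own heuristic lifetime.
        limits = [x for x in (req.get("max-age"), res.get("max-age"))
                  if x is not None]
        limit = min(limits) if limits else heuristic_lifetime
        return age <= limit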

Note that the current HTTP/1.1 design implicitly encourages caching.
It does so because HTTP caching is a necessity for the common good of
all users of the Web, whether or not they are aware of it.  Thus, any
content provider that "doesn't care" about the cachability of a resource
will be given the default behavior which is good for caching.  At the
same time, any content provider that "does care" about the cachability
of a resource is provided a mechanism to express their needs.  Since this
is also backward-compatible with HTTP/1.0 behavior, there is no need to
special-case the requirements for HTTP/1.0 servers.
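
For the provider that "does care", the extra work amounts to little more
than one header field.  As a trivial (and purely hypothetical) example,
a CGI-style script whose output is only good for fifteen minutes might
say so like this:

    #!/usr/bin/env python3
    # Hypothetical CGI script whose output changes every 15 minutes.
    # The provider states that fact in one header; a provider who
    # "doesn't care" simply omits it and gets the cachable default.
    print("Content-Type: text/html")
    print("Cache-Control: max-age=900")   # fresh for 15 minutes
    print()                               # blank line ends the headers
    print("<p>Current status report...</p>")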

In contrast, the design that Jeff Mogul has proposed in 
<http://ftp.digital.com/%7emogul/cachedraft.txt> implicitly *discourages*
caching.  In order to make a resource cachable, the provider must do 
extra work even when they "don't care".  In addition, all server developers
must create a new mechanism for allowing users to specify the "freshness"
of each and every resource, even though the vast majority of such resources
have no implied notion of "freshness" and their providers cannot anticipate
the actual needs of caches within the organizations of possible recipients.
Furthermore, because this represents a change from the HTTP/1.0 defaults,
the cache mechanism is required to employ separate behavior depending on
the server version.

The result will be either no-cache by default, or bogus "freshness"
criteria applied to every resource by default.  Both cases will result
in excessive prevention of reasonable caching.

My design experience, and the experience of others building groupware
systems, says that such a design will not work -- it will either break
the system or the system will be compelled to ignore it -- because the
system that is the Web depends as much (or more) upon the social factors
of Web use as it does upon the abstract mathematical notions of "correctness"
assigned to the caching algorithm.

Finally, at least one person has said that the current caching algorithm
is "broken".  However, I have yet to see any examples of brokenness
illustrated with the current HTTP/1.1 caching mechanisms.  I have also
yet to see an example of cache usage/requirements that has not already
been met by the HTTP/1.1 design.  Since the purpose of the caching subgroup
was to identify such examples BEFORE redesigning the protocol, I am not
at all happy about the current direction of the subgroup.


 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92717-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/
