
Report on HTTP Caching subgroup meeting (Feb 2 1996)

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Thu, 08 Feb 96 16:57:14 PST
Message-Id: <9602090057.AA03168@acetes.pa.dec.com>
To: http-caching@pa.dec.com

I'm sorry this has taken almost a week to send out.  In part, this
is because I have been waiting for several meeting attendees to
get back to me to help fill in a few details, and some of them
have obviously been occupied with more important problems :-).

There are a few such things still marked "MYSTERY ITEM"; sorry
about that.


This report is based on minutes taken by K Claffy, a few notes
that I took during the meeting, my somewhat hazy recollection
of what happened, and contributions from a few of the other
attendees.  This is NOT intended to be meaningful to someone
who has not been paying attention to the discussions on the
http-caching mailing list; i.e., it is not meant to be a self-contained
summary of our understanding of caching.

Also, I have not attempted to preserve the order of discussion
of topics.

Attending the meeting:
K Claffy                                San Diego Supercomputer Center
Daniel DuBois [by phone, for 1 hour]    Spyglass
Jim Gettys                              Digital & W3C
Shel Kaphan                             Amazon
Paul Leach                              Microsoft
Ari Luotonen                            Netscape
Larry Masinter                          Xerox PARC
Jeff Mogul                              Digital WRL
Dave Morris                             consultant
Charles Neerdaels                       Netscape
Henry Sanders                           Microsoft
Lixia Zhang                             Xerox PARC & UCLA

Issue: output of the caching subgroup

We all agreed that the ultimate output of the caching subgroup would
be a set of changes to the HTTP/1.1 draft specification.  That is,
we will not be issuing a separate specification document.  However,
we also agreed that in the interests of making rapid forward progress
and in capturing some of the motivation and rationale of our design(s),
that it would be useful to produce an Internet-draft document in
the interim.  Unless someone decides to turn this into an "informational"
design-rationale document, it (like all I-Ds) will sooner or later
cease to exist.

There is a cutoff date around Feb. 15 for all I-Ds to be issued prior
to the March IETF meeting.  Jeff Mogul will try to produce something
by this date that can be submitted to the I-D editors as a "tentative
consensus" of the caching subgroup.

Issue: transparency vs. performance

Since there have been numerous discussions of whether semantic
transparency or performance is the more important goal for HTTP
caching, we tried to come to a consensus on what we believed about
this tradeoff.

Here is a rough summary of our consensus:

	Applications in which HTTP is used span a wide space
	of interaction styles.  For some of those applications,
	the origin server needs to impose strict controls on
	when and where values are cached, or else the application
	simply fails to work properly.  We referred to these
	as the "corner cases".  In (perhaps) most other cases,
	on the other hand, caching does not interfere with the
	application semantics.  We call this the "common case".
	Caching in HTTP should provide the best possible
	performance in the common case, but the HTTP protocol MUST
	entirely support the semantics of the corner cases, and in
	particular an origin server MUST be able to defeat caching
	in such a way that any attempt to override this decision
	cannot be made without an explicit understanding that in
	doing so the proxy or client is going to suffer from
	incorrect behavior.  In other words, if the origin server
	says "do not cache" and you decide to cache anyway, you
	have to do the equivalent of signing a waiver form.

	We explicitly reject an approach in which the protocol
	is designed to maximize performance for the common case
	by making the corner cases fail to work correctly.

We also discussed (in this context) the distinction between history
buffers and caches, which has been discussed before on the mailing
list.  Jeff's draft of January 22 contains definitions for history
buffers, but does not completely describe what the intended behavior
should be.  We agreed that although history buffers were not a part
of the HTTP protocol in a narrow sense, it was important that
service authors and browser implementors have a shared understanding
of what history buffers (e.g., the "Back" button) do, and that this
should appear in the HTTP protocol spec.  Something along the lines
of "if your browser's history buffer fails to follow this understanding,
then these kinds of services will break: ..."

ACTION ITEM: Shel Kaphan will write some paragraphs for the spec
that define this shared understanding.

Shel adds:
    At the meeting we discussed whether we believed it would be
    reasonable or advisable to think about adding protocol elements to
    explicitly control history mechanisms, such as (for instance) a
    directive to prevent the entry of a document into a history
    buffer.  Surprisingly (as I had thought the consensus was
    otherwise) people seemed to agree that it was reasonable to
    consider some options in this area. We didn't discuss it further.

DEFERRED ITEM: do we need HTTP protocol elements to control history
buffers?

Issue: Interpretation of Expires:

We've had some discussion about what Expires: really means.  In
the current protocol, we can express several concepts (pending
the resolution of certain encoding issues):

	a.	This response is "expired at birth"; you MUST NOT
		return it from a cached copy.  (i.e., Expires: yesterday)

	b.	This response never expires; you can always return
		it from a cached copy.  (i.e., Expires: never)

	c.	This response expires at a specific finite time
		in the future.  (e.g., Expires: 1 January 1997)

The issue arises from the interpretation of (c): if the origin
server says "Expires: 1 January 1997" does this mean "you are
absolutely not allowed to return this response from your cache
after 1 January 1997 without first validating it", or does it
mean "sometime around 1 January 1997 it would be a good idea
for you to consider validating this response with me"?  People
came up with scenarios where either interpretation could be useful,
which led to the proposal that we ought to have an explicit
encoding in the protocol for which is meant.

My understanding of the consensus was that Expires: with a specific
date will continue to mean "absolutely expires on that date", and
that we may (or may not) introduce a new header (or Cache-control
directive) that allows the origin server to give a more probabilistic
interpretation.  In the most elaborate proposal, the origin server
could transmit an arbitrary probability distribution, but I think
we agreed that something involving a mean and perhaps a maximum
cutoff seemed more reasonable.
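As a rough illustration of the distinction (the function and parameter
names here are invented, not part of any proposal), a cache's freshness
test under a strict Expires: deadline plus a hypothetical mean/maximum
soft-expiry directive might look like:

```python
import random

def should_revalidate(age, expires_age=None, soft_mean=None, soft_max=None):
    """Decide whether a cache must revalidate an entry before serving it.

    age:         seconds since the response was generated
    expires_age: hard lifetime derived from a strict Expires: date; the
                 entry MUST NOT be served beyond this without validation
    soft_mean/soft_max: hypothetical "probabilistic expiration" knobs, a
                 mean plus a maximum cutoff as discussed at the meeting
    """
    if expires_age is not None and age >= expires_age:
        return True                      # strict Expires: absolute deadline
    if soft_mean is not None and age >= soft_mean:
        if soft_max is not None and age >= soft_max:
            return True                  # past the maximum cutoff
        # Between the mean and the cutoff, revalidate with a probability
        # that grows as the entry approaches the cutoff.
        span = (soft_max - soft_mean) if soft_max is not None else soft_mean
        return random.random() < (age - soft_mean) / span
    return False
```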

ACTION ITEM: Shel Kaphan was going to think about this
some more, and to write something about it.

We discussed the problem that most servers do not send Expires:
headers today, and in particular that there is a counter-incentive
because some browsers do the wrong thing if the server sends
Expires:.  According to Shel,
    The problem is that there is an unpleasant interaction with history
    buffers in some browsers.  If you send Expires: now in a document,
    hoping that no further *new requests* for that document will result
    in it being delivered from a cache, some (many? most?) browsers
    will also interpret that to mean that if you revisit this document
    by using a history navigation command such as the BACK button, that
    the document must either be reloaded or the browser must give some
    kind of warning message such as "DATA MISSING".  Both are bad, for
    different reasons.  The general reason this is a problem is that it
    can be confusing and distracting to users.
Presumably, this can be solved in a pure-HTTP/1.1 environment by
strict compliance with our recommendations about history buffers
(see above), but we may continue to have trouble with certain older
browsers.

We would like to see servers using explicit Expires: as much
as possible, because we believe that this could increase the
cache lifetimes for many resources (e.g., immutable ones) and
could reduce the necessity for cache administrators to guess
the proper heuristic tradeoffs.

DEFERRED ITEM: should Expires: be mandatory?

DEFERRED ITEM: When the Expires: header is not sent, should
the rules determining what assumptions a cache can make about
the expiration date depend on whether the origin server is
HTTP/1.0 or HTTP/1.1? 

We discussed the Age: header (more or less proposed by Koen
Holtman and somewhat modified by Jeff Mogul) and decided that
it was a good addition to the protocol.  However, because
some HTTP/1.0 caches may not forward multiple instances of
the same response header, such as
	Age: 3
	Age: 37
we agreed to make it mandatory that HTTP/1.1 caches coalesce
multiple Age values into one header ("Age: 40" in this case).
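A sketch of the agreed coalescing rule (the representation of headers
as (name, value) pairs is just for illustration):

```python
def coalesce_age_headers(headers):
    """Combine repeated Age: headers into a single one, per the agreement
    that HTTP/1.1 caches must coalesce multiple Age values.  `headers`
    is a list of (name, value) pairs; returns a new list."""
    total = 0
    seen = False
    out = []
    for name, value in headers:
        if name.lower() == "age":
            total += int(value)          # "Age: 3" + "Age: 37" -> 40
            seen = True
        else:
            out.append((name, value))
    if seen:
        out.append(("Age", str(total)))
    return out
```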

Issue: Dates in If-Modified-Since headers

We had a brief discussion about what values a client should
put into an If-Modified-Since: header.  Apparently most (or
all) client implementations hand back the Last-Modified:
date that they received from the origin server, rather than
some other arbitrary date (such as the date that the client
actually received the response).  We seemed to be in agreement
that this was the right thing to do; that is, that it would
be a bad idea for a client to use anything but a Last-Modified:
date in an If-Modified-Since: header.

Apparently, at least one browser (Netscape?) parses and internalizes
the Last-Modified: date, then reconstitutes it in HTTP-Date
form when generating an If-Modified-Since: header.  This should
be harmless (unless a date encoding is used that drops the
century, in which case things could go wrong at the turn of
the millennium).

I can't remember if we discussed one additional related point,
or if this came up in another context and I'm confusing the
recollections: if the server receives an If-Modified-Since:
containing a date that is later than the actual modification
time of the resource, should it return 304 Not Modified or
should it treat this as a validation failure?  For example,
if the I-M-S header says "Tue Jan 30 00:59:57 1996" but the
resource's actual modification time is Jan 29 00:00:00 1996,
should the server be paranoid and return the full resource,
or should it blithely assume that the client had a good reason
for constructing its own I-M-S date?

DEFERRED ITEM: should a server perform strict equality
comparisons on If-Modified-Since: dates?
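The two candidate server behaviors might be sketched like this (a
hypothetical helper, not from any spec; timestamps are plain seconds
for simplicity):

```python
def ims_result(last_modified, if_modified_since, strict=True):
    """Decide a server's response to If-Modified-Since.

    With strict=True the server is "paranoid": any I-M-S date that is
    not exactly the resource's Last-Modified date is treated as a
    validation failure and the full resource is returned.  With
    strict=False the server blithely trusts any I-M-S date at or after
    the modification time.
    """
    if strict:
        not_modified = (if_modified_since == last_modified)
    else:
        not_modified = (if_modified_since >= last_modified)
    return "304 Not Modified" if not_modified else "200 OK (full response)"
```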

Issue: Cache hierarchies and bypassing

At one point during the day, K Claffy brought up an issue that
none of the rest of us had even considered, but we agreed ought
to be taken seriously.  This is the problem of how a hierarchical
cache, such as is being used in New Zealand, can optimize retrieval
latencies for non-cachable resources.

In the New Zealand case, since they have a very limited-bandwidth
connection with the rest of the world, they use a national cache
to avoid overloading this international link.  They also use
additional caches scattered around the country, which normally
go first to this national cache but are able to bypass it to
go directly to an overseas origin server if the national cache
isn't expected to have the appropriate cache entry.

For example, if a client does a GET with a normal (non-"?") URL,
the request flows up the cache hierarchy because the responses
to GETs are normally stored in caches.  However, a POST is sent
directly to the origin server, because there is no point in
routing it through the cache hierarchy (there being no chance
in today's HTTP/1.0 world that the caches would be helpful here).

In order to do request-bypassing in the most efficient possible
way, the caches have to be able to determine from the request
whether the response is likely to be cachable.  (I would assume
that it is important to err on the side of assuming cachability,
since the converse could seriously reduce the effectiveness of
the caches.)
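A minimal sketch of such a routing heuristic, assuming the HTTP/1.0-era
convention described above (plain GETs are cachable; "?" URLs and POSTs
are not), and erring toward cachability as suggested:

```python
def likely_cachable(method, url):
    """Heuristic a hierarchy cache might use to route a request: send it
    up the cache hierarchy if a cached response is plausible, otherwise
    bypass straight to the origin server.  (Hypothetical sketch.)"""
    if method in ("GET", "HEAD") and "?" not in url:
        return True          # route up the hierarchy
    return False             # bypass directly to the origin server
```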

We didn't come up with a good solution to this problem in general
(i.e., for GETs whose responses are not cachable, or for other
methods whose responses *are* cachable), but there was some
brief discussion of the proposed "POST with no side effects"
method.
DEFERRED ITEM: what to do about bypassing?

Issue: Extensibility

Jeff raised the issue of whether and how we could provide some
mechanisms in HTTP/1.1 that would allow HTTP/1.1 caches to do
something sensible with HTTP/1.2 (and later) protocols if new
methods were introduced after HTTP/1.1.  In other words, even
though we do not now know what those new methods would be, can
we figure out how to describe what they mean with respect to
caches so that we can separate this from the rest of the semantics
of the new methods?

This quickly led into a lengthy philosophical discussion of
caching models, led by Larry Masinter.  I'll try to summarize
how far we got, although we did not reach any closure on this.

Larry described three possible ways to view an HTTP cache:

	a) a cache stores values and performs operations on these
	values based on the requests and responses it sees.  For
	the purposes of the cache, one can describe each HTTP
	method as a transformation on the values of one or more
	resources.

	b) a cache stores responses, period.

	c) a cache stores the responses to specific requests.
	The cache must be cognizant of the potential interactions
	between various requests; for example, a PUT on a resource
	should somehow invalidate the cached result of a previous
	GET on the same resources, but a POST on that resource
	might not invalidate the result of the GET.

Nobody wanted to defend view (b); it was clearly insufficient.

Larry prefers view (c), mostly (if I recall right) because it
seems to fit best with a simple model of content negotiation.

Jeff favors view (a), because it ultimately seems (to me) to
allow a more straightforward description of what each method
means to the cache.  In particular, view (c) seems to require
describing O(N^2) interactions between the N methods.

The fact that we could reach agreement on a lot of other issues
without having any kind of agreement on this particular debate
suggests that either one's choice between views (a) and (c) does
not have much effect on the solutions to those issues, or perhaps
that the "proper" view is some hybrid.

Getting back to extensibility, if we followed view (a), we could
perhaps describe the cache-related consequences of new (post-HTTP/1.1)
methods by some generic request and response headers that the caches
could obey without understanding the methods themselves.  For example,
these hypothetical headers could tell the cache to save (or not save)
the response value, or to make stale ("invalidate") any previous
cached values associated with the URL mentioned in the request, or
one or more URIs mentioned in the response headers.  It seems
somewhat trickier to do a similar thing for extensibility if one
follows view (c).

Paul Leach apparently agrees with Jeff Mogul on this.

K Claffy prefers the method-based view (a) because it seems to make
cut-through (bypassing) easier in a cache hierarchy.

Shel adds:
    We did mention that if requests on a URI invalidate the cached
    responses to other requests (different methods) on the same URI,
    (with possibly a couple of exceptions), then the N^2 problem goes
    away -- i.e. you assume a simpler model where the cache doesn't
    pretend to know the semantics of the methods.  The "short cuts" we
    discussed were that GET and HEAD don't have to invalidate any other
    entries, and that the body of a PUT request might be used to
    service later GET requests.
but Jeff notes that this assumes a new method does not affect the
cachability of resources other than those mentioned in the request
URL, any request URIs, any response URIs, any response Location:
headers, etc.  I.e., we would have to be careful about any new
headers that could identify "cache consequences" of a new method.
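Shel's simplified invalidation rule can be sketched as follows (the
dict-based cache and function name are illustrative only, not proposed
protocol machinery):

```python
def invalidate_on_request(cache, method, uri, body=None):
    """Simplified model: any request other than GET/HEAD on a URI
    invalidates every cached response for that URI, so the cache need
    not know per-method semantics (avoiding the O(N^2) interaction
    table).  As a further shortcut, the body of a PUT may be used to
    service later GETs.  `cache` maps uri -> stored response body."""
    if method in ("GET", "HEAD"):
        return                           # safe methods invalidate nothing
    cache.pop(uri, None)                 # everything else invalidates
    if method == "PUT" and body is not None:
        cache[uri] = body                # refresh the entry from the PUT
```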

We also discussed the possibility of a denial-of-service attack
(or at least "denial of cache performance") if the protocol
were to include mechanisms that allow one client or server to
cause the invalidation of many cache entries.

DEFERRED ITEM: what to do about extensibility?

Issue: PUTs and POSTs

There was some discussion about caching the results of POSTs,
and/or the bodies of PUTs, as examples of how the current
GET-only caching model could be extended.  That is, we discussed
these as stand-ins for hypothetical future methods while discussing
the general problem of extensibility.  We did not have time to
fully discuss caching for PUTs and POSTs.

DEFERRED ITEM: caching of responses to POSTs

DEFERRED ITEM: caching and PUTs

Shel added this point:
    We have to be careful to distinguish between conditional execution
    of a method, and conditional return of the response.  In the case
    of GET, since it nominally has no side effects, conditional
    execution of the method is not so important.  But if we start
    applying conditionality to POST, PUT, etc., it is *critical* to be
    absolutely clear about what aspect of the action and response is
    conditional.

Issue: byte ranges

We discussed the byte-range proposal in
(the URL listed in the HTTP-WG home page, which now results in "Forbidden"!)
and Jeff's somewhat less formal proposal in:

Jeff pointed out that Ari Luotonen's proposal depended on a new
"Unless-modified-since:" header that doesn't integrate well with
opaque validators, and that the proposal left out a form of
conditional retrieval that might be useful.  Ari agreed, and since
neither Netscape nor (as far as we know) any other vendor has deployed
anything that depends on Ari's proposal, Ari and Jeff will work
together to integrate the two proposals into one.

ACTION ITEM: Ari Luotonen and Jeff Mogul to revise the byte-range
proposal.

Issue: hit metering

Server vendors are getting quite a lot of customer pressure to
provide hit-counts that include what happens at caches.  In
the absence of a mechanism to do this, some servers are disabling
caching simply to obtain hit counts, and we agreed that this is not
an acceptable state of affairs.

Most (but not all) of us agreed that it would not be possible to
make hit-metering mandatory for caches, simply because there is
no obvious way to enforce an anti-cheating policy.

Customers have asked for several kinds of hit-metering, with
increasing levels of complexity and cost:

    (1) simple hit counts: the cache tells the server how many
    hits have been made on a particular cached value.
    (2) access-log transfers: the cache sends to the server
    complete access logs (in some standardized format) so that
    the server can tell which clients have made what requests
    and when.
    (3) client navigation traces: the cache sends to the server
    enough information to determine how the user got from one
    page to another.

It's not entirely clear whether (3) is really different from (2),
or whether the access logs contain enough information to satisfy
the people who want (3).  None of us were thrilled about either
(2) or (3), but there seems to be some customer pressure and so
we felt obligated to consider it.

Netscape has internal specifications for (1) and (2?), and will
write them up and circulate them to the subgroup.

ACTION ITEM: Ari Luotonen and Chuck Neerdaels will provide
a proposed specification for hit-counting (and perhaps
for access-log transfers).

Jim Gettys has some contacts with the demographics people who
were keen on wringing more access-measurement information out of
the web, and will find out more specifically what they think
they want.

ACTION ITEM: Jim Gettys will grill the demographers.

Issue: Content negotiation

[Neither of the note-takers fully understands the issues here,
and I was out of the room for a lot of this discussion doing
other tasks, so this is a bit sketchy.  I would appreciate some
editing of this section by anyone who thinks they understand
better what happened!]

Dan DuBois joined us on the speakerphone for this discussion.

Larry Masinter led most of the discussion, and started out by
explaining his view of the current status of the content
negotiation subgroup.

For the purposes of caching, the two key points seem to be:

   (1) How does the cache decide if it can return a response from
   its cache or if it must contact the origin server?
   (2) How does the origin server know if the cache already
   holds the response that it should return (and hence we
   could avoid actually transferring the body of the response)?

Question (1) is the topic that has seen the most discussion so
far.  We seem to agree that there are at least these three cases
to handle:

    (a) there is exactly one variant of the resource.  Therefore,
    if there is a cache entry and it is fresh, the cache can use
    it without checking with the server.

    (b) there is a fixed and small set of variants, and the cache
    knows the algorithm that the server uses to choose the variant
    to send in the response.  Therefore, the cache can decide whether
    it has the right response already, or if it must obtain it from
    the server.

    (c) there is a fixed and small set of variants, but the cache
    does not know the algorithm to pick the right variant.  This
    means that unless the client's request headers "suitably match"
    [see below for what this means] the effective cache key for
    an existing cache entry, the cache must contact the origin

KC's notes include a fourth case, "large # alternatives" but I
can't remember or figure out if this is significantly different
from cases (b) or (c).  That is, I would guess that operationally
this is essentially the same as case (c) once "large" gets large
enough.

We have two proposed mechanisms for dealing with this that appear
to be more or less complementary.  There is the "Vary:" response
header, which the server uses to inform the cache that the response
depends on zero or more of the request header fields (besides the
URL, of course).  For example, "Vary: {const}" means that the
response does not depend on anything, and so the cache knows it
is in case (a) [only one variant].

Even for case (c), I believe it is safe to assume that if all
of the request fields match those stored with a cached value,
then the cache can return that value to the client (if the
Expires: value permits this).  The Vary: header allows us to
loosen this restriction somewhat: the "suitable match" between
the new request headers and the previous headers is determined
by the Vary: value, so if (for example) the response does not
depend on the User-agent: header, the cache can ignore that in
deciding what to do.
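A sketch of the "suitable match" test under Vary: (the "{const}" token
and the dict representation of request headers follow the discussion
above; this is an illustration, not spec text):

```python
def suitable_match(vary, new_request, stored_request):
    """Check whether a new request "suitably matches" the request stored
    with a cache entry.  `vary` is the list of header names the response
    depends on; ["{const}"] means the response depends on nothing, i.e.
    case (a), only one variant exists."""
    if vary == ["{const}"]:
        return True                      # only one variant: always matches
    # Otherwise every header named in Vary: must agree; headers not
    # listed (e.g. User-agent:, if absent from Vary:) are ignored.
    return all(new_request.get(h) == stored_request.get(h) for h in vary)
```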

If we are in case (b), the cache knows (from Vary:) what headers
matter (for example, Accept-language:) but it still needs to know
the universe of available variants (languages, in this example)
before it can decide whether it already has the right response.
So this means that the server needs a way to communicate this
set (and perhaps other discrimination values) to the cache, which
is apparently what the URI: response header is for.

At this point, however, I remain totally confused.


Back several paragraphs, I mentioned a second key point:

   (2) How does the origin server know if the cache already
   holds the response that it should return (and hence we
   could avoid actually transferring the body of the response)?

We talked about a number of possible solutions for this.  For example,
a cache could annotate its request to the origin server with all of the
information it has about the variants it already has in its cache,
and let the origin server decide.  But that seems to require huge
amounts of request headers on all requests, which somewhat defeats
the purpose of occasionally not sending a response body.

Someone also proposed tagging variants with unique URIs.  However,
this might not work for some kinds of content negotiation.

After some discussion, Jeff proposed a "variant-ID" mechanism that
would provide a compact (and optional) way of communicating between
cache and server what variants the cache already held.  Someone
else suggested that this also needed to include cache validators
for each of the variants, so that the necessary response *would*
be returned if the cached copy was no longer valid.

It was agreed that this scheme had the advantage (over URI-tagging)
that there was little or no chance that the tags could leak out into
other contexts (i.e., nobody would try to use them instead of a
proper URL).
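Since the detailed write-up is elsewhere, here is only a rough
illustration of the variant-ID idea; the encoding below is invented
for this sketch and not part of any proposal:

```python
def build_variant_header(held_variants):
    """Compactly tell the origin server which variants the cache already
    holds, each paired with its cache validator, so the server can say
    "use your copy of variant N" (or send a fresh body if the cached
    copy is no longer valid) without a full transfer.
    `held_variants` maps variant-ID -> validator."""
    return "; ".join(f"{vid}:{validator}"
                     for vid, validator in sorted(held_variants.items()))
```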

I described this in more detail today in

ACTION ITEM: Paul Leach is willing to write up a proposed spec with
Jeff's help.  Larry Masinter is willing(?) to integrate this with other
content-negotiation stuff.

Issue: Security

We started by listing five security-related issues:

   Spoofing using location headers
   Data integrity

We did not succeed in resolving all five of these.

We seem to believe that data integrity is an end-to-end issue;
caches should not be checking or computing MD5 (or other)
integrity checks, or changing any aspect of the requests
or responses that would be covered by such checks.

Shel led a short discussion of the Location: header problem,
which mostly boiled down to a plea not to do anything stupid
in the protocol.  Shel Kaphan has summarized this in

ACTION ITEM: Shel Kaphan to write necessary paragraphs for the
part of the HTTP/1.1 spec that covers Location:.

We have already discussed on the mailing list the issue of
when responses to authenticated requests can be returned to
other users.  The current draft spec includes a statement
that if the request includes Authorization: then the response
is not cachable in such a way that other users could see it.  We
have already agreed that if the response contains "Cache-control: public"
then this overrides that rule.
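A sketch of the agreed rule, assuming headers are presented as dicts
with lowercase names (a simplifying assumption of this illustration):

```python
def storable_in_shared_cache(request_headers, response_headers):
    """A response to a request carrying authentication credentials must
    not be served to other users from a shared cache, unless the
    response explicitly says "Cache-control: public"."""
    if "authorization" not in request_headers:
        return True
    cc = response_headers.get("cache-control", "")
    return "public" in cc.lower()        # explicit override by the server
```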

ACTION ITEM: Jeff Mogul will clarify the language regarding what
this means (in particular, what "shared" means).

MYSTERY ITEM: My notes say "Larry will write up Authenticate + Vary"
but I have no idea what I meant by that.  Larry?

MYSTERY ITEM: proxy-authentication.   I remember that this came
up, but can't remember what we did.  Dave?

Issue: state (a la Dave Kristol's cookies) and caching

This summary is from Shel:

    This has been discussed in the state management subgroup.

    Briefly, cookies can't be cached by public caches, but since public
    documents may make up part of a "stateful dialog", and in
    particular the first document in a stateful dialog may be (for
    example) a public and cachable home page, servers that wish to
    receive the client's cookie on each request, or to issue a new
    cookie on requests for a document must set the document up to
    require validation on each request (e.g., by having it be marked
    as already expired).

    Otherwise, the cache control headers for responses control what a
    proxy has to do.  If a document is fresh in a cache, a request
    containing a cookie does not have to be forwarded to the origin
    server, since (by definition) if the document is servable from a
    cache, there aren't important side effects at the origin relating
    to requests for that document, and so, no changes to the cookie.

    One important issue bearing on caching is that for conditional
    requests that go through to the origin server, for which the origin
    server responds with 304 and also with a set-cookie header, caches
    must splice the set-cookie sent by the origin server into their own
    response.  This is, for example, how it can work to have a home
    page that is in a cache, but stale, so that the only traffic to the
    origin server is to validate the home page, receiving a 304 and
    potentially a new cookie.
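That splicing behavior might be sketched as follows (simplified;
headers as lowercase-keyed dicts, which is an assumption of this
illustration):

```python
def serve_conditional(cached_headers, cached_body, origin_status, origin_headers):
    """When a conditional request yields 304 plus a Set-cookie from the
    origin server, the cache serves its stored entry but must splice the
    new Set-cookie into the response it sends the client.
    Returns (headers, body) for the client."""
    headers = dict(cached_headers)
    if origin_status == 304 and "set-cookie" in origin_headers:
        headers["set-cookie"] = origin_headers["set-cookie"]
    return headers, cached_body
```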

Issue: opacity of validators

We had a little discussion of this on the mailing list before
the meeting, and somewhat more right after the meeting, but
almost none during the meeting.

DEFERRED ITEM: opacity of validators
Received on Friday, 9 February 1996 01:10:35 UTC
