
Re: using opaque strings to determine uniqueness

From: Balint Nagy Endre <bne@bne.ind.eunet.hu>
Date: Wed, 15 Nov 1995 03:35:54 +0100 (MET)
To: Shel Kaphan <sjk@amazon.com>
Cc: mogul@pa.dec.com, brian@organic.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <366.bne@bne.ind.eunet.hu>
> Jeffrey Mogul writes:
>  > Hey folks, we're writing the spec, we should be able to require servers
>  > (and proxies) to play by the rules.
>  > 
>  > If you recall, I suggested that an object without an explicit Expires:
>  > header attached must always be validated by a proxy.  There are three
>  > cases:
>  > 	Expires: missing
>  > 		validation required on every fetch from any cache
>  > 	Expires: "never"
>  > 		validation never required (immutable documents)
>  > 	Expires: <some timestamp>
>  > 		validation not required until <timestamp>, but
>  > 		always required after that.
>  > 
Hmm. Expires: "never" currently has to be interpreted as 'expired in the past',
since "never" isn't a valid date. Is it too late to change the spec to define
"never" as the +infinity date? Perhaps "cache-control: max-age=infinity" is
a better alternative than "Expires: never" to mean an immutable document.
We have one open question:
what is the meaning when Expires and cache-control: max-age are both present
in a response?
I proposed that before the expiry date max-age has no effect, and after
the expiry date max-age takes over.
(For me Expires: <now+1week> isn't the same as cache-control: max-age=<1week>.
Starting with the second week, the first requires a check on every request, while
the second requires only a weekly check, if the document is requested frequently enough.)
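A minimal sketch of the rule I proposed, assuming a hypothetical helper (the
names below are mine, not spec text): before the Expires date max-age has no
effect; after it, max-age takes over.

```python
from datetime import datetime, timedelta

def needs_validation(now, expires=None, max_age=None, last_validated=None):
    # Proposed rule: before the Expires date, max-age has no effect;
    # after it, max-age takes over. (Hypothetical helper, not spec text.)
    if expires is not None and now < expires:
        return False  # fresh until the expiry date
    if max_age is not None and last_validated is not None:
        # past Expires (or none given): fresh for max-age since last check
        return now - last_validated >= max_age
    return True  # no applicable freshness info: validate on every request

# Expires: <now+1week> vs cache-control: max-age=<1week>, in the second week:
now = datetime(1995, 11, 15)
week = timedelta(weeks=1)
print(needs_validation(now + week + timedelta(days=1), expires=now + week))
# → True: the first form requires a check on every request once expired
print(needs_validation(now + week + timedelta(days=1),
                       max_age=week, last_validated=now + week))
# → False: the second form needs only a weekly check while re-fetched
```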
Shel Kaphan replies:
> But do you really want to just ignore the case where the server has made an
> incorrect estimate about the expires date and issues a new version of
> a document before that date?
In most cases humans, not servers, will estimate the expiry date. Using AI,
servers could make such estimates, but so far I have not heard of servers featuring
this. Independently of the source, estimates always have some error.
One way of extending the protocol might be the inclusion of (estimates of)
those errors.
> The typical case is that, while a file may change at any time, we
> still want caches to cache it.  Yet we generally do not want users to
> receive out of date, or more to the point, "previous" versions of
> things to the versions they have already got.  These goals are
> somewhat in contradiction.
HTTP caches are free to make extra checks. (The protocol specifies when
caches MUST check the origin server - in fact the next-hop cache.)
Those extra checks are a cache implementation issue as well.
On the contrary, assigning expiry dates to documents that have neither Expires nor
cache-control: max-age is a protocol violation in some sense.
(The alternative interpretation of a missing Expires as expires: "don't care"
however legalises that.)
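One such extra check a cache is free to make is a conditional request built
from the validators it already holds. A sketch, assuming a hypothetical helper
(the header names follow HTTP/1.0 conventions):

```python
def extra_check_headers(cached_entry):
    # Build headers for an optional revalidation the cache chooses to make
    # even when the protocol does not require it. (Hypothetical helper.)
    headers = {}
    if "Last-Modified" in cached_entry:
        headers["If-Modified-Since"] = cached_entry["Last-Modified"]
    return headers

entry = {"Last-Modified": "Tue, 14 Nov 1995 18:50:26 GMT", "body": b"..."}
print(extra_check_headers(entry))
# → {'If-Modified-Since': 'Tue, 14 Nov 1995 18:50:26 GMT'}
```

A 304 Not Modified reply then lets the cache keep serving its stored copy.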
> Brian's example could happen, and even if it is "valid" for it to
> happen, it isn't "nice".  The point is that proxies can make *some*
> efforts to prevent such things from happening, with some additional
> bookkeeping, and with some (I claim not very much) less effective caching.
Having extra cache validators like checksums and digests, in addition to
Content-length and Last-modified, can help in verifying Location and URI
headers pointing to other servers, by requesting a HEAD from the other server.
(Think of the mirroring widely used in the ftp world and sometimes (mostly through
ftp mirroring) in the www world.)
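A sketch of such a check, comparing whatever validators two HEAD responses
have in common. Content-MD5 is my assumption for the digest header (per RFC
1864); the helpers themselves are hypothetical.

```python
import base64, hashlib

def content_md5(body):
    # The Content-MD5 value for a body, as a mirror could compute it.
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

def same_entity(head_a, head_b):
    # Compare the validators two HEAD responses share to decide whether
    # a mirror holds the same entity. (Hypothetical helper.)
    shared = [f for f in ("Content-MD5", "Content-Length", "Last-Modified")
              if f in head_a and f in head_b]
    if not shared:
        return None  # no common validator: cannot tell
    return all(head_a[f] == head_b[f] for f in shared)

body = b"<HTML>mirrored document</HTML>"
origin = {"Content-Length": str(len(body)), "Content-MD5": content_md5(body)}
mirror = {"Content-Length": str(len(body)), "Content-MD5": content_md5(body)}
print(same_entity(origin, mirror))  # → True
```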
NOTE: Beth Frank's notes on opaque validators deserve serious attention when talking
about mirrored resources.

Andrew. (Endre Balint Nagy) <bne@bne.ind.eunet.hu>
Received on Tuesday, 14 November 1995 18:50:26 UTC
