Re: Another Cache-control: proposal

    It is true that this token system does not give the origin server
    the ability to know actual hit counts though, correct?  I mean, just
    because a server received a subsequent request from Proxy A on a
    URI Proxy A had previously been given 10 tokens/uses for, doesn't
    mean that proxy has served 10 copies of that URI response.  Maybe
    it just served one, and then the 2nd request came after the
    max-age.

That is correct.  However, it provides somewhat more information
than the current system.

I.e., today if an origin server provides a cachable response to
a cache (i.e., one that has an expiration date in the future),
this could represent 1 hit, or 2 hits, or 10 hits, or 6060842 hits.

But if the origin server provides a "max-uses=10" limit (and if
the cache observes it!) then the server knows the true count
is somewhere between 1 and 10.  That is, this bounds the inaccuracy
of the server's knowledge.

One might imagine a dynamic algorithm that adjusts the max-uses
number provided on responses: if the caches are always making
repeat requests (along with use-count=N, N > 0, implying that
these requests are refilling the use-count rather than recovering
from replacement misses), then the origin server can increase the
max-uses value (say, double it) for that resource.
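
Purely to make that concrete, here is a rough server-side sketch of
such an adjustment; the names (MaxUsesPolicy, on_repeat_request) and
the doubling/ceiling choices are mine, not part of the proposal:

    class MaxUsesPolicy:
        """Per-resource max-uses value that grows when caches keep
        coming back only to refill their use-count budget."""

        def __init__(self, initial=10, ceiling=1024):
            self.max_uses = initial
            self.ceiling = ceiling

        def on_repeat_request(self, use_count):
            # use_count is the N from "Cache-control: use-count=N" on a
            # repeat request.  N > 0 suggests the cache spent its tokens
            # and is refilling, rather than recovering from a replacement
            # miss, so it is safe to hand out a larger budget next time.
            if use_count and use_count > 0:
                self.max_uses = min(self.max_uses * 2, self.ceiling)
            return self.max_uses

The origin server would then emit "Cache-control: max-uses=<value>"
with the returned number on its next response for that resource.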

But we can do better.  As I said in my initial description of this
proposal,
    Perhaps we could define a "cooperative cache" as one that does a
    HEAD on the resource (along with a "Cache-control: use-count"
    header) when it removes it from the cache, just to let the
    origin server know.
    
    So how does an origin server know that the cache is willing to
    obey max-uses?  Suppose that if the cache adds
	    Cache-control: use-count=0
    to its initial (non-conditional) GET, we interpret this to mean
    "I am willing to obey max-uses".  A server receiving this could
    expect (but not in a legally binding sense!) that the cache
    would comply.  (Alas, this does not quite work if there is
    an HTTP/1.0 cache between the HTTP/1.1 cache and the origin
    server, but perhaps the Forwarded: header solves this.)
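
To illustrate the handshake quoted above, a minimal cache-side sketch
might look like this (the class, function names, and bookkeeping are
hypothetical; only the use-count and max-uses directives come from the
proposal):

    class CachedEntry:
        def __init__(self, max_uses):
            self.max_uses = max_uses  # budget from "Cache-control: max-uses=N"
            self.uses = 0             # hits answered from this copy so far

        def can_serve(self):
            # Honour the budget: once max_uses hits have been served,
            # go back to the origin server (refilling the use-count).
            return self.uses < self.max_uses

        def serve_hit(self):
            self.uses += 1

    def initial_get_headers():
        # Sending use-count=0 on the first, non-conditional GET advertises
        # "I am willing to obey max-uses".
        return {"Cache-control": "use-count=0"}

    def eviction_head_headers(entry):
        # On replacement, a HEAD carrying the accumulated use-count lets
        # the origin server learn the exact hit count for this copy.
        return {"Cache-control": "use-count=%d" % entry.uses}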

This means that, for example, this kind of interchange would
provide accurate hit-counts:

	Cache					Server

	GET
	Cache-control: use-count=0

						200 OK
						Cache-control: max-uses=10

	    <answers 5 other requests>

	    <decides to replace entry>

	HEAD
	Cache-control: use-count=5
						200 OK

So here we have two HTTP transactions instead of six, and almost
no additional mechanism (i.e., no requirements for standardized
log formats, etc.).  And note that if the cache entry is never
reused, the additional HEAD request is unnecessary (because it
would report a use-count of zero).
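
For what it's worth, the server-side bookkeeping needed to turn that
interchange into an exact count is tiny; the sketch below (the names,
URI, and parsing are mine) replays the example, counting one origin
fetch plus the five cache hits reported at eviction, six requests in
all:

    import re

    hits = {}   # URI -> total requests served (origin fetch + cache hits)

    def parse_use_count(cache_control):
        m = re.search(r"use-count=(\d+)", cache_control or "")
        return int(m.group(1)) if m else None

    def on_get(uri, cache_control):
        hits[uri] = hits.get(uri, 0) + 1          # the fetch itself is one hit

    def on_head(uri, cache_control):
        n = parse_use_count(cache_control)
        if n:                                      # eviction-time report
            hits[uri] = hits.get(uri, 0) + n

    # Replaying the interchange above (the URI is made up):
    on_get("/doc.html", "use-count=0")             # cache's initial GET
    on_head("/doc.html", "use-count=5")            # HEAD at replacement time
    assert hits["/doc.html"] == 6                  # 1 fetch + 5 cache hits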

-Jeff
