Re: Dates vs. Deltas [was: An alternative to explicit revocation? ]

    We should describe this in such a way that any cache which chooses to
    more accurately estimate latency would be allowed to do so.
    
I can't think of a way to do this that wouldn't involve making
dangerous assumptions.  Remember that a lost TCP packet or two
can add a tremendous amount of latency between the time a
response is generated and the time it is received, so it's not OK
to simply rely on NTP synchronization.
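
To make the concern concrete, here is a rough sketch (in Python, purely
illustrative) of how a cache might estimate a response's age without
trusting clock synchronization alone; the function and variable names
are my own, not anything from a draft:

def estimated_age(date_value, request_time, response_time, now):
    """Conservative age estimate for a cached response.

    All arguments are seconds since the epoch:
      date_value    -- the origin server's Date header (its clock)
      request_time  -- local clock when the request was sent
      response_time -- local clock when the last byte arrived
      now           -- current local clock
    """
    # Apparent age from comparing clocks; wrong if the clocks disagree.
    apparent_age = max(0.0, response_time - date_value)

    # The whole exchange bounds how long the response may have spent in
    # transit (e.g. waiting on TCP retransmissions).
    response_delay = response_time - request_time

    # Be deliberately conservative: clock-based age plus possible transit
    # delay, plus the time the response has sat in this cache since.
    return apparent_age + response_delay + (now - response_time)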

One could argue that NTP-based synchronization combined with
absolute dates would allow finer-grained control over freshness,
I suppose.
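
For what it's worth, the difference between the two schemes in the
subject line comes down to something like the following sketch (again
Python, again just illustrative; the names are mine):

def fresh_by_absolute_date(expires, local_now, clock_offset):
    # Absolute-date scheme: the cache must translate its own clock into
    # the origin's timescale (clock_offset = origin clock minus local
    # clock), so any error in that offset feeds straight into the
    # freshness decision.
    return local_now + clock_offset < expires

def fresh_by_delta(age_estimate, freshness_lifetime):
    # Delta scheme: only elapsed time matters, so clock skew drops out,
    # but the age estimate must account for transit delay, as above.
    return age_estimate < freshness_lifetime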

    In the interest of stability, perhaps the minimum fresh interval (for
    cacheable entities) should be something like twice the time between
    when the REQUEST is sent and when the last byte of the RESPONSE is
    received. Or a weighted moving average of the same. I hypothesize that
    over time this will tend to improve caching, which will tend to reduce
    network load, which will tend to allow more rapidly updated data to
    have a shorter fresh interval.
    
I'm not sure what you mean by "minimum fresh interval".  Do you
mean the "minimum amount of time that a value could be treated as fresh"?
This seems to contradict the principle that the server should be
able to limit the duration of freshness; since the server cannot know
when the last byte of the response is received, it cannot make this
computation.
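
If I am reading the proposal correctly, it amounts to a cache-side
heuristic along these lines (Python, purely illustrative; the class name
and smoothing constant are mine, not from the proposal):

class ResponseTimeFreshness:
    """Derive a heuristic fresh interval from how long exchanges take."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha      # weight given to each new sample
        self.smoothed = None    # smoothed request-to-last-byte time

    def observe(self, request_time, last_byte_time):
        # One sample of "the time between when the REQUEST is sent and
        # when the last byte of the RESPONSE is received".
        sample = last_byte_time - request_time
        if self.smoothed is None:
            self.smoothed = sample
        else:
            # One reading of "a weighted moving average of the same":
            # an exponentially weighted moving average.
            self.smoothed = ((1 - self.alpha) * self.smoothed
                             + self.alpha * sample)
        return self.smoothed

    def min_fresh_interval(self):
        # "Twice the time" of the (smoothed) exchange.
        return 2 * self.smoothed if self.smoothed is not None else 0.0

Note that both inputs exist only at the cache, which is exactly why a
server-imposed limit on freshness cannot be phrased this way.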

-Jeff

Received on Thursday, 4 January 1996 01:42:27 UTC