- From: Henrik Nordström <henrik@henriknordstrom.net>
- Date: Thu, 16 Feb 2012 08:22:18 +0100
- To: Mark Nottingham <mnot@mnot.net>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On Tue, 2012-02-07 at 15:03 +1100, Mark Nottingham wrote:

> A few problems:
>
> 1. Since it specifies required cache behaviour, it really should be in p6

Yes.

> 2. The second MAY seems to conflict with the MUST

Not really, but it should perhaps read "cached", not "cache". It's a
mutually exclusive condition. Remember that MUST has priority over MAY,
and with the two even in the same sentence there isn't much room for
confusion.

If the cached entry's identity matches the HEAD response, then it may be
updated with information from the HEAD response, just as in the case of
304 or 204 responses.

If the cached entry's identity does not match the HEAD response, then
the cached entry is stale, even if its cached expiry information says
otherwise.

> 3. Caches can store multiple representations for a resource, so there
> is no "current representation."

Not really a problem. If there is no Vary, then URI == resource. On
resources using Vary, representations of a URI should have
identification means making them unique:

a) ETag
b) Content-Location

and when Vary does not use "*", there is a clear mapping of
request -> representation, allowing HEAD responses to clearly mark
earlier responses that match the request as stale.

> Part of the problem here really is that it's not "updating" any
> response, but it is potentially invalidating an old one.

What? HEAD responses that match the cached representation may update
the cached representation with a new Date etc. It's a form of cache
validation, if you like.

> To resolve this, we could construct a requirement that refers to p6
> 2.7 ("Caching Negotiated Responses") to identify the correct response
> to compare to and (potentially) invalidate.

It's simply the last known response that matches the request. This
response may be composed by aggregating many earlier responses:

  304
  HEAD response
  merged 204 responses
  partial 200 response
  200 response

> However, I wonder if a) this is widely implemented, and b) the
> complexity is worth it.

a) Probably not.
b) Probably worthwhile explaining the above response merge model of
caches.

> I.e., we could alternatively just remove everything after the first
> sentence (i.e., treat the second MAY as primary, and therefore make
> the whole thing redundant).

Some common guidance on how/when cache entries are updated, merged and
invalidated may help. The same properties and conditions apply in all
cases, both in intent (keeping the cache up to date while avoiding
unneeded round trips) and in issues (avoiding bad updates, and servers
not providing correct data in many cases). Rough sketches of the
update/invalidate decision and of the merge model are appended below.

Regards
Henrik
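To make the update-or-invalidate rule above concrete, here is a minimal
Python sketch. The CacheEntry structure and the helper names are
hypothetical illustrations, not taken from HTTPbis or from any
particular cache implementation; a real cache compares validators as
the spec requires rather than with this simplified header check.

```python
import time
from dataclasses import dataclass, field


@dataclass
class CacheEntry:
    headers: dict                 # stored response header fields
    body: bytes = b""
    stale: bool = False
    refresh_time: float = field(default_factory=time.time)


def identity_matches(stored: dict, head: dict) -> bool:
    """True if the HEAD response appears to identify the same representation."""
    for name in ("ETag", "Content-Location", "Last-Modified"):
        if name in stored and name in head and stored[name] != head[name]:
            return False
    return True


def apply_head_response(entry: CacheEntry, head: dict) -> None:
    """Update the stored entry from a HEAD response, or mark it stale."""
    if identity_matches(entry.headers, head):
        # Same representation: refresh metadata (Date, Cache-Control, Expires, ...)
        # just as a 304 validation response would.
        entry.headers.update(head)
        entry.refresh_time = time.time()
    else:
        # Different representation: the stored entry is stale even if its own
        # expiry information says otherwise.
        entry.stale = True
```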
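Building on the same hypothetical CacheEntry, this second sketch
illustrates the merge model described above: the response that matches a
request may be composed by folding several later responses (304, HEAD,
204, partial content) onto a stored 200. This too is an assumption-laden
simplification rather than how any particular cache behaves; real range
merging follows Content-Range.

```python
def merge_responses(entry: CacheEntry, later: list) -> CacheEntry:
    """Fold later responses (304, HEAD, 204, partial content) onto a stored 200.

    Each element of `later` is a (status, headers, body) tuple.
    """
    for status, headers, body in later:
        if not identity_matches(entry.headers, headers):
            # A response for a different representation invalidates the entry.
            entry.stale = True
            break
        if status == 206 and body:
            # Partial content: a real cache combines ranges using Content-Range;
            # here we only hint that the stored body can be extended.
            entry.body += body
        # 304, 204 and HEAD responses carry no entity to store, only headers
        # that can refresh the stored metadata.
        entry.headers.update(headers)
        entry.refresh_time = time.time()
    return entry
```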
Received on Friday, 17 February 2012 01:38:08 UTC