
Re: I-D Action: draft-nottingham-linked-cache-inv-00.txt

From: Mark Nottingham <mnot@mnot.net>
Date: Wed, 1 Jun 2011 08:05:14 +1000
Cc: Brian Pane <brianp@brianp.net>, httpbis Group <ietf-http-wg@w3.org>, Balachander Krishnamurthy <bala@research.att.com>, cew@cs.wpi.edu
Message-Id: <A4FFD300-095F-41F7-A8C6-922331364487@mnot.net>
To: "Poul-Henning Kamp" <phk@phk.freebsd.dk>

Hi PHK,

I have running code and a few years of deployment experience, and things aren't quite so dire.

See:
  https://github.com/mnot/squid-lci

I keep the associations in memory (hashed in some cases to preserve space), and that seems to work well.
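For concreteness, the kind of in-memory association table I mean looks roughly like this (a hypothetical Python sketch, not the actual squid-lci code; the 8-byte hash is one way to preserve space, at the cost of rare false-positive invalidations on collisions):

```python
import hashlib

class InvalidationIndex:
    """Maps an 'inv-by' target URL to the cache keys that depend on it.
    Target URLs are hashed to save memory; a collision can only cause a
    spurious (safe) invalidation, never a missed one for a known target."""

    def __init__(self):
        self._deps = {}  # hashed target URL -> set of dependent cache keys

    def _key(self, url):
        return hashlib.sha1(url.encode("utf-8")).digest()[:8]

    def add(self, stored_url, inv_by_url):
        # Called when storing a response that carries
        # Link: <inv_by_url>; rel="inv-by"
        self._deps.setdefault(self._key(inv_by_url), set()).add(stored_url)

    def invalidated(self, target_url):
        # Called on a successful unsafe request (POST/PUT/DELETE) to
        # target_url: returns the dependents to invalidate and drops
        # the now-consumed associations.
        return self._deps.pop(self._key(target_url), set())
```

Invalidation is then a single lookup per nexus URL; the cost is the per-object insert on storage and the resident map.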

It's true that the invalidations aren't atomic, but in practice they're applied quickly enough.

It's true that a large proxy cache probably won't want to spend the memory on this scheme, but that's OK.

Cheers,


On 01/06/2011, at 3:53 AM, Poul-Henning Kamp wrote:

> 
> I am sure this will have pretty bad performance and I am not
> convinced it will actually work as expected.
> 
> 
> Whenever an object with "inv-by" is received, the cache is forced
> to do a lookup of the target of that "inv-by" in order to establish
> the necessary linkage (= a storage cost) to effect the dependent
> invalidation.
> 
> This lookup+linkage must be done for each and every object with an
> "inv-by" property, no matter how frequent or infrequent invalidations
> are.
> 
> (The only other way to implement this is to do a brute force scan
> of the cache on each invalidation.)
> 
> Turning the linkage around, by setting "invalidates" on the nexus
> object, is not as horrible, because it allows lazy evaluation without
> storage overhead, which again means that the cost will only be borne
> on actual invalidations.
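A lazy, ban-style evaluation of that reversed linkage might look like this (a hypothetical Python sketch; Varnish's actual ban expressions are far richer than the exact-URL match used here, and generations stand in for timestamps):

```python
class LazyBanList:
    """Changing a nexus object appends a ban entry; each cached object is
    checked against bans newer than itself only when it is next served.
    No per-object work, and no extra storage, happens at ban time."""

    def __init__(self):
        self._gen = 0    # monotonically increasing "ban clock"
        self._bans = []  # list of (generation, banned URL)

    def current_gen(self):
        return self._gen

    def ban(self, url):
        # Record the invalidation; cost is deferred to serve time.
        self._gen += 1
        self._bans.append((self._gen, url))

    def is_valid(self, cached_url, stored_gen):
        # Serve-time check: the object is invalid if any ban newer than
        # its storage generation matches it.
        return not any(g > stored_gen and url == cached_url
                       for g, url in self._bans)
```

This is why the cost is borne only on actual invalidations: `ban()` is O(1), and objects that are never served again never pay for the check.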
> 
> 
> In either case, a massively parallel multi-user cache either has
> to suffer a debilitating amount of locking for each invalidation,
> or take a very relaxed view of the temporal relationship between
> the objects associated by the Link: header.
> 
> The next thing that strikes me is that all implementations will
> have to contain some kind of loop avoidance to prevent some really
> stupid behaviour (A invalidates B, B invalidates C, C invalidates A).
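The loop avoidance asked for here is just cycle detection over the "invalidates" graph; a minimal sketch (hypothetical Python, with the link relations pre-collected into a map for illustration):

```python
from collections import deque

def cascade_invalidate(start_url, invalidates_map):
    """Breadth-first walk of 'invalidates' links with a visited set, so a
    cycle such as A -> B -> C -> A terminates instead of looping forever.
    invalidates_map (URL -> list of URLs it invalidates) is an assumed
    pre-built view of the Link headers, not part of the draft."""
    seen = {start_url}
    queue = deque([start_url])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)  # invalidate this cache entry
        for nxt in invalidates_map.get(url, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```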
> 
> 
> 
> I do not have a ready proposal for a better way of doing this that
> does not involve either a mandatory "lazy ban" facility (like we
> have in Varnish) or a mandatory secondary index in the cache.
> 
> This makes me suspect that this is an attempt to solve the wrong
> problem in the right place, or the right problem in the wrong place.
> 
> After all that negativity, let me point out constructively that
> prototyping the "inv-by" and "invalidates" properties in Varnish
> could serve as a sanity check (in the sense of "running code") to
> see whether this concept would actually work in practice.
> 
> Poul-Henning
> 
> -- 
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe    
> Never attribute to malice what can adequately be explained by incompetence.

--
Mark Nottingham   http://www.mnot.net/
Received on Tuesday, 31 May 2011 22:05:43 GMT
