- From: Simon Spero <ses@tipper.oit.unc.edu>
- Date: Mon, 14 Aug 1995 19:29:00 -0700 (PDT)
- To: Lou Montulli <montulli@mozilla.com>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Lou-

If the problem is simply maintaining cache validity, then size may not be quite enough, yet MD5 may be too much. Size will catch quite a few problems, such as truncation or extension, together with most cases of completely incorrect contents.

However, one common mode of failure involves files that are created with the correct size but whose contents are in error. This can happen in several ways: NFS lossage is one; system failures when operating on memory-mapped files are another (the usual M.O. for operating on a mapped file is to grow the file to the correct size, then read into the mapped buffer). To detect these more subtle corruptions we need to use a checksum algorithm.

There are two main types of checksum: those used to protect against accidental corruption, and those designed to guard against deliberate modification. An example of the former is the IP checksum algorithm; examples of the latter are MD5 and SHA. Guarding against deliberate evil-doers is much harder than keeping tabs on random screwups; MD5 requires multiple passes over the data and is probably overkill for this application.

The best compromise would probably be a 32-bit checksum or CRC, which can be generated quickly and easily. The checksum could even be included as part of a corresponding modification to Last-Modified.

Simon
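For concreteness, here is a sketch of the kind of fast 32-bit CRC Simon describes. The particular polynomial (0xEDB88320, the reflected CRC-32 used by Ethernet and zip) is an illustrative assumption; the message itself only calls for "a 32-bit checksum or CRC". A single table-driven pass over the data is all that is needed, in contrast to MD5's multiple rounds.

    /*
     * Illustrative table-driven CRC-32 (a sketch; the polynomial
     * choice is an assumption, not part of the original message).
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    static uint32_t crc_table[256];
    static int table_ready = 0;

    /* Precompute the 256-entry lookup table for the reflected
       polynomial 0xEDB88320. */
    static void build_table(void)
    {
        for (uint32_t n = 0; n < 256; n++) {
            uint32_t c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : (c >> 1);
            crc_table[n] = c;
        }
        table_ready = 1;
    }

    /* Update a running CRC with a buffer of bytes; start with crc = 0.
       One table lookup per byte -- a single pass over the data. */
    static uint32_t crc32_update(uint32_t crc, const unsigned char *buf,
                                 size_t len)
    {
        if (!table_ready)
            build_table();
        crc ^= 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++)
            crc = crc_table[(crc ^ buf[i]) & 0xFF] ^ (crc >> 8);
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        const unsigned char msg[] = "123456789";
        /* The standard CRC-32 check value for "123456789"
           is 0xCBF43926. */
        printf("crc32 = 0x%08X\n",
               crc32_update(0, msg, sizeof msg - 1));
        return 0;
    }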
Received on Monday, 14 August 1995 19:29:09 UTC