- From: Daniel DuBois <dan@spyglass.com>
- Date: Tue, 13 Aug 1996 14:54:12 -0700
- To: Jeffrey Mogul <mogul@pa.dec.com>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, swingard@spyglass.com
> If your server does not have a way to generate an ETag field that
> meets the specification, then don't send one.

I'm having a moral dilemma with regard to my recommendation to the server team about what would be a good means of generating an ETag. Warning: everything that follows is an implementor's issue, not technically a protocol specification issue.

Initially, I felt a hash of {the full pathname (to distinguish variants), the size of the file, and the OS-reported last-modification time} would be sufficient to be a strong validator. But, technically, that doesn't guarantee that a file that has changed twice in the same second gets a different ETag value, which the spec says is a must.

So now I have a dilemma. On NT I might have an easy solution, as supposedly FILETIMEs are stored as 100-nanosecond intervals. On UNIX, do I have to use a W/ weak validator? I'm strongly concerned that every time someone aborts a GIF download and returns to the containing page, smart browsers of the future will be using If-Match with Range GETs, and the Spyglass Server will be left out in the cold with its wimpy weak validators.

Certainly, reading in the file and doing an MD5 or checksum of its contents each time we serve it is out of the question for performance reasons. I would think that with our file-based web server, numerous changes per second aren't as much of an issue as they would be for a database back-end generating content dynamically. I'm thinking that such dynamic applications were the focus and intent of the restrictions on strong validators, so maybe I can slide by?

Here's everyone's chance to express an opinion.

-----
Daniel DuBois, Traveling Coderman          http://www.spyglass.com/~ddubois/
A polar bear is a rectangular bear after a coordinate transform.
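For concreteness, here is a minimal sketch of the path+size+mtime scheme described above, assuming POSIX stat() and an FNV-1a hash with a one-second "recently modified" heuristic; the hash choice and the heuristic are illustrative assumptions, not what the Spyglass server actually does:

    /*
     * Illustrative sketch only: build an ETag from the file's full path,
     * its size, and its last-modification time, as discussed above.
     * The FNV-1a hash and the one-second freshness check are assumptions
     * made for this example.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <time.h>
    #include <sys/stat.h>

    static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
    {
        const unsigned char *p = data;
        while (len--) {
            h ^= *p++;
            h *= 1099511628211ULL;            /* FNV-1a 64-bit prime */
        }
        return h;
    }

    /* Write an ETag for `path` into buf.  Returns 0 on success, -1 on error. */
    int make_etag(const char *path, char *buf, size_t buflen)
    {
        struct stat st;
        uint64_t h = 14695981039346656037ULL; /* FNV-1a 64-bit offset basis */

        if (stat(path, &st) != 0)
            return -1;

        h = fnv1a(path, strlen(path), h);     /* full pathname distinguishes variants */
        h = fnv1a(&st.st_size, sizeof st.st_size, h);
        h = fnv1a(&st.st_mtime, sizeof st.st_mtime, h);

        /*
         * If the file changed within the last second, a second change in
         * the same second could go unnoticed, so only a weak validator
         * is safe -- exactly the dilemma described in the message above.
         */
        if (time(NULL) - st.st_mtime < 1)
            snprintf(buf, buflen, "W/\"%016llx\"", (unsigned long long)h);
        else
            snprintf(buf, buflen, "\"%016llx\"", (unsigned long long)h);

        return 0;
    }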
Received on Tuesday, 13 August 1996 15:17:11 UTC