Re: multi-host virtual sites for HTTP 1.2

Another set of thoughts about the replication proposal and my
call for explicit semantics... one area where the details get
potentially sticky is conditional requests and caching.

(Another area is POSTs, but a site can avoid ambiguities where
they would cause problems by not replicating, say, its cgi
directories.)

My first reaction was that maybe the problems could be solved by
using only GET and HEAD across the replicated servers, but I
can still see some concerns.

The concerns are less acute when all the replicated servers
share a networked file system, like NFS, DFS, AFS, etc., but there
still might be annoying race conditions. It gets goofier
when the replicated servers are mirrored, say, by a nightly
update via ftp and cron jobs.

If I do something like a conditional GET with an If-Modified-Since
on a particular server, the result depends on the state of that
server, but may not reflect changes made at another replica that
haven't yet been propagated.
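
To make that concrete, here is a sketch of the kind of exchange
I mean (the host names and dates are invented for illustration).
The same conditional GET goes to two replicas of one site:

    GET /index.html HTTP/1.0
    If-Modified-Since: Wed, 10 Jul 1996 12:00:00 GMT

A replica that has already picked up last night's update answers
with a full response:

    HTTP/1.0 200 OK
    Last-Modified: Thu, 11 Jul 1996 02:00:00 GMT

while a replica still waiting for the mirror run answers:

    HTTP/1.0 304 Not Modified

So the client's view of "has this changed?" depends entirely on
which replica it happened to reach.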

There might be some way to use the HTTP/1.1 semantics along with
information about the replication schedule to return consistent
replies, but I don't yet understand those semantics well enough
to comment.
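
Purely as a guess at how that might look (the headers are standard
ones; tying them to the mirror schedule is my invention): a replica
that knows the nightly update runs at 03:00 could decline to claim
freshness past the next run, e.g.

    HTTP/1.0 200 OK
    Last-Modified: Wed, 10 Jul 1996 23:00:00 GMT
    Expires: Fri, 12 Jul 1996 03:00:00 GMT

so a cache would revalidate after the next replication pass instead
of serving a possibly-stale replica indefinitely. Whether that
actually meshes with the 1.1 caching rules is the part I can't
yet judge.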

Another approach is to say that this is all a meta-protocol
layered on top of caching and so forth. But this would lose
the potential benefit that a smart cache might be able to
return any of the replicas.

Someone who understands the caching in HTTP/1.1 better than I
might be able to unravel this further. In any case, this is an
area where I'd suggest some explicit guidelines be attached to
the proposal.

Received on Thursday, 11 July 1996 07:14:39 UTC