Re: HTTP Caching Model?

> Why is that such a bad thing? The server can give a lot of information
> about what's available by giving several URI: headers.

Hmmm, I don't necessarily support this -- it wastes bandwidth.
Besides, the set of available versions may vary over time, so the
proxy can never be sure it has complete knowledge of the available
presentations without going to the original server.

That's why:

> It had better
> go to the original server and get the real answer, if it doesn't have
> enough information locally.

Yes!  If it doesn't find a perfect match locally, it goes to the
remote.  It may be that the remote tells it to use the local copy,
which is fine, and that still saves a lot of bandwidth.
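
Concretely, something like the sketch below is what I have in mind.
This is modern Python purely for illustration; the in-memory cache
and the fetch_via_proxy helper are hypothetical, not code from any
real proxy.  The point is the conditional GET: when the remote
answers 304, only headers cross the wire and the local copy is used.

import http.client

# Hypothetical cache: (path, accept_type) -> cached body + validator.
cache = {}

def fetch_via_proxy(host, path, accept_type):
    entry = cache.get((path, accept_type))
    conn = http.client.HTTPConnection(host)
    headers = {"Accept": accept_type}
    if entry:
        # We have *a* copy but no perfect knowledge of the variants,
        # so ask the origin anyway, conditionally, so that a hit
        # costs almost nothing on the wire.
        headers["If-Modified-Since"] = entry["last_modified"]
    conn.request("GET", path, headers=headers)
    resp = conn.getresponse()
    if resp.status == 304 and entry:
        # The remote told us to use the local copy.
        return entry["body"]
    body = resp.read()
    cache[(path, accept_type)] = {
        "body": body,
        "last_modified": resp.getheader("Last-Modified", ""),
    }
    return body

The extra round trip is headers only, which is the bandwidth saving
I'm after.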

> >1) Have the proxy revert to the source if it receives a request which
> >   includes a variant category that has not already been cached;

Definitely.

> >2) Tell the proxy about all the possible variants so that it can make
> >   the decision itself;

No; the list can't be guaranteed to be exhaustive after some period of
time.  Rather than further complicating things with lifetimes for this
information, have the proxy connect to the remote every time in these
cases to make a check.
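
Sketched in the same hypothetical Python style (the cache keyed on
path plus the negotiation headers, and the origin_fetch callback, are
illustrative assumptions, not any real proxy's API), the check I mean
boils down to: serve from cache only on a perfect variant match,
otherwise always revert to the origin.

def lookup(cache, path, accept_type, accept_language):
    """Return a cached entry only on a perfect variant match."""
    return cache.get((path, accept_type, accept_language))

def handle_request(cache, origin_fetch, path, accept_type, accept_language):
    entry = lookup(cache, path, accept_type, accept_language)
    if entry is not None:
        return entry  # perfect match: serve locally
    # Unknown variant category: don't guess from a stored variant
    # list; ask the origin and remember the answer.
    body = origin_fetch(path, accept_type, accept_language)
    cache[(path, accept_type, accept_language)] = body
    return body

No variant list to keep fresh, no lifetimes to invent.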

> >3) Use URCs so that the decision can be made within the client and not
> >   as part of the request.

Mmmmm (the way Marge Simpson sez it).

> >I think we can live with (1) for HTTP/1.0,
> 
> Bingo. I agree.

Jackpot. Me too.

Cheers,
--
Ari Luotonen				http://home.mcom.com/people/ari/
Netscape Communications Corp.
650 Castro Street, Suite 500
Mountain View, CA 94041, USA
