- From: Shel Kaphan <sjk@amazon.com>
- Date: Wed, 30 Aug 1995 11:51:57 -0700
- To: Larry Masinter <masinter@parc.xerox.com>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Larry Masinter writes:
> I think we're better off sticking with computer science terminology,
> rather than reaching into mathematics when describing Internet
> protocols. I'd suggest we say that GET should be
>
>     "without additional side-effects if invoked again."
>
> That is, a 'GET' method might cause side effects, but reinvoking it
> with the same URL shouldn't have any additional side effects.
>

Though I generally agree with what you're saying, there's a slight
problem with this, I think. If the first GET on a URL has side effects
necessary to the semantics intended by the server, then it has to avoid
being served from a cache. But caches are potentially public, and other
methods can leave things in caches under the URL the GET will use (a
POST with the same request-URI; anything that returns 2xx and a
Location header). So it seems a bit dangerous to build a system where a
"first" GET (however that could be detected) was supposed to have side
effects, but subsequent ones weren't.

Instead, couldn't we say that if GET on a particular URL has side
effects and produces a cacheable result, the side effects must be
*unimportant* to the server, since by making the result cacheable it is
giving up the right to "see" certain future GETs on that URI?

> Note that this definition says nothing at all about what is returned
> by the GET method, which may return a different result every single
> time, no matter how closely spaced the calls are. The issue is whether
> doing it again has an effect on the state of the server.
>

I completely agree.

--Shel
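For concreteness, a minimal sketch of the case at issue, assuming an
HTTP/1.0 server whose GET increments a hit counter (the URL and values
here are hypothetical). The server keeps the response out of caches by
sending an Expires date no later than the Date header, which an
HTTP/1.0 cache must treat as already expired:

    GET /cgi-bin/counter HTTP/1.0

    HTTP/1.0 200 OK
    Date: Wed, 30 Aug 1995 11:51:57 GMT
    Expires: Wed, 30 Aug 1995 11:51:57 GMT
    Content-Type: text/html

    <p>You are visitor number 42.</p>

Marked this way, the response is never served from a cache, so every
GET reaches the origin server and the side effect (the increment) is
never silently skipped; conversely, a server that omits such a header
has, on the view above, declared the side effect unimportant.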
Received on Wednesday, 30 August 1995 11:57:00 UTC