Re: whenToUseGet-7 counter-proposal

It's not a good idea, IMHO, to cripple URIs that have query components 
just because doing so is a convenient heuristic.

Most modern caches that I'm aware of will store representations of these 
URIs, and if I'm not mistaken, many crawlers do index them; they just 
won't submit forms.

Regarding your proposed language, if systems cannot rely on HTTP GET 
being safe, how will caching and crawling work at all?
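To make the caching point concrete, here's a rough sketch (my own illustration, not any particular cache's actual logic) of how HTTP/1.1 lets a cache store a response to a GET on a query-string URI, provided the origin supplies explicit freshness information; absent that, RFC 2616 section 13.9 says caches shouldn't treat such responses as fresh:

```python
from urllib.parse import urlsplit

def may_cache(method: str, url: str, response_headers: dict) -> bool:
    """Very rough cacheability check for a response, per the spirit of
    RFC 2616 section 13.9. Real caches apply many more rules."""
    if method != "GET":
        return False
    has_query = bool(urlsplit(url).query)
    # Did the origin server provide explicit expiration information?
    explicit_freshness = ("Cache-Control" in response_headers
                          or "Expires" in response_headers)
    # Query-string URIs: store/serve only with explicit freshness info.
    if has_query and not explicit_freshness:
        return False
    return True

# A query-string GET *can* be cached when the server says so:
may_cache("GET", "http://example.org/find?q=rest",
          {"Cache-Control": "max-age=3600"})   # True
# ...but not on heuristics alone:
may_cache("GET", "http://example.org/find?q=rest", {})  # False
```

The point being: the query component doesn't make a URI uncacheable or unsafe; it just changes which defaults apply.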



On Tuesday, April 23, 2002, at 09:10  PM, Joshua Allen wrote:

>>    HTTP GET should be "safe" (because there are systems
>>    and operations that rely on it being so)
>
> I would suggest clarifying this by adding the sentence:
>
> "Systems should NOT however, rely on HTTP GET to be 'safe'"
>
> Advising otherwise would be misleading, particularly in the case of URLs
> with query strings.  There is a reason that most web crawlers, many
> other caches, etc. exclude URLs that have query strings -- the fact is
> that most systems do NOT depend on *all* GETs to be "safe".
>
>>    (e.g., with query strings).
>
>
>
--
Mark Nottingham
http://www.mnot.net/

Received on Wednesday, 24 April 2002 00:42:46 UTC