Re: don't use POST for big GET [was: Dictionaries in HTML]

> If this is not well specified, interoperability will suffer because it
> isn't sufficient to require servers to accept what they 'serve'.
> Any intermediate proxies must be expected to handle the data as well.

Nope.  The proxy/gateway is perfectly capable of refusing to serve
any request it deems to be unreasonable.  The user is left with the
choice of getting a "better" proxy, demanding better service from
someone else's proxy, or asking the content provider to be more
reasonable in the URLs provided.  None of these are protocol issues.

> And server software has little control over the size and shape of URL
> references in the documents they serve.

With few exceptions, content providers don't publish URLs that result in
errors on their own servers.

> Rarely is there no practical limit on the size of an object no matter how
> hard the implementor tries to avoid limits.

There is always some limit (e.g., available memory), but there is a
significant difference between limits in the protocol and limits in
the implementations of the protocol.  There is no limit in the HTTP protocol
on the length of URLs, the length of header fields, the length of an
entity body, or any other field marked *<whatever> or #<listitem>.
The protocol already says this in the BNF -- I suppose we could add it
to the text for redundancy.
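
For reference, the repetition constructs in the augmented BNF look
roughly like this (the rule names below are invented for illustration;
only the operators themselves come from the spec):

    element-list   = *element          ; zero or more, no upper bound
    required-list  = 1#element         ; comma-separated list, at least one item
    bounded-form   = <n>*<m>element    ; at least <n>, at most <m> repetitions

When the upper bound <m> is omitted, there is no maximum -- which is
the sense in which the protocol imposes no limit on the fields
mentioned above.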

If there is an interoperability problem with long URLs and existing
applications, then it is reasonable to note that problem.  The problem
that I know of is that some older browsers and very old servers assume
a maximum of 255 bytes, and thus do not interoperate well.  The reason
this does not justify a protocol maximum is that people can easily
avoid those limits by replacing those applications.

> I would favor something to the effect of:
>   Servers and proxies MUST handle URLs at least 4096 bytes long.
>   Servers and proxies SHOULD handle URLs at least 64K bytes long.
>   Content providers SHOULD NOT expect correct handling of URLs greater
>   than 4096 bytes in length. Content providers MUST NOT expect correct
>   handling of URLs greater than 64K long.

Those numbers are inventions -- they have no meaning for the protocol,
and are not representative of any application limits I know about.
We shouldn't invent limitations that don't exist.

What we should do is think about (for HTTP/1.1) what needs to be added
in order to make the protocol more explicit about why the request is bad.
To that end, I think we should add 4xx status codes for Request-URI too long,
header field too long, and body too long (I think I already did that one).
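
As a rough sketch of what I have in mind (the status code number and
reason phrase below are placeholders, not proposed assignments), a
server that runs into its own implementation limit could answer with
something along these lines instead of a generic 400:

    HTTP/1.1 414 Request-URI Too Long
    Content-Length: 0

The particular number doesn't matter here; the point is only that the
response should name the actual condition, so the client can tell an
over-long URL apart from a malformed request.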

 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92717-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/

Received on Friday, 9 February 1996 17:17:14 UTC