Re: don't use POST for big GET [was: Dictionaries in HTML ]

> > You could also say that servers may not silently truncate or ignore
> > URLs that are beyond their implementation limit; instead they must
> > return a specific error (e.g. "URL too long").  This keeps
> > implementations from taking the lazy way out and still being able to
> > claim compliance with the spec.
> 
> The problem is more that many implementations use static buffers of, let's say 
> 1024 bytes for reading the HTTP request line. If the request line is longer 
> than the buffer then the application dies or is open for security attacks. 
> This was the case with a version of the NCSA server and also a version of the 
> Netscape client, I believe.
> 
> Henrik Frystyk Nielsen, <frystyk@w3.org>

Henrik, having a limit is fine.  Silently failing to detect an
overflow is *not* fine.

In the case you cite, better programming (assuming one felt one *had*
to keep a static buffer) would be to check whether a line terminator
falls within the buffer.  If it does not, the entire request should fail.

This would give that implementation a maximum request-line size of
roughly 1020 bytes.
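For concreteness, here is a rough sketch of that check in C.  The
1024-byte buffer and the error reply are illustrative assumptions on
my part, not taken from any particular server:

#include <stdio.h>
#include <string.h>

#define REQ_BUF_SIZE 1024   /* assumed static buffer size */

/* Read one request line; return -1 if it does not fit in the buffer. */
static int read_request_line(FILE *conn, char *buf, size_t bufsize)
{
    if (fgets(buf, (int)bufsize, conn) == NULL)
        return -1;              /* connection closed or read error */

    if (strchr(buf, '\n') == NULL)
        return -1;              /* no terminator in buffer: line too long */

    return 0;
}

int main(void)
{
    char line[REQ_BUF_SIZE];

    /* stdin stands in for the connection stream here. */
    if (read_request_line(stdin, line, sizeof line) != 0) {
        /* Fail the whole request with an explicit error -- e.g. the
           "URL too long" status proposed above -- instead of acting
           on a silently truncated line. */
        fputs("HTTP/1.0 400 Bad Request\r\n\r\n", stdout);
        return 1;
    }

    printf("Got request line: %s", line);
    return 0;
}

The point is simply that the overflow gets detected and reported, not
that the buffer is any bigger.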

One cannot stop bad implementations, but at least one should be able
to prevent them from claiming compliance with the spec.  Specifying
the protocol behavior when implementation limits are exceeded is
a reasonable thing for a specification to do (IMHO).

	Neil Katin

Received on Friday, 9 February 1996 13:47:15 UTC