Re: don't use POST for big GET [was: Dictionaries in HTML ]

From: Neil Katin <katin@amorphous.com>
Date: Fri, 9 Feb 1996 13:44:18 -0800
Message-Id: <9602092144.AA04999@amorphous.com>
To: frystyk@w3.org
Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com

> > You could also say that servers may not silently truncate or ignore
> > URLs that are beyond their implementation limit; instead they must
> > return a specific error (e.g. "URL too long").  This keeps
> > implementations from taking the lazy way out and still being able to
> > claim compliance with the spec.
> 
> The problem is more that many implementations use static buffers of, let's say 
> 1024 bytes for reading the HTTP request line. If the request line is longer 
> than the buffer then the application dies or is open for security attacks. 
> This was the case with a version of the NCSA server and also a version of the 
> Netscape client, I believe.
> 
> Henrik Frystyk Nielsen, <frystyk@w3.org>

Henrik, having a limit is fine.  Silently not detecting an
overflow is *not* fine.

In the case you cite, better programming (assuming one felt one *had*
to keep a static buffer) would be to check that a line terminator
appears within the buffer.  If not, the entire request should fail.

This would give a maximum request size for that implementation of ~1020
bytes.
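As a minimal C sketch of that check (the function name, buffer size, and
error handling here are illustrative, not from any particular server):

```c
#include <stdio.h>
#include <string.h>

#define REQ_BUF_SIZE 1024  /* illustrative static-buffer limit */

/* Read one request line into a fixed buffer.  Returns 0 on success,
 * -1 on read error or if the line overflows the buffer.  The key
 * point: when no '\n' lands inside the buffer, the line exceeded the
 * implementation limit, so the whole request is rejected (e.g. with
 * a "URL too long" error) rather than processed truncated. */
int read_request_line(FILE *in, char *buf)
{
    if (fgets(buf, REQ_BUF_SIZE, in) == NULL)
        return -1;                      /* EOF or read error */
    if (strchr(buf, '\n') == NULL)
        return -1;                      /* no terminator: overflow */
    return 0;
}
```

fgets() stores at most REQ_BUF_SIZE - 1 characters plus a NUL, so after
allowing for the terminator this gives roughly the ~1020-byte maximum
request line mentioned above.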

One cannot stop bad implementations, but at least one should be able
to prevent them from claiming compliance with the spec.  Specifying
the protocol behavior when implementation limits are exceeded is
a reasonable thing for a specification to do (IMHO).

	Neil Katin
Received on Friday, 9 February 1996 13:47:15 EST