
Re: don't use POST for big GET [was: Dictionaries in HTML]

From: Henrik Frystyk Nielsen <frystyk@w3.org>
Date: Fri, 09 Feb 1996 16:22:04 -0500
Message-Id: <9602092122.AA23467@www20>
To: Neil Katin <katin@amorphous.com>
Cc: NED@innosoft.com, fielding@avron.ICS.UCI.EDU, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Neil Katin writes:
> 
> Ned said:
> > There are plenty of ways to approach this that do not require putting a hard
> > upper limit on URL size. For example, you could say that "all implementations
> > are required to support URLs at least 255 characters long and implementations
> > are encouraged to support URLs of as long a length as is feasible".
> 
> You could also say that servers may not silently truncate or ignore
> URLs that are beyond their implementation limit; instead they must
> return a specific error (e.g. "URL too long").  This keeps
> implementations from taking the lazy way out while still claiming
> compliance with the spec.

The problem is more that many implementations use static buffers of, say, 
1024 bytes for reading the HTTP request line. If the request line is longer 
than the buffer, the application crashes or becomes open to security attacks 
such as buffer overflows. This was the case with a version of the NCSA server 
and, I believe, also a version of the Netscape client.
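[Editorial note: a minimal sketch of the defensive pattern being argued for here, not code from NCSA or Netscape. The limit of 1024 bytes and the helper name `read_request_line` are illustrative; the point is that a bounded read must report "line too long" to the caller, so the server can answer with an explicit error instead of overflowing or silently truncating.]

```c
#include <stdio.h>
#include <string.h>

#define MAX_REQUEST_LINE 1024  /* hypothetical implementation limit */

/* Copy the first line of `src` (terminated by LF or CRLF) into `buf`
 * of capacity `cap`.  Returns 0 on success, -1 if the line would not
 * fit -- the caller should then send an explicit error (e.g.
 * "414 Request-URI Too Long") rather than truncate silently. */
static int read_request_line(const char *src, char *buf, size_t cap)
{
    size_t n = 0;
    while (src[n] != '\0' && src[n] != '\n') {
        if (n + 1 >= cap)
            return -1;          /* over the limit: refuse, don't overflow */
        buf[n] = src[n];
        n++;
    }
    if (n > 0 && buf[n - 1] == '\r')
        n--;                    /* strip the CR of a CRLF terminator */
    buf[n] = '\0';
    return 0;
}
```

With a fixed `char line[MAX_REQUEST_LINE]`, a conforming server would call this, and on -1 reply with the error instead of parsing a truncated URL.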

-- 

Henrik Frystyk Nielsen, <frystyk@w3.org>
World-Wide Web Consortium, MIT/LCS NE43-356
545 Technology Square, Cambridge MA 02139, USA
Received on Friday, 9 February 1996 13:25:49 EST
