W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > January to April 1996

Re: don't use POST for big GET [was: Dictionaries in HTML ]

From: Albert Lunde <Albert-Lunde@nwu.edu>
Date: Fri, 9 Feb 1996 20:57:05 -0600 (CST)
Message-Id: <199602100257.AA024231025@lulu.acns.nwu.edu>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
> 
> Henrik Frystyk Nielsen wrote:
> 
> > The problem is more that many implementations use static buffers of, let's say
> > 1024 bytes for reading the HTTP request line. If the request line is longer
> > than the buffer then the application dies or is open for security attacks.
> > This was the case with a version of the NCSA server and also a version of the
> > Netscape client, I believe.
> 
> The problem you are probably referring to only occurred with
> really long hostnames.  The size of the URL (hostname excluded) is
> theoretically infinite.

The other issue raised in the old NCSA X Mosaic 2.0 forms document
is that "with the GET method, given the way many servers (e.g.
NCSA httpd) pass query strings from URLs to query
server scripts, you run an excellent chance of having the
forms content truncated by hard-coded shell command argument lengths."
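[To illustrate the mechanism being discussed: per the CGI convention, the
part of a GET URL after "?" reaches the script in the QUERY_STRING
environment variable, while ISINDEX-style queries (no "=") were also split
into command-line arguments by servers like NCSA httpd, which is where
fixed shell argument limits could truncate long queries. A minimal modern
sketch, not the 1996 NCSA code:]

```python
import os
import sys
from urllib.parse import parse_qs

def read_query():
    """Return the decoded form data for a GET request.

    The text after '?' in the URL arrives in the QUERY_STRING
    environment variable.  For ISINDEX-style queries (no '=' in the
    string), older servers also split the words into command-line
    arguments -- the path where hard-coded shell argument-length
    limits could silently truncate long queries.
    """
    raw = os.environ.get("QUERY_STRING", "")
    if "=" in raw:
        return parse_qs(raw)   # ordinary form submission: {name: [values]}
    return sys.argv[1:]        # ISINDEX words, as the server split them

# Simulate a long GET query reaching the script via the environment,
# where no argument-length limit applies.
os.environ["QUERY_STRING"] = "term=" + "x" * 2000
data = read_query()
```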

I think we need to look more carefully at the nature of the interaction
with the CGI spec before specifying really large limits,
particularly since complex queries are a way to create long
GET URLs. We don't want to force implementations to choose
between CGI and HTTP compliance.

I also feel that if we are talking about current practice our numbers
should be grounded in reality.

-- 
    Albert Lunde                      Albert-Lunde@nwu.edu
Received on Friday, 9 February 1996 18:59:37 EST
