W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 1996

Re: don't use POST for big GET [was: Dictionaries in HTML ]

From: Larry Masinter <masinter@parc.xerox.com>
Date: Sun, 4 Feb 1996 16:11:36 PST
To: dwm@shell.portal.com
Cc: connolly@beach.w3.org, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, html-wg@w3.org
Message-Id: <96Feb4.161138pst.2733@golden.parc.xerox.com>

Since both ways of representing queries -- "GET with ? in URL" vs.
"POST with multipart/form-data" -- have the same default a priori
caching behavior (i.e., don't cache; results are unlikely to be in the
cache), how is the example a differentiator as to which is the better
representation for query requests?

> At best, to redefine long understood semantics of GET and POST would
> cause the country and its users hardship and frustration.

But there is no distinction in the "long understood semantics" (well,
not really semantics but operative behavior) between "GET with ? in
URL" and "POST", so choosing POST over "GET with ? in URL" shouldn't
cause anyone any additional "hardship or frustration". The "hardship
or frustration" is a bit of hyperbole in any case; the optimization at
stake is that, for non-cacheable requests, a hierarchical cache setup
would not send the request up the hierarchy.
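
For concreteness, the two representations of the same query might look
like this on the wire (illustrative only; the path, field name, and
boundary string are made up, and the boundary here is just the example
one from RFC 1867):

```
GET /cgi-bin/search?q=caching HTTP/1.0

POST /cgi-bin/search HTTP/1.0
Content-Type: multipart/form-data; boundary=AaB03x
Content-Length: 73

--AaB03x
Content-Disposition: form-data; name="q"

caching
--AaB03x--
```

Either way, a cache has no reason to expect the response to be reusable.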

> A new method or redefining (I read the current HTTP drafts either way
> on the issue of content with GET) GET seems to provide more compatibility.

The issue of "compatibility" is not "compatibility with the current
drafts" but "compatibility with current practice", which the current
draft (the only one, for HTTP/1.0) is an attempt to capture.

> Depending on existing proxies and how they might now handle GET with
> content-length != 0, this might just boil down to an issue between the
> sending User Agent and the server and not require WWW wide deployment before
> it can be used in applications between cooperating client/server pairs.

It's not just proxies, it's servers too. At least my reading of the
source code of the CERN server (/WWW/Daemon/Implementation/) and of
Plexus seems to indicate that they just throw away the data after a
GET, and don't even pass it along to CGIs.
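
For context on what "pass it along to CGIs" means: under the CGI
convention (as later codified in RFC 3875), a request body reaches the
script on standard input, sized by the CONTENT_LENGTH environment
variable. A minimal sketch in Python (anachronistic here, purely
illustrative):

```python
import io


def read_cgi_body(stdin, environ):
    """Read the request body the way a CGI script would:
    CONTENT_LENGTH bytes from standard input."""
    try:
        length = int(environ.get("CONTENT_LENGTH", "0"))
    except ValueError:
        length = 0
    return stdin.read(length) if length > 0 else b""


# Simulated POST: the server sets CONTENT_LENGTH and feeds the body
# to the script's stdin.
body = read_cgi_body(io.BytesIO(b"q=caching"),
                     {"REQUEST_METHOD": "POST", "CONTENT_LENGTH": "9"})

# A server that discards the body of a GET would leave CONTENT_LENGTH
# unset (or zero) and stdin empty, so the script sees no data at all.
```

So if the server drops GET data before invoking the script, there is
nothing the script can do to recover it.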

The Apache server seems to exhibit this behavior as well: if you start
to send

GET /cgi-bin/test-cgi HTTP/1.0
Content-type: text/plain
Content-length: 300

then before you can actually send any data, the server returns.

So, before you could actually introduce any content into GET requests,
you'd have to rev all the existing servers. This is what I mean when I
say that "GET with content isn't backward compatible", that is, even
though the grammar for HTTP might lead you to believe that you could
supply content with GET, it wouldn't work with most of the existing
servers. Of course, my sample could be wrong here...

Does anybody know of an HTTP server in use on the net that would
actually read the content in a GET request and pass it on to a CGI
program?

On the other hand, POST with data works against all existing servers.
An application that used <FORM METHOD=POST
ENCTYPE=multipart/form-data> would be compatible with clients that
implemented RFC 1867 if those clients also implemented typed entry in
charsets other than ASCII, and would be compatible with clients that
didn't implement RFC 1867 at least for ASCII data.
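
A sketch of the kind of body such a client would produce (field names
are invented; the boundary is the example one from RFC 1867; Python is
used purely for illustration -- the point is that each part can carry
its own charset label):

```python
def build_multipart_form(fields, boundary="AaB03x"):
    """Assemble a multipart/form-data body (RFC 1867 style) from
    (name, value, charset) triples; returns (content_type, body)."""
    parts = []
    for name, value, charset in fields:
        parts.append(b"--" + boundary.encode("ascii") + b"\r\n")
        parts.append(
            ('Content-Disposition: form-data; name="%s"\r\n' % name)
            .encode("ascii"))
        # Each part may declare its own content type and charset,
        # which is what makes non-Latin-1 text entry representable.
        parts.append(("Content-Type: text/plain; charset=%s\r\n\r\n"
                      % charset).encode("ascii"))
        parts.append(value.encode(charset) + b"\r\n")
    parts.append(b"--" + boundary.encode("ascii") + b"--\r\n")
    return "multipart/form-data; boundary=" + boundary, b"".join(parts)

ctype, body = build_multipart_form([("comment", "héllo", "iso-8859-1")])
```

A client that didn't know RFC 1867 could still send plain ASCII values;
one that did could label each field's charset explicitly.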

So, I don't know why I shouldn't recommend that form designers use
<FORM METHOD=POST ENCTYPE=multipart/form-data> for forms if they're
willing to accept non-8859-1 encodings.
Received on Sunday, 4 February 1996 16:15:11 UTC
