
Re: STATUS100 Re: Proposed resolution

From: Joel N. Weber II <devnull@gnu.ai.mit.edu>
Date: Fri, 18 Jul 1997 20:14:07 -0400
Message-Id: <199707190014.UAA12266@mescaline.gnu.ai.mit.edu>
To: gjw@wnetc.com
Cc: rlgray@raleigh.ibm.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/3816
   Date: Thu, 17 Jul 1997 15:47:13 -0700 (PDT)
   From: "Gregory J. Woodhouse" <gjw@wnetc.com>
   X-Url: http://www.wnetc.com/

   To me, it seems like the real problem is that the server has no way of
   knowing how much data to expect. Accepting a chunked PUT or POST is an
   all-or-nothing commitment. I doubt it's possible in HTTP/1.1, but it
   seems to me that the server needs to be able to indicate how much data
   it is willing to accept and then allow the client to decide whether or
   not to attempt to send the request. (A client may not know how much
   data it has to send, but it may know that it will not exceed a certain
   threshold.)
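The negotiation being asked for can be sketched in a few lines. This is a hypothetical illustration, not anything defined in the HTTP/1.1 drafts under discussion: a server inspects the declared Content-Length before committing to read the body, answering with an interim 100 or a 413. The `MAX_POST_BYTES` limit and the `preflight_response` helper are both invented for the example.

```python
# Hypothetical sketch: check a declared Content-Length against a
# per-server limit before agreeing to read the request body.
MAX_POST_BYTES = 64 * 1024  # illustrative limit, not from the message

def preflight_response(headers):
    """Decide how to answer before reading the body.

    headers: dict of request header fields (lowercase names).
    Returns the status line the server would send.
    """
    length = headers.get("content-length")
    if length is None:
        # Chunked body: size unknown up front -- the "all or nothing"
        # commitment the quoted message complains about.
        return "100 Continue"
    if int(length) > MAX_POST_BYTES:
        return "413 Request Entity Too Large"
    return "100 Continue"

print(preflight_response({"content-length": "1048576"}))
# -> 413 Request Entity Too Large
```

Note that the sketch still has nothing useful to say about chunked bodies, which is exactly the gap the quoted message points at: with no declared length, the server cannot make this decision up front.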

In general, I've been taught to write programs that don't have arbitrary
limits, so I think I would hate to write a server which places a limit
on the size of a POST request.

For a search engine, it might make sense to restrict the size of such
data; but no matter what you do, you're open to denial-of-service
attacks to some extent.
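Even a server that refuses to declare a fixed POST limit can bound its exposure while reading. A minimal sketch of that middle ground, with an invented `read_body_capped` helper and an illustrative cap (neither is from the message): accept a body of unknown length, but abort once it grows past a cumulative cap.

```python
import io

CAP = 16 * 1024  # illustrative cap, not from the message

def read_body_capped(stream, cap=CAP):
    """Accumulate a request body, aborting once it exceeds cap bytes.

    Even with a chunked body of unknown length, the server's memory
    exposure stays bounded -- a partial answer to the
    denial-of-service concern.
    """
    body = bytearray()
    while True:
        chunk = stream.read(4096)
        if not chunk:
            return bytes(body)
        body.extend(chunk)
        if len(body) > cap:
            raise ValueError("request body exceeds server cap")

# Usage: a small body is read in full; an oversized one raises.
print(len(read_body_capped(io.BytesIO(b"x" * 1000))))  # -> 1000
```

The difference from a hard POST limit is that the cap is an internal resource bound rather than a protocol promise, so it changes nothing the client can see until it is actually hit.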
Received on Friday, 18 July 1997 17:21:51 UTC

This archive was generated by hypermail 2.4.0 : Thursday, 2 February 2023 18:43:03 UTC