W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > September to December 1994

Reading Request Object Data

From: Mike Cowlishaw <mfc@vnet.ibm.com>
Date: Fri, 2 Dec 94 10:50:27 GMT
Message-Id: <9412021050.AA21555@hplb.hpl.hp.com>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Roy Fielding writes:

>>> It goes something like this:
>>>
>>>   a) If message includes Content-Length, use it.
>>>
>>>   b) If message uses an as-yet-undefined packetized Content-Transfer-Encoding,
>>>      then that encoding may define an EOF marker.
>>>
>>>   c) If message uses an as-yet-undefined packetized Content-Encoding,
>>>      then that encoding may define an EOF marker.
>>>
>>>   d) If message is of type multipart/*, the effective object body ends
>>>      when the boundary close-delimiter is reached.
>>>
>>>   e) If the connection closes, the object body has ended.
>>>
>>> Part (b) is along the lines of Dan Connolly's www-talk proposal of
>>> 27 Sep 1994 (Message-Id: <9409271503.AA27488@austin2.hal.com>).
>>
>> I'm happy with the situation for responses (connection closes is quite
>> good enough), but still very unhappy with this definition for the data
>> (object body) for requests (PUT and POST).  The steps you just
>> outlined are not defined sufficiently well to be implementable (and
>> requiring every server to implement the rather gawky multi-part stuff
>> just in case data comes in that way seems unnecessary).  Is not
>> Content-Length essentially always present, in current practice, for
>> PUT and POST?  In which case, why require more than this, or
>> alternatives, unless truly necessary?
>
>The intention is to move away from "connection closes is quite good enough"
>so that future versions of HTTP can support a connection keep-alive.
>Multipart types have always been possible to implement -- it's just that
>server and client authors have neglected (in the past) to read the MIME
>spec and thus understand that these things exist and how they are implemented.
>Including these descriptions in the spec is a way to prod people into
>implementing something that should have been supported long ago.
>
>Whether they stay in the spec, get moved to an appendix, or get shoved
>off to HTTP/1.1 is a question for the group to decide.

Thanks for the response .. and I applaud the intent to move away from
'connection closes is enough' in the future.

However, you really haven't answered my questions for today that would
enable me to implement a server from the HTTP 1.0 specification.  In
particular, what is a conforming HTTP 1.0 server required to implement
in order to read the object data for PUT and POST?  Specifically:

(a) Content-Length: in bytes [I assume this one is required]

(b) Packetized C-T-E: might well be useful, but only if the CTEs are
    defined.  Do I have to support PC-DOS ZIP?  Do I have to support
    Unix GZIP?  If not defined and required, then clients cannot use
    them, and hence they are not useful.  [Also, one would need a
    statement about how (a) and (b) interact.]

(c) C-E: Ditto -- only plain Binary is even suggested, at present, so
    there isn't anything to implement, I think?

(d) Multipart: Is this current practice?  If not, I trust that it's
    not required in 1.0.

(e) Closed connection: [Doesn't apply for Requests.]

From the above, my inference is that an HTTP 1.0 server need only
implement (a).  Any more would be wasteful processing.  Am I correct?

Thanks -- Mike
Received on Friday, 2 December 1994 04:02:43 EST
