
Re: CONNECT message including tunneled data

From: Jamie Lokier <jamie@shareable.org>
Date: Thu, 31 Jan 2008 08:36:44 +0000
To: Robert Siemer <Robert.Siemer-httpwg@backsla.sh>
Cc: ietf-http-wg@w3.org
Message-ID: <20080131083644.GA4003@shareable.org>

Robert Siemer wrote:
> That raises the question: may a successful response overtake the 
> request? The spec's only example of that is a POST-rejecting one. And 
> what about a response that keeps pace with the request? For example an 
> "echo POST", which can answer as far as the request has got. - If the 
> request terminates, the response will, too.
> I see no restrictions in HTTP for that behaviour; does anybody else?

I see no restrictions in HTTP, provided nobody minds being unable to
send an error response if a later part of the request causes one.

I investigated actual client behaviour a few years ago, because I
was interested in making a server which passes the request data
through a filter and streams the filtered result back as the response,
and also a server which returns a progressive HTML page showing
"upload progress".

What I found was many different client behaviours:

  1. A few clients can handle overlapping non-error responses and requests.

  2. Many clients will not read the response until they have
     transmitted the whole request - at least enough that the request is in
     the TCP/IP system.  (Therefore the difference is only detected with
     large enough requests).

  3. Some clients will abort the request on receiving part of the
     response.  This is recommended for error responses, but these
     clients do it for "200 OK" as well.  My notes say Mozilla 1.2
     (though I didn't test that myself), plus some version of cURL.

  4. Most clients distinguish error responses: they abort if it's an
     error response, and show behaviour 1 or 2 for non-errors.

This means a general-purpose server should not opportunistically
transmit the response early, as some clients will abort the request.
However, if it can determine that the client won't abort - or if such
clients are long gone - a server can transmit the response early, but
it must be prepared to avoid deadlock by reading the whole request
while the response transmission is blocked.
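To make that concrete, here is a rough sketch (my own illustration,
not something from my old tests) of the deadlock-avoidance described
above: an identity-"filter" server that keeps draining the request
with select() even while its own response send would block.  The
socketpair() demo at the end stands in for a client exhibiting
behaviour 1.

```python
import select
import socket
import threading

def echo_stream(conn: socket.socket) -> None:
    """Stream the request body back as the response, never blocking on
    send() while unread request data could fill both socket buffers."""
    conn.setblocking(False)
    pending = b""          # response bytes not yet sent
    request_done = False
    while not request_done or pending:
        want_write = [conn] if pending else []
        readable, writable, _ = select.select([conn], want_write, [], 5)
        if readable:
            data = conn.recv(65536)
            if data:
                pending += data        # the "filter" is identity here
            else:
                request_done = True    # client finished sending
        if writable:
            sent = conn.send(pending)
            pending = pending[sent:]
    conn.close()

# Demo over a socketpair: the "client" writes a large request while
# concurrently reading the echoed response (behaviour 1 above).
server_sock, client_sock = socket.socketpair()
t = threading.Thread(target=echo_stream, args=(server_sock,))
t.start()

payload = b"x" * 1_000_000
received = bytearray()

def reader():
    while True:
        chunk = client_sock.recv(65536)
        if not chunk:
            break
        received.extend(chunk)

r = threading.Thread(target=reader)
r.start()
client_sock.sendall(payload)
client_sock.shutdown(socket.SHUT_WR)
r.join()
t.join()
print(len(received))   # 1000000 - full request echoed back
```

A naive version that called a blocking send() with the response while
the client was still sending the request would deadlock once both
TCP buffers filled - which is exactly the failure mode a streaming
server has to design around.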

For "upload progress" in a web browser, I found that people are using
a different method: separate IFRAMEs for the upload and progress bar,
to put the streamed response on a different HTTP connection than the
upload.  Even then, some web browsers will not display anything while
an upload is in progress (very annoying for large uploads).

> CONNECT could be such a "catch up" method.

I agree with you in principle: I always thought that would be the
cleanest semantics for CONNECT, and it would remove a special case in
the message boundary rules.  It would also mean a CONNECT could end
without closing the connection.

Provided the client and server agree, I don't see a problem with
overlapping request and response in the HTTP specs, and potentially
unbounded streaming.  You can already do it today with POST.

(In fact I already do something like that in the two-way + multiplex
extension of HTTP used in my own application.)

However, keep in mind it requires the client and server to handle
chunked and other transfer-encodings (unless they agree not to use
any), which means they cannot simply set up the CONNECT and then step
out of the way, passing the socket directly to an application (plus
the initially buffered data).
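For illustration, here is roughly what that framing involves - a
minimal hand-rolled chunked encoder and decoder (my sketch, not
production code).  This is the sort of work the endpoints would have
to keep doing for the tunnel's lifetime, instead of handing the raw
socket straight to the application:

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one chunk: hex length, CRLF, payload, CRLF."""
    return b"%X\r\n" % len(data) + data + b"\r\n"

def last_chunk() -> bytes:
    """A zero-length chunk terminates the body (no trailers here)."""
    return b"0\r\n\r\n"

def decode_chunks(stream: bytes) -> bytes:
    """Undo the framing; assumes a complete, well-formed body."""
    body = b""
    pos = 0
    while True:
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol], 16)
        if size == 0:
            return body
        start = eol + 2
        body += stream[start:start + size]
        pos = start + size + 2   # skip payload and its trailing CRLF

wire = encode_chunk(b"hello ") + encode_chunk(b"world") + last_chunk()
print(decode_chunks(wire))   # b'hello world'
```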

-- Jamie
Received on Thursday, 31 January 2008 10:07:53 UTC
