
RE: [Fwd: I-D ACTION:draft-decroy-http-progress-00.txt]

From: Henrik Nordstrom <henrik@henriknordstrom.net>
Date: Mon, 12 Feb 2007 23:25:15 +0100
To: Adrien de Croy <adrien@qbik.com>
Cc: ietf-http-wg@w3.org
Message-Id: <1171319115.23049.139.camel@henriknordstrom.net>
On Tue, 2007-02-13 at 08:51 +1300, Adrien de Croy wrote:

> The 100 continue is the intended solution to the problem, and whilst I 
> can see
> how it could be effective in direct client-server communications, there are
> issues with it when there are intermediaries or delays.

Not really. 100 Continue is end-to-end, not hop-by-hop. Just as your
proposed "Defer" status.

Your draft adds the new 1xx response not for flow control but for
terminating flows without terminating the connection. That is a soft
abort of request entities, not flow control.

The proposed approach in the draft has at least two major flaws that
make it unsuitable:

a) There is no guarantee the client will actually wait for the 100
Continue response before sending the request body, especially when
there are intermediaries involved which may significantly delay the
request or response. The server may therefore receive request body data
even after sending the "abort" signal, and that data may then get
misread as a different request (e.g. a PUT of an object containing an
HTTP request).

b) And since you are significantly changing the message formatting
rules based on end-to-end communication rather than hop-by-hop, you are
guaranteed to cause problems when intermediaries are involved. For
example, proxies MUST forward unknown 1xx responses, and this would
change the message format under the feet of the proxy without it
knowing, causing a real mess for the proxy.

Both of the above problems are related, but arise from different
aspects of HTTP and at different locations in the request forwarding
path.
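To make flaw (a) concrete, here is a small Python sketch (the naive
parser and the byte strings are made up for illustration, not taken
from any real server) of how leftover request body bytes can be misread
as a new request once the server has stopped counting them:

```python
# Hypothetical sketch of the framing hazard of flaw (a). The server has
# sent its "abort" 1xx, but the client, not waiting, has already pushed
# the request body onto the wire. If the server then goes back to
# reading the connection expecting a fresh request, body bytes that
# happen to look like HTTP get parsed as one.

def parse_request_line(stream: bytes) -> str:
    """Naive parser: treat the first CRLF-terminated line as a request line."""
    line, _, _ = stream.partition(b"\r\n")
    return line.decode("ascii")

# Body of a PUT that itself contains an HTTP request (e.g. uploading a
# captured HTTP transcript). These bytes are still in flight when the
# server sends its abort signal.
body_still_in_flight = b"GET /admin HTTP/1.1\r\nHost: victim\r\n\r\n"

# A server that stopped counting body octets now misreads the body as a
# brand-new request -- exactly the hazard flaw (a) warns about.
misread = parse_request_line(body_still_in_flight)
print(misread)  # -> GET /admin HTTP/1.1
```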

To guard against this you could in theory add a new "Expect: 1xx Defer"
condition with the added side effect that the client guarantees it will
not transmit the request body until a 100 Continue is seen, and will
instead close the connection and resend the request if 100 Continue is
not seen in a timely fashion.
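A rough sketch of the client-side guarantee such an "Expect: 1xx Defer"
condition would demand (the header name comes from the hypothetical
above; the function, timeout, and buffer sizes are my own illustration,
not any published behaviour):

```python
import socket

# Sketch of the guarantee a hypothetical "Expect: 1xx Defer" client
# would have to give: never transmit the body until 100 Continue is
# seen; on timeout or any other reply, close and let the caller retry.

def send_with_defer_expectation(sock: socket.socket, head: bytes,
                                body: bytes, timeout: float = 5.0) -> bool:
    """Return True if the body was sent, False if we gave up (caller retries)."""
    sock.sendall(head)           # headers only, incl. Expect: 1xx Defer
    sock.settimeout(timeout)
    try:
        reply = sock.recv(4096)  # wait for an interim response
    except socket.timeout:
        sock.close()             # guarantee: never send the body blind
        return False
    if reply.startswith(b"HTTP/1.1 100"):
        sock.sendall(body)       # server said go ahead
        return True
    sock.close()                 # any other status: abort this attempt
    return False
```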


In my eyes, the chunked encoding approach so far looks like the most
promising way to solve the authentication issue in a reasonable manner,
even if it means losing the information about the request size. And
losing that information is frankly the only real drawback of the
approach (broken implementations set aside).

> a. issues when a proxy is connecting to an HTTP/1.0 server.  Unless it 
> knows
> apriori that the server is HTTP/1.1 compatible it can't send chunked 
> resource anyway.

Correct, but not really a big problem. 100 Continue solves the ugly
part of this for authentication, as it allows the client to reasonably
probe the server without sending the request body before it knows the
request will be accepted by the server.

This whole thing is only a significant problem for NTLM (and Negotiate)
authentication, as the client cannot close the connection on
authentication challenges and therefore MUST transmit the request body,
which will be dumped to the bit bucket by the server.


On the initial request, when the HTTP level of the next hop is unknown,
the client must use Content-Length, and will learn from the response
whether the path is HTTP/1.1. If it is not, neither approach to
short-circuiting the request body can be used, and the client MUST
resend the request body as is done today.

To avoid transmitting the request body on the initial request, the
client has to close the connection if it sees an authentication
challenge or other error.

On the next attempt at sending the request (possibly with updated
credentials), the client should connect to the same next hop and assume
an HTTP/1.1-capable path if the last response indicated one. Under
those conditions it knows for certain that the next hop is HTTP/1.1,
and quite likely the whole request path is as well.
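The retry policy above could be sketched along these lines (the cache
and function names are illustrative, not from any existing client):

```python
# Minimal sketch of the decision described above: remember, per
# next-hop, whether the last response indicated HTTP/1.1, and only then
# trust chunked encoding on a retry.

known_http11: set[str] = set()   # next-hops whose last response was HTTP/1.1

def note_response(next_hop: str, status_line: str) -> None:
    """Record what the last response taught us about this next hop."""
    if status_line.startswith("HTTP/1.1"):
        known_http11.add(next_hop)
    else:
        known_http11.discard(next_hop)

def may_send_chunked(next_hop: str) -> bool:
    """Safe to elide Content-Length only once the next hop is known 1.1."""
    return next_hop in known_http11

note_response("proxy.example:3128", "HTTP/1.1 407 Proxy Authentication Required")
print(may_send_chunked("proxy.example:3128"))   # -> True
print(may_send_chunked("old.example:80"))       # -> False
```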

As long as it knows the next hop is HTTP/1.1, it is always safe to send
chunked encoding. In the worst case the request may get aborted with a
4xx, forcing the client to fall back to Content-Length and resend the
request body on each round trip. And chunked encoding is a requirement
for the client to clearly indicate to the next hop that the body has
been terminated, avoiding the problems indicated above.
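For reference, a chunked body can be terminated cleanly at any point
with the zero-length last chunk; a toy sketch (helper names are mine):

```python
# Sketch of why chunked encoding gives a clean early stop: the client
# can mark the body as complete at any moment with the zero-length
# final chunk, so the next hop always knows where the message ends.

def chunk(data: bytes) -> bytes:
    """One chunk: hex size line, data, CRLF."""
    return b"%X\r\n%s\r\n" % (len(data), data)

def abort_body_early(sent_so_far: list[bytes]) -> bytes:
    """Terminate a chunked body immediately: last-chunk, empty trailer."""
    return b"".join(sent_so_far) + b"0\r\n\r\n"

wire = abort_body_early([chunk(b"partial up"), chunk(b"load...")])
print(wire)  # -> b'A\r\npartial up\r\n7\r\nload...\r\n0\r\n\r\n'
```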

To avoid frequent 411 responses, it is probably best for the client to
first send the request with Content-Length and Expect: 100-continue,
unless it is known both that the last path used to that server was
fully HTTP/1.1 and that authentication is quite likely needed to finish
the request. Yes, this costs one TCP connection for the initial probe,
but for most uses the probe will not be needed, as both the server and
the next hop are already known to the client.
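A minimal sketch of what that initial probe request could look like on
the wire (any header beyond the Content-Length and Expect named above
is illustrative):

```python
# Build the probe request suggested above: Content-Length plus
# Expect: 100-continue, so an authentication challenge (or an HTTP/1.0
# hop) is discovered before any request body is transmitted.

def probe_request(method: str, target: str, host: str, body_len: int) -> bytes:
    """Headers-only first attempt; the body follows only after 100 Continue."""
    return (f"{method} {target} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {body_len}\r\n"
            f"Expect: 100-continue\r\n"
            f"\r\n").encode("ascii")

head = probe_request("PUT", "/upload", "origin.example", 1048576)
print(head.decode("ascii"))
```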

> Clients are in the same boat, and that's why I think there aren't any 
> (that I have found) that send chunked data, since they would need to
> maintain a database of internet webservers to keep track of what
> server supported chunking or  not - a fairly low-return on investment.

Clients don't use chunked encoding today, as there hasn't been much
benefit for them in doing so.

The NTLM authentication mess is a good reason to use chunked encoding,
and to bother implementing the HTTP/1.1 probing needed to do it safely.


Just because the "Defer" status code initially looks like it may take
fewer lines of code to implement in existing products does not make a
broken approach a good one. In the end you'll end up with about the
same amount of code, plus much stricter requirements on when it may be
used, since it needs all intermediaries to support the new feature to
be used reliably, and it's also plagued by exactly the same "next-hop
status unknown" issues.

> b. loss of information on which to base policy decisions.  Unless you 
> can set the
> content-length field as well?

You can't.

> c. implementation complexity -> compatibility issues with non-compliant 
> clients
> servers and intermediaries.  An additional status code for a client to 
> see is fairly
> low-impact, compared to servers and proxies suddenly seeing chunked 
> resource
> from a client.

True. As chunked encoding is rarely used in requests, it hasn't been
tested much, and there quite likely are broken implementations out
there. But I think it's safe to say that most, if not all, HTTP/1.1
implementations will simply abort the request with a 411, at least for
a PUT/POST request.
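Assuming that behaviour, the client-side fallback is tiny; a
hypothetical sketch (function name and return shape are mine):

```python
# Hypothetical fallback matching the expectation above: an HTTP/1.1
# server that cannot handle a chunked request body rejects it with
# 411 Length Required, so a single retry with Content-Length suffices.

def next_attempt(last_status, body: bytes) -> dict:
    """Pick request framing headers for the next try."""
    if last_status == 411:                      # chunked was refused
        return {"Content-Length": str(len(body))}
    return {"Transfer-Encoding": "chunked"}     # default: chunked

print(next_attempt(None, b"data"))   # -> {'Transfer-Encoding': 'chunked'}
print(next_attempt(411, b"data"))    # -> {'Content-Length': '4'}
```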

> Protocols have had 2 signals for flow control since year dot.  RS232 had 
> RTS/CTS
> Xmodem had X-on/X-off.

I don't see how this compares to your proposed extension. The proposed
"Defer" status is not flow control; it's an abort condition.

X-on/X-off in HTTP is "Expect: 100-continue" and "100 Continue", plus
all the transport-level flow control on top of which HTTP runs.

Regards
Henrik

Received on Monday, 12 February 2007 22:25:26 GMT
