Re: [Fwd: I-D ACTION:draft-decroy-http-progress-00.txt]

Actually, there could still be a race condition that needs considering, as 
Jamie pointed out, although the chances are very small.

There is a race condition if the proxy sends the client anything other 
than a 100 Continue after having sent a 102 Wait, and the client has 
meanwhile given up waiting and started sending the request body.  The 
proxy has then sent an interim response AND another response eliciting a 
reply from the client, while the client has started to send the request 
body.

Until the proxy has sent a "100 Continue" to a client that advertised 
flow control, it knows that anything it receives from the client after 
the initial request headers is request message body.  A client would 
only send the body early if it gave up waiting for a response from the 
proxy, which could happen over high-latency or slow links.  As Jamie 
says, HTTP must work over all sorts of links, not just fast, low-latency 
ones.

So there is a race condition whenever the proxy gets any other sort of 
response from upstream, or deems it necessary to send anything other 
than a "100 Continue" (after having sent a "102 Wait Please").  The 
timer proposed in the request tag is of little use, since the proxy 
doesn't know how long the client has already been waiting; it doesn't 
even know how long it took to be notified of the incoming connection by 
its TCP stack (a highly loaded server can take a while between the SYN 
packet, SYN-ACK, and FD_ACCEPT on a socket).

For example:

1. Client connects to the proxy and sends a request advertising flow control.
2. Proxy responds with a 102 Wait Please, but it takes a while to reach the client.
3. Proxy connects upstream and gets a 401 from the server.
4. Proxy sends the 401 back to the client.

At this stage, the client has received a 102 and a 401.  However, if the 
102 took too long to arrive, the client may already have started sending 
the request body.

If the client starts sending the request body AND the proxy starts 
receiving it before the proxy sends the 401 back, we're fine: the proxy 
knows it hasn't sent the client anything that warrants a reply, so the 
client must have timed out, and the data being received must be request 
body.
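The proxy-side rule above can be sketched as follows.  This is a minimal illustration, not anything from the draft; the function name and return values are hypothetical, and it assumes the client advertised flow-control support:

```python
# Hypothetical sketch of the proxy-side disambiguation rule, for a client
# that advertised flow-control support.  Names are illustrative only.

def classify_client_bytes(sent_100_continue: bool,
                          sent_final_response: bool) -> str:
    """How should the proxy interpret bytes arriving from the client
    after the initial request headers?"""
    if sent_100_continue:
        # The proxy invited the body, so these bytes are request body.
        return "request-body"
    if not sent_final_response:
        # The proxy has sent nothing (or only interim 1xx responses), so
        # nothing it sent warrants a reply: the client must have timed
        # out, and the bytes are request body.
        return "request-body"
    # The proxy sent a final response (e.g. a 401) that may have crossed
    # the client's timeout on the wire: the bytes could be late request
    # body or a fresh request.  This is the race window.
    return "ambiguous"
```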

If the proxy has already sent the 401, or any other non-interim 
response, then there is some possible doubt.

So there is a small window between the proxy sending the 102 and the 401 
where, if the client gives up waiting, there could be a problem.

OK, so what does this mean?

For this to happen, the latency between client and proxy must be high, 
and the latency between proxy and server must be low.  This is common in 
reverse-proxy scenarios: e.g. where it takes at least twice as long for 
the 102 to get back to the client as for the upstream connection to be 
made and that server to respond with an auth challenge.
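Working through illustrative numbers makes the shape of the race clear.  All the latencies below are assumptions chosen for the example, not figures from the draft:

```python
# Illustrative latencies only -- none of these figures come from the draft.
one_way_client_proxy = 1.5   # seconds, high-latency client<->proxy link
upstream_delay = 0.2         # fast upstream connect + 401 auth challenge
client_wait_timeout = 2.0    # how long the client waits for a 1xx before
                             # sending the request body anyway

# Client sends the request at t = 0.
t_102_at_client = one_way_client_proxy * 2               # request up + 102 back: 3.0 s
t_401_sent_by_proxy = one_way_client_proxy + upstream_delay   # 1.7 s
t_body_starts = client_wait_timeout                      # 2.0 s, client side

client_times_out_first = t_body_starts < t_102_at_client       # True
body_reaches_proxy_at = t_body_starts + one_way_client_proxy   # 3.5 s
proxy_in_doubt = t_401_sent_by_proxy < body_reaches_proxy_at   # True

# Both conditions hold: the client starts the body before the 102
# arrives, and the proxy has already sent a final (non-interim) response.
race_possible = client_times_out_first and proxy_in_doubt
```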

The bigger the gap between a proxy sending a 102 and any subsequent 
response (such as a 401 or 302), the smaller the possibility of this 
occurring.  Obviously we don't want to introduce arbitrary delays, 
although the section in RFC 2616 on 100 Continue does just that.

OK, so I think the race condition is a very low likelihood, but a 
non-zero possibility.  I think the chance of such a race condition, 
coupled with the request body happening to look like a well-formed 
request, is worse than your chance of being hit by lightning twice.  But 
anyway.

The entity that can reliably detect the race condition is the client.  
If it receives a 102 Wait Please after having advertised flow control, 
and after it has already started sending the message body because of a 
timeout, it would need to abort the connection.

Considering we are talking about a scenario where the client is trying 
to send a message body, an abort and retry (now in the knowledge that 
the proxy supports flow control) is still a great deal better than the 
current situation.
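That client-side rule can be sketched in a few lines.  The names and return values here are hypothetical, and the retry detail is my reading of the paragraphs above:

```python
# Hypothetical sketch of the client-side detection rule.  A flow-control-
# aware client that sees a 102 arrive AFTER its timer expired and body
# transmission began knows the proxy may misparse the stream, so it
# aborts and retries -- now knowing the proxy supports flow control.

def client_action(advertised_flow_control: bool,
                  body_started_on_timeout: bool,
                  received_102: bool) -> str:
    """Decide what the client does when responses arrive from the proxy."""
    if advertised_flow_control and received_102 and body_started_on_timeout:
        # The race occurred: abort the connection and retry the request,
        # this time waiting indefinitely for a 100 Continue.
        return "abort-and-retry"
    # No race detected: carry on with the exchange as normal.
    return "continue"
```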

Regards

Adrien


Adrien de Croy wrote:
>
>
>
> Jamie Lokier wrote:
>> Adrien de Croy wrote:
>>  
>>> case 1, proxy requires auth, server does not.
>>>
>>> 1. Client sends request headers only (as per current RFC2616 
>>> suggestions) and goes into a short wait state
>>> 2. proxy sees that there will be a request body, so immediately 
>>> sends "HTTP/1.1 1xx wait please"
>>> 3. client terminates timer for when it would otherwise send request 
>>> body.  It does not send request body
>>> 4 proxy then sends "HTTP/1.1 407 auth required"
>>> 5 client establishes auth credentials with proxy, still not sending 
>>> request body
>>> 6 proxy connects to upstream server.
>>
>> But, if I've understood that, it does not work.
>>
>> This is what I think happens in your scheme:
>>
>>  1. Client sends request headers only.
>>  2. Proxy sends "HTTP/1.1 1xx wait please".
>>  3. Client receives "HTTP/1.1 1xx wait please" and does not send 
>> request body.
>>  4. Some time later, proxy sends "HTTP/1.1 407 auth required"
>>  5. Client sends new request headers to establish auth credentials...
>>
>> But there is a problem.  What happens if the "HTTP/1.1 1xx wait
>> please" is not received quickly enough by the client?  Current clients
>> will time out and send the request body:
>>
>>  1. Client sends request headers only.
>>  2. Proxy sends "HTTP/1.1 1xx wait please", but this takes a few
>>     seconds to transit the network due to delays.
>>  3. Client's "100 continue" timer times out, and it begins sending the
>>     request body.
>>  4. Some time later, client receives "HTTP/1.1 1xx wait please".
>>
>> The problem is, how shall the proxy parse the data stream from the
>> client?  In the first case, the data stream consists of request
>> headers indicating a request body, but no body, followed by more
>> request headers.  In the second case, the data stream consists of
>> request headers indicating a request body and at least part of a
>> request body.
>>
>> How can the proxy detect the start of the second request in each case?
>> It's not possible to distinguish request headers from the start of a
>> request body.
>>   
> hmm, good point.  There is a very small chance that the request body 
> could look like a normal request.
>
> Normal scenario is proxy on same LAN as client, so delays would be 
> very small.  If the proxy is
> across the internet, then delay would be in line with RTT to connect 
> to the proxy.  But still it's always
> better to have a completely deterministic method - with no room for 
> guessing.
>
> The problem you raise is easily fixed though, if we require the client 
> to advertise support for flow control
> with a tag in say the Connect header (or maybe another tag).
>
> In that case, the proxy only sends the 102 if the client advertised 
> support for it.  I wouldn't suggest using
> the expects header, since the proxy or server would then be required 
> to reject the request if it didn't understand
> rather than just ignore the tag (no graceful fall-back).  The 
> advertisement could also contain info about how
> long the client will wait before sending the message body if there is 
> no 100 continue or 102 please wait.
>
> It needs to be an advertisement that is safe to send to any server 
> without causing a rejection.
>
>> This is the reason why a client is only allowed to send a request
>> body, or abort the connection.  And this is why it is said the only
>> solutions are to use chunked request body (which is always sent, but
>> can be truncated), or for the client to send a request header which
>> means "I will definitely not send a request body unless I receive 100
>> Continue".  But both of those will fail with some servers and some 
>> proxies.
>>   
> yes, I think the expects header has some resistance to 
> implementation.  Due mainly to
>
> 1. difficulty in clients maintaining knowledge about whether servers 
> support 100 continue or not
> 2. possibility that any request sent with a expects header will be 
> rejected purely because of that header.
> 3. lack of support from HTTP/1.0 servers
>
> If all the web clients I've investigated this for actually used an 
> Expects header, this would be a moot point
> but I've never seen one sent... ever.
>
> Maybe the IE and Firefox teams are just a bit paranoid :)
>
>>  
>>> I should really draw up a flow chart of all this.  from the client's 
>>> point of view, it sends a request, and if it receives a 102
>>> wait please, it will then receive any number of:
>>>
>>> a. a 100 continue
>>> b. an auth challenge
>>> c. some failure code indicating the request is denied.
>>> d. connection closed (some failure condition)..
>>>
>>> until it has received a 100 continue however, it won't send the 
>>> request body.
>>>     
>>
>> But what about the case when "102 wait please" is sent by the proxy or
>> server, but isn't received by the client fast enough to make it wait?
>>
>> -- Jamie
>>   
>

Received on Wednesday, 14 February 2007 22:27:39 UTC