RE: usability of 100-continue, was: HTTP2 Expression of Interest : Squid

It's not broken as long as we are OK with the connection having to be closed upon expectation rejection (for a non-chunked entity-body).

Based on my understanding, the client must close the connection if the server replied with a 4xx [1] against a non-chunked request.  That makes sense, as it ensures message integrity on a persistent connection [2].  Essentially, we resolve the ambiguity by having the server assume that the next byte (after the CRLFCRLF) will be entity-body.  If the client timed out and blasted the entity-body, then the server can drain it and drop it on the floor so as to start parsing the next request.  If the client didn't time out, then it can close the connection and start a new one for the next request (or perhaps even deliberately send the entity-body, if it believes sending the irrelevant entity-body is cheaper than warming a new connection).  If the entity-body was chunked, then the client can 0-terminate it and start the next request [1].
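The client-side choices above can be sketched as a small decision function.  This is purely my own illustrative model (the function and its return values are hypothetical names, not from any spec or implementation):

```python
def client_next_step(got_100, rejected_4xx, body_is_chunked, body_already_sent):
    """Illustrative model of the client's options after sending
    'Expect: 100-continue' headers on a persistent connection."""
    if got_100:
        # Server accepted the expectation; proceed normally.
        return "send-body"
    if rejected_4xx:
        if body_is_chunked:
            # A chunked body can be 0-terminated, so the server knows
            # exactly where it ends and the connection stays usable.
            return "send-zero-chunk-then-reuse-connection"
        if body_already_sent:
            # Client timed out and blasted the body; the server must
            # drain and discard it before parsing the next request.
            return "connection-reusable-after-server-drains"
        # Non-chunked body, not yet sent: the server assumes the next
        # byte is entity-body, so the client must either close or
        # deliberately send the (now irrelevant) body anyway.
        return "close-or-send-body-anyway"
    # No 100 and no rejection yet: keep waiting, or time out and send.
    return "keep-waiting-or-timeout"
```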

My concern was whether using 100-continue as the mechanism for an auth-probe is really the best approach.  It might make the world simpler if, in HTTP/2.0, 100-continue didn't have timeouts, which would also increase connection persistence (because after the server sends a 4xx, it would deterministically know that the next received byte is the start of the next request, and the client would not be forced to close the connection).  Sending a POST with "Expect: 100-continue", getting a 401 back, and then having to close the connection seems poor [3].
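Concretely, the sequence I find poor looks roughly like this (an illustrative trace with abbreviated, made-up headers, not captured from a real exchange):

```
C: POST /upload HTTP/1.1
C: Host: example.com
C: Content-Length: 1048576
C: Expect: 100-continue
C:
S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Negotiate
S:
   (the non-chunked body was never sent, so the client must close
    this connection and open a fresh one to retry with credentials)
```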

I don't think we need to design a solution right now, but instead just realize that some solution for auth-probing might be a good idea.

[2] "In order to remain persistent ...":
[3] I realize this may not be relevant if initiating a new request/stream/channel/etc is super cheap.

From: Roberto Peon []
Sent: Tuesday, July 17, 2012 2:13 PM
To: Julian Reschke
Cc: Willy Tarreau; Osama Mazahir; Gabriel Montenegro; Adrien de Croy; Poul-Henning Kamp; Amos Jeffries;
Subject: Re: usability of 100-continue, was: HTTP2 Expression of Interest : Squid

Clients want to have small timeouts when possible, partially because of poor behavior of NATs when they lose the mapping and begin black-holing traffic.
It's all a race.

On Tue, Jul 17, 2012 at 2:04 PM, Julian Reschke wrote:
On 2012-07-17 22:59, Willy Tarreau wrote:
On Tue, Jul 17, 2012 at 10:31:08PM +0200, Julian Reschke wrote:
On 2012-07-17 21:45, Osama Mazahir wrote:
As it is currently, 100-continue is problematic.  The situation is
tricky because the client is not forced to wait for the 100/417/4xx
(i.e. client is allowed to timeout and send the entity body).  Thus, the
server does not have a deterministic way to know if the next byte after
the double CRLF is the first byte of the next request or the first byte
of the entity body (of the initial request).  This results in
connections getting closed in various edge/error cases.

100-continue is almost there but if we wanted to use it in a robust
manner in HTTP2 then I think we would have to revisit its specification.

Well, we are revising RFC 2616, and if something is broken here we
should consider fixing it. Or, minimally, document the problem.

If I understand correctly, this will happen if the client sends "Expect:
100-continue", the server is slow to return an error status, and the
client decides to give up waiting for the 100 status, and continues?

I don't see how it is possible to send the next request without first sending
the entity body; the message is not complete until it has been sent as a
whole. The problem could only happen if the server wished to reject the
expectation (4xx).

Exactly. So why, *in practice*, would it take the server so long to return the 4xx?

(Just trying to understand whether this is a problem in practice, and if it is, what we could do about it -- recommend a minimal timeout?)

Best regards, Julian

Received on Tuesday, 17 July 2012 22:33:35 UTC