
Re: 100 Continue and Expects

From: Henrik Nordstrom <henrik@henriknordstrom.net>
Date: Sun, 19 Jul 2009 02:03:28 +0200
To: Adrien de Croy <adrien@qbik.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <1247961808.32063.23.camel@localhost.localdomain>
On Sun, 2009-07-19 at 10:16 +1200, Adrien de Croy wrote:

> I guess I'm mixing auth with 100-continue and expects.  The reason for 
> this is that auth is a frequent reason why it may be inappropriate to 
> send the full body.
> 
> So in my mind they are strongly linked.  In fact apart from redirects, 
> auth is the only case I can think of.

Connection-oriented auth is the only case where request bodies with
Content-Length are a significant problem, as the client can not abort
the request and continue the auth process on the same connection.

For all other cases it's always possible to abort the request and
continue on another connection. Redirections, auth, errors, whatever.

If chunked encoding is used, the request can always be aborted
mid-flight without closing the connection, so it is not a problem.
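To illustrate (a minimal sketch of the chunked transfer-coding framing; the function names are illustrative, not from any real library): a chunked body is a series of size-prefixed chunks terminated by a zero-size last-chunk, so the sender can stop early simply by emitting the terminator and the connection stays in sync:

```python
def frame_chunk(data: bytes) -> bytes:
    """Frame one chunk: hex size, CRLF, chunk data, CRLF."""
    return b"%x\r\n" % len(data) + data + b"\r\n"

LAST_CHUNK = b"0\r\n\r\n"  # zero-size chunk: ends the body

def send_body(send, chunks, abort_after=None):
    """Send a chunked body via send(); optionally stop early after
    abort_after chunks.  Either way the body ends with the last-chunk,
    so the connection remains usable for further requests."""
    for i, chunk in enumerate(chunks):
        if abort_after is not None and i >= abort_after:
            break
        send(frame_chunk(chunk))
    send(LAST_CHUNK)
```

Aborting after one of two chunks still yields a well-formed (if truncated) body, e.g. `5\r\nhello\r\n0\r\n\r\n`, rather than a half-sent Content-Length body that forces a connection close.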


> Easier said than done.  When people want to use NTLM, try getting them 
> to use something else.
> 
> Obviously digest is the other option.

Digest is surely one option.

Another is getting a new secure, message-oriented HTTP auth scheme
deployed. Sure, it will take some time, but it's certainly not
impossible.

A suggestion for such an auth scheme, if someone wants to work on fixing
HTTP authentication, is something session-key oriented with pluggable
auth methods, allowing for NTLM/Kerberos/plain/whatever when initiating
the session. I.e. in some ways similar to Digest MD5-sess but somewhat
decoupled from the actual authentication... (and also using more secure
hashing).
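For reference, Digest's MD5-sess variant (RFC 2617) derives a per-session secret once from the credentials and the nonces, and only that secret is then needed per request. A sketch of that construction, plus a SHA-256 variant standing in for "more secure hashing" (an assumption here; a SHA-256 Digest profile was only standardized much later, in RFC 7616):

```python
import hashlib

def ha1_md5_sess(user: str, realm: str, password: str,
                 nonce: str, cnonce: str) -> str:
    """Session secret as in RFC 2617 MD5-sess:
    H(H(user:realm:password) ":" nonce ":" cnonce)."""
    inner = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return hashlib.md5(f"{inner}:{nonce}:{cnonce}".encode()).hexdigest()

def ha1_sha256_sess(user: str, realm: str, password: str,
                    nonce: str, cnonce: str) -> str:
    """Same construction with a stronger hash (illustrative only)."""
    inner = hashlib.sha256(f"{user}:{realm}:{password}".encode()).hexdigest()
    return hashlib.sha256(f"{inner}:{nonce}:{cnonce}".encode()).hexdigest()
```

The point of the construction is that the long-term password only enters the inner hash; everything after session setup works from the derived session key, which is what lets the initial key exchange be pluggable.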

> OK, that makes sense thanks.  But does that mean the client already gave 
> up waiting?

Yes, as otherwise it is not allowed to send any of the request body.

> > and servers not implementing HTTP for those scripts..
> >   
> OK.  Some server architectures make this difficult for them, I imagine 
> it's pretty wide-spread, and depends on the scripts.

I imagine those are actually getting more and more rare. A server which
runs scripts without parsing the response and framing it as an HTTP
response is an easy target for response-splitting attacks, with their
associated cache-poisoning effects.

> A server needs to parse and fix script output to fix this.

Yes.

And a server using CGI always does, as the CGI script's output is not HTTP.


> > There are quite a few using it today. And many of those aren't
> > prepared to deal with expectation failures... (learnt the hard way
> > when implementing Expect...)
>    
> that puts the proxy in a difficult position.  Is it allowed to send the 
> 100 continue itself?

Not really, no.

> Or are there other expectations now than only 100-continue which these 
> clients rely on?

Haven't seen any.

> I can't really see the point of relying on 100 continue.

Funnily, those clients I encountered don't... as they also implement
the timeout. What they failed to do was retry the request without the
expectation when getting a 417 Expectation Failed response...
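The correct client behaviour can be sketched as a small decision function (a sketch with illustrative names, not any real client's API):

```python
def next_action(status, sent_expect, timed_out=False):
    """What an Expect: 100-continue client should do next.
    status: the status code received so far, or None if timed_out."""
    if timed_out:
        # Gave up waiting for 100 Continue: send the body anyway
        # (the timeout these clients did implement).
        return "send-body"
    if status == 100:
        return "send-body"
    if status == 417 and sent_expect:
        # Server/proxy rejected the expectation: retry the same
        # request without Expect -- the step the broken clients missed.
        return "retry-without-expect"
    # Any other final status: the server answered without the body.
    return "handle-response"
```

So a 417 is not a fatal error; it just means the request must be repeated without the Expect header.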

Relying on seeing 100 Continue is quite fine if you know both the next
hop and the origin are HTTP/1.1; you can then use a fairly large timeout
just to catch the occasional unexpected forwarding-path change which
happens to route the request via an HTTP/1.0 proxy where it did not go
before...

Waiting for 100 Continue has benefits:
  - Less bandwidth waste.
  - Errors actually reach the client instead of getting lost in a TCP
    Reset.
  - Less demand on servers as well, as they don't need to linger so
    long on "aborted" requests after sending an error or redirect
    status.

Regards
Henrik
Received on Sunday, 19 July 2009 00:04:12 GMT
