Hi all

thanks for forwarding this - comments below.

Should I subscribe to ietf-http-wg?

Henrik Nordstrom wrote:
Sat 2007-02-03 at 23:10 -0800, S. Mike Dierken wrote:

  
Much of the document describes a single use-case dealing with authentication
and NTLM specifically. Are there other specific situations other than
authentication that can be described? Are there other styles of
authentication other than NTLM that would benefit from a solution?
    

The draft is about two quite independent problems.
  
1. POST/PUT and authentication challenges. Mostly a problem for the
non-compliant connection oriented NTLM/Negotiate/Kerberos authentication
schemes, but may also be noticeable for Digest to some extent even if
just marginally so. To solve this the draft proposes a new 1xx response
type to indicate "please DO NOT send the request body, I am not
interested in what it contains", asking the client to diverge even more
from the standard message format. This is mostly a band-aid to make the
non-compliant connection oriented authentication schemes survive a large
PUT/POST as initial request. 
  
Yes, a definite band-aid.  The problem has been the definition of how long a client should wait for a 100 Continue.

RFC 2616 talks about a method based on the RTT of the connection to the server.  However, when the server is a proxy
this RTT is usually extremely small, and the upstream RTT is shielded from the downstream agent.

The real issue with 100 is that it is a green light to send the content, but there are no hard and fast rules about how
long a client should wait before sending POST or PUT data in the absence of a 100 Continue.  This is the crux
of the problem, since the client can never rely on getting a 100 Continue.
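
To illustrate the guesswork involved, here is a rough sketch of what a client has to do today (Python, stdlib
only; the host, path and 3-second timeout are made up for the example):

    import socket, select

    body = b"x" * 100000
    request_head = (
        b"POST /upload HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Expect: 100-continue\r\n"
        b"\r\n"
    )

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(request_head)

    # Nothing normative says how long to wait here, so every client picks its
    # own arbitrary timeout - 3 seconds in this sketch.
    readable, _, _ = select.select([sock], [], [], 3.0)
    if readable:
        interim = sock.recv(4096)          # may be 100 Continue, 401/407, 413, ...
        if interim.startswith(b"HTTP/1.1 100"):
            sock.sendall(body)             # green light: send the body
        # else: a final response arrived; don't send the body at all
    else:
        sock.sendall(body)                 # no answer: send the body on a guess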

The other problem is the definition of how the "Expect" header is used.  The spec states:

a) a server must not send a 100 Continue if the client did not send an Expect header containing "100-continue".
Many web servers violate this and send one regardless, and in fact I haven't seen the latest versions of IE or Firefox
ever send an Expect header - possibly out of fear of what will happen if the server actually obeys the requirement
for unhandled expectations.

b) any server receiving an Expect header with an expectation it can't fulfil must bounce the request with a 417.

This makes the usability of the Expect header much more restricted.  Any client wishing to indicate desired
but optional functionality must use a different header than "Expect", since "Expect" designates required
functionality: the request must not be completed without it.  So it doesn't allow for the indication of
optional desired functionality.
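
To illustrate: a client that tries to use "Expect" for an optional preference ends up having to do a fallback
dance like this rough sketch (Python stdlib; the host, path and expectation token are invented):

    from http.client import HTTPConnection

    def post_with_expectation(host, path, body):
        conn = HTTPConnection(host)
        # Hypothetical optional preference expressed via Expect...
        conn.request("POST", path, body=body,
                     headers={"Expect": "some-optional-feature"})
        resp = conn.getresponse()
        if resp.status == 417:
            resp.read()
            conn.close()
            # The server refused the whole request purely because it didn't
            # understand the expectation, so retry without it on a fresh
            # connection - the "preference" bought us nothing.
            conn = HTTPConnection(host)
            conn.request("POST", path, body=body)
            resp = conn.getresponse()
        return resp.status, resp.read()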

The proposed 102 (which will have to be a different code because of WebDAV) is a red light.

A red light is fundamentally different to a green light.
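
To make the contrast concrete, client logic with a red-light code available could look roughly like this
(a sketch only; the 199 below is just a placeholder status, since the real code is yet to be assigned):

    import select

    RED_LIGHT = b"HTTP/1.1 199"     # placeholder for the proposed "do NOT send body" code

    def send_body_or_not(sock, head, body, timeout=3.0):
        sock.sendall(head)                       # headers only, Content-Length included
        readable, _, _ = select.select([sock], [], [], timeout)
        if readable:
            interim = sock.recv(4096)
            if interim.startswith(b"HTTP/1.1 100"):
                sock.sendall(body)               # green light: the body is wanted
                return "sent"
            if interim.startswith(RED_LIGHT):
                return "withheld"                # red light: explicitly told not to send;
                                                 # just wait for the final response (e.g. 401)
            return "final"                       # a final response arrived straight away
        sock.sendall(body)                       # today's fallback: guess after a timeout
        return "sent after timeout"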

Anyway, I did consider what alternatives there should be, and how the spec was intended to work in this respect.
But the spec really only considers auth to an origin server, not multiple auths along the chain to an origin server.

If you look at how IE and Firefox actually behave in this respect, you'll see the dilemma that their devs found with the
spec.

IE, for instance, sends a POST with Content-Length: 0 if it believes it is about to be challenged for authentication.
It often gets this wrong, however, and sends such requests over connections that already have established credentials.
We had to add explicit code to WinGate to catch this case, where we have to restart the auth process (the only
alternative is to hard-close the connection).

However, IE (6 and 7) only does this the first time, so it can authenticate only one link in the chain.  If there are 2 links,
which is common, you still end up sending the content 3 times to the upstream server for NTLM, or 2 times for
Basic.  The more links, the worse it gets.

Also, IE doesn't do this if it's talking to a proxy; it sends the full body from the start.

Firefox 1.x and 2 always send the full body each time.

The spec states that if a client receives some sort of error response whilst it is uploading content, it should
stop uploading and break the connection (since there's no way to resynchronise the connection).  This is fine if the auth
is Basic, but no good for session-negotiated auth such as NTLM or anything else that uses a challenge-response.

Whilst using chunking for uploads might allow a resynchronisation, I think you'll
find that it causes implementation problems.  I don't think many proxies even support chunked data from a client,
and no current clients that I've seen ever send chunked data, since they always know the content length.

It would be a bigger rewrite of the client to get it to send chunked data than to have it honour a new
red-light code.  So you'll find more resistance to implementing it, and more problems with faulty implementations, etc.

Also, a red-light code is an explicit solution rather than an implicit one.  The client is left in no doubt as to what
will happen and under what conditions it may send the body.  100 Continue does not provide this certainty.

So I still think a red-light code will be a simpler implementation for people, and the fallback is that we are in the same
situation we are in now.

There are also benefits to be had from having the Content-Length field in the request from the beginning, since then
a server or proxy can use that information in its decision (e.g. disallowing uploads above a certain size).  Chunking would
not allow for that sort of functionality.
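
As a rough example (Python stdlib http.server; the 10 MB limit and handler name are mine), a server or
proxy can refuse an oversized upload before a single byte of the body crosses the wire, but only because
Content-Length was announced up front:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    MAX_UPLOAD = 10 * 1024 * 1024          # arbitrary example limit

    class UploadHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"

        def handle_expect_100(self):
            # Called when the client sent "Expect: 100-continue"; refusing here
            # means the body is never transmitted at all.
            if int(self.headers.get("Content-Length", 0)) > MAX_UPLOAD:
                self.send_error(413, "Request Entity Too Large")
                return False                            # tell the base class not to continue
            return super().handle_expect_100()          # sends 100 Continue

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            if length > MAX_UPLOAD:
                # Client didn't use Expect; the best we can do is refuse
                # before reading the body.
                self.send_error(413, "Request Entity Too Large")
                return
            self.rfile.read(length)
            self.send_response(200)
            self.send_header("Content-Length", "2")
            self.end_headers()
            self.wfile.write(b"OK")

    # HTTPServer(("", 8080), UploadHandler).serve_forever()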

2. Servers taking a long time to finish processing a request before the
final response is known. To solve this the draft proposes a new
"Progress" header (both request and response) and reusing 100 Continue
to relay this to the user-agent. Very similar to the 102 Processing
response defined by RFC 2518 but with a new header to provide additional
information to the end user explaining what is going on.

  
Actually the proposal (albeit maybe not too clearly) states that such a header could be returned in any 1xx message.
I was unaware of the WebDAV proposal in this respect, so the 102 defined there would be an obvious choice.

  
What other approaches have been considered? 
    

For the second problem (servers taking a long time to process a request)
RFC 2518 already defines 102 Processing. But the new "Progress" header
proposed in this draft adds valuable information which the user agent
may display to inform the user why the request is taking so long to
complete.

The first problem has a couple of alternative solutions, none of which
is any better..
  * If an auth challenge is seen, close the connection and then retry
with a 0-length body until the credentials can be provided.
  
This doesn't cope with more than one auth challenge in a session.

Many transparent or intercepting proxies are configured to require auth, so in such cases the proxy
sends back a WWW-Authenticate header instead of a Proxy-Authenticate one (since the client believes
it is talking to a server).  So a client can actually receive multiple 401 WWW-Authenticate responses
from chained agents even though it thinks it is only talking to one server.

The browsers out there actually cope with this and prompt for next-hop authentication credentials.

  * As above, but instead use another temporary request to initiate the
authentication handshake, assuming one knows such a request in the same
protection space..
  
Like for a dummy resource?  It's tricky to know what sort of request would be usable to set up auth.
Also, the server may still close the connection once it thinks it has fulfilled the request, whether or not
the proxy maintains the connection to the client.  So I wouldn't pursue this path.

  * Probably other hacks are possible as well to get around the problem.

  
I couldn't think of any, and I thought about it for a long time.... that's why I wrote the spec in the end :)

It needs a bit of work, but I think the concepts are worthwhile.  My support desk certainly does.

  
Regarding a connection oriented authentication approach, if the client
submitted a request with no body (like a GET) would that be sufficient to
establish the credentials that could be used with a larger request later on?
    

It is. Problems arise if there is no open authenticated connection on which to
send the request, and the request has a large request body.

  
Regarding large message bodies, would using chunked transfer-encoding with a
small initial chunk be useful to quickly 'probe' for these chained
authentication routes?
    

Not sure I follow what you are thinking of here.

As for seeing that authentication is required before transmitting the
request body, 100 Continue is quite sufficient.
  
Only for the immediate connection, for the reasons above (namely that the client doesn't know how long to wait).

If the current spec were changed so that the client MUST wait for a 100 Continue, we would be fine, but
as it stands, waiting is optional on the part of both the server and the client.

Ah, now I follow. Yes, sending the request using chunked
transfer-encoding also allows getting around the PUT/POST authentication
problem very nicely. Just terminate the body immediately when seeing the
authentication challenge instead of 100 Continue.. this is how it should
be done. Nice catch.

So the draft should be reduced to just the new Progress header, suitable
to be used in the already defined "102 Processing" response, plus
implementation advice on how to handle authentication challenges without
having to repeatedly transmit the request body, by making use of chunked
transfer encoding once it's known the resource is an HTTP/1.1 resource
(easily detected from the 401 challenge).
  
This also creates serious problems for HTTP/1.1 proxies talking to upstream HTTP/1.0 servers.

In such cases the proxy would need to spool the entire content locally before it could upload it to
the next hop.  That creates issues where the client sends the request really quickly to the proxy,
but the proxy then takes ages to get it to the next hop before sending a response, so the user is left
hanging wondering what is going on.  The Progress header would help here, but we find that in general
it is best to manage flow control end to end rather than hop by hop; then the progress indications
of the browser are applicable: the user expects to see the content take a while to upload, and then
shortly afterwards to see some sort of response.  If we have to buffer everything in the middle, the
user won't see a response until some indeterminate period of time after the upload has, as far as
they can tell, completed.

So whilst chunked uploading may allow the existing spec to more or less cope with the problem, I don't
think it's an elegant solution, and it doesn't provide for the best user experience.
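
For what it's worth, here is roughly what the client side of the chunked early-termination trick looks like
(a sketch only; host, path and the chunk source are invented):

    import socket, select

    def send_chunked_post(host, path, chunks):   # chunks: an iterable of bytes objects
        sock = socket.create_connection((host, 80))
        head = (
            f"POST {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Transfer-Encoding: chunked\r\n"
            "\r\n"
        )
        sock.sendall(head.encode())
        challenged = False
        for chunk in chunks:
            # Before each chunk, check whether an early final response
            # (401, 407, 413, ...) has already arrived.
            readable, _, _ = select.select([sock], [], [], 0)
            if readable:
                challenged = True
                break
            sock.sendall(f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n")
        # Terminate the body either way with a zero-length chunk, so the
        # connection stays synchronised and can be reused for the retry.
        sock.sendall(b"0\r\n\r\n")
        return sock, challenged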

And users have a habit of complaining if their expectations aren't met.  Educating them is very
expensive.

Regards

Adrien de Croy
