
Re: Past Proposals for HTTP Auth Logout

From: Bruno Harbulot <Bruno.Harbulot@manchester.ac.uk>
Date: Thu, 25 Feb 2010 20:20:23 +0000
Message-ID: <4B86DB87.8080105@manchester.ac.uk>
To: Tim <tim-projects@sentinelchicken.org>
CC: ietf-http-wg@w3.org
Hi Tim,

Tim wrote:
> Hi Bruno,
>> There was a similar discussion on rest-discuss a few days ago:
>> http://tech.groups.yahoo.com/group/rest-discuss/message/14856
> Thanks for the link, I'll take a read through.
>> Alan Dean was looking for something that would work with Digest
>> auth. I just don't see that being possible with the current
>> XMLHttpRequest specifications (since you can't specify Digest
>> preemptively).
> Well, actually in HTTP authentication, user agents are supposed to
> request the resource without credentials initially.  Once prompted
> with a 401, they then know what HTTP authentication mechanisms are
> supported by the server and can choose which to use.  There is
> actually no other way to implement Digest authentication without this
> initial 2-step process since the browser needs the Nonce and other
> information to get started.

Yes, you're right (and what I said was confusing).
The problem is that Basic authentication can be preemptive (I suppose a 
user agent could know it has to authenticate via Basic auth preemptively 
and RESTfully if that's somehow indicated by some hypermedia... debatable).
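The two-step flow Tim describes is unavoidable for Digest because the response is computed from the server-supplied nonce. As a rough illustration, here is that computation (values taken from the worked example in RFC 2617, section 3.5):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    # RFC 2617 computation for qop=auth, algorithm=MD5: the
    # server-supplied nonce is an input, so the client cannot compute
    # this before receiving the 401 challenge.
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Worked example from RFC 2617, section 3.5
print(digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                      "GET", "/dir/index.html",
                      "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                      "00000001", "0a4f113b"))
# 6629fae49393a05397450978507c4ef1
```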

Using XMLHttpRequest with a username/password seems to do preemptive 
Basic authentication implicitly. What I was discussing there was Digest 
authentication via XMLHttpRequest, which I don't think is possible 
because of this.
Indeed, XMLHttpRequest should probably try first to see which challenge 
scheme it gets, but it seems it doesn't.
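By contrast, preemptive Basic needs no server input at all: the Authorization header can be built from the credentials alone, which is presumably what XMLHttpRequest does with its username/password arguments. A minimal illustration (the test vector is the one from RFC 2617, section 2):

```python
import base64

def basic_authorization(username: str, password: str) -> str:
    # Preemptive Basic: no server challenge is needed, the header value
    # is derived from the credentials alone.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Test vector from RFC 2617, section 2
print(basic_authorization("Aladdin", "open sesame"))
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```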

> In the application I put together (using tidbits of information from
> many of the same sources mentioned in that thread and elsewhere),
> browsers do indeed request the resource first without credentials,
> receive a 401, then turn around and try the credentials given in the
> open() method.

I've just run your test with HttpFox (after closing and reopening 
Firefox), and I don't see any 401. It does the authentication preemptively.
(Going onto the private resource directly also triggers a browser popup 
as expected.)

> As far as specifying which HTTP auth scheme to use in JavaScript, no
> there isn't a way to do this.  Ultimately, the JavaScript in this
> approach must be supplied by the same server doing the authentication
> (due to same-origin restrictions on XMLHttpRequest), so it's really a
> moot point.  That same server advertises the available methods in the
> next request.

> I definitely see the desire to restrict the authentication method due
> to man-in-the-middle attacks, but specifying the auth scheme in
> JavaScript buys you nothing here.  Someone can just MitM the
> JavaScript.  As mentioned in my paper, the only way around this kind
> of downgrade attack is for browsers to know a priori which scheme is
> supported.  Since this isn't possible with current protocols, I
> suggested browsers caching which sites support stronger auth schemes
> when they are first accessed and then requiring those auth schemes be
> used in the future.  See the paper for details.

Yes, indeed. (I also need to read your paper in more detail.)
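The caching idea Tim outlines could be sketched roughly as follows. This is a hypothetical illustration only: the class, its method names, and the scheme-strength ranking are all invented here, not taken from the paper.

```python
# Remember the strongest auth scheme a host has offered, and refuse
# anything weaker on later visits (downgrade-attack mitigation).
# The ranking below is purely illustrative.
STRENGTH = {"basic": 0, "digest": 1, "mutual": 2}

class SchemeCache:
    def __init__(self):
        self._best = {}  # host -> strongest strength seen so far

    def record(self, host: str, scheme: str) -> None:
        s = STRENGTH[scheme.lower()]
        if s > self._best.get(host, -1):
            self._best[host] = s

    def allows(self, host: str, scheme: str) -> bool:
        # A downgrade attempt offers a weaker scheme than previously seen
        return STRENGTH[scheme.lower()] >= self._best.get(host, -1)

cache = SchemeCache()
cache.record("example.com", "digest")
print(cache.allows("example.com", "basic"))   # False
print(cache.allows("example.com", "digest"))  # True
```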

>>> I generally wish there was a "WWW-Authenticate: Form" mechanism (or
>>> some form of security token, or cookie like this IETF draft), but for
>>> this to be effective, it would need to be implemented in major
>>> browsers.
> I don't think this is a good idea, personally.  It seems to be
> directed at making badly designed authentication standardized.

There is a case to be made for more general tokens, whether we call them 
cookies or not. It's not just about forms.

There are a number of other forms of authentication that would benefit 
from this. I'm thinking in particular of the SAML variety.
Even with its HTTP redirect binding, you more or less need some sort of 
cookie to establish an authenticated session, so as not to be redirected
all the time. The HTTP Post binding is probably even worse: you post an 
entire SAML Response to "log on".

Browsers could potentially support some "WWW-Authenticate: SAML" scheme, 
but I don't see that happening any time soon. I'd guess when this 
happens, SAML will be out of fashion and replaced with something new.

This is not about standardising bad practices. It's just that there are 
cases that Basic and Digest can't handle. I'd go as far as saying that 
for an authentication mechanism to be secure, you need some form of 
preestablished negotiation and session.
The important point here, for the HTTP mechanism to work, is that this 
negotiation ought to be integrated with the 401 status code and 
associated headers, like Digest does.

While 'WWW-Authenticate: Cookie' may be about standardising the (bad) 
cookie practice, having something well defined for a shared secret would 
be a good idea, I think.

>>> I've also tried to suggest a "WWW-Authenticate: Transport" (or
>>> some other name), mainly for TLS client certificate
>>> authentication, but it didn't go very far (I'd need to improve the
>>> idea).
> You should look at the Mutual authentication scheme proposed by Yutaka
> OIWA (mentioned in this aging thread).

Thanks for the pointer.

>> It seems there's a discussion in the HTML5 WG about accessing
>> cookies from HTML, but I haven't followed it. I'm not sure how good
>> an idea this is. Such a mechanism could enable AJAX forms to set the
>> authentication cookie/token perhaps.
>> I'd prefer a solution that has a clearly separated authentication
>> scheme (rather than using 'Cookies' at all, have a separate
>> authentication token store in the browser, capable of login/logout),
>> but the 'WWW-Authenticate: Cookie' scheme seems it could be a
>> reasonable compromise.
> I don't think any of this complexity is necessary.  With one very
> simple, backward compatible change to HTTP, the ability to do a log
> out, one already has all of the pieces to do very powerful and secure
> forms-based session management.  That is, if the XMLHttpRequest
> proposed standard is adopted as-is with respect to 401 handling.  This
> then opens up the possibility of using dozens of HTTP
> authentication schemes which are much better than digest auth.

(Is there anything in XMLHttpRequest about trying first without 
preemptive Basic when the username and password are specified? I can't 
find it.)

>> I'm not sure being able to log out from Basic/Digest auth is an HTTP
>> issue; it sounds more like an issue of browser interface and/or
>> interaction between the webpage and the browser's handling of
>> authentication: HTML 5 might be a good place to discuss this.
> Take a look at the reasoning presented in the rest of this thread.  
> In summary: 
> HTTP authentication allows one to log in, but why doesn't it allow one to
> log out?  Application developers need to be able to drive the log out
> process, they can't rely on browser user interfaces.  (There's nothing
> wrong with browser-driven log outs, but we need application-driven
> ones as well).

Fair enough, but that's about having the application drive the HTTP 
authentication mechanisms altogether (Basic/Digest/Negotiate/...), which 
are currently reserved for the browsers, logout being only one part of it.

> One could place log outs into HTML or JavaScript standards, but why
> the asymmetry?  What about user agents that don't support JavaScript?
> What about automated user agents that don't deal with HTML content at
> all?  HTTP-driven log out makes sense in more situations than the
> other approaches.

I'm not saying there shouldn't be an HTTP logout from the browser, I'm 
just saying it's not the domain of the HTTP spec. As you say, it's 
application-driven, and that application is HTML-based. It's a matter of 
interaction between the hypermedia and the user-agent, which only then 
interacts via HTTP.

>> I do think, however, that there's room for new "WWW-Authenticate"
>> schemes: something for 'Cookies' (or generic auth token) and
>> something for 'Transport' (to indicate that the authentication is
>> done out of the HTTP scope, e.g. via the underlying SSL/TLS stack).
> But like I said, this is just standardizing bad practices.  See the
> paper.  By inventing new WWW-Authenticate methods which just continue
> to use cookies, you throw out several good HTTP auth schemes and
> have to reinvent the wheel based on limited primitives (cookies).

My concern behind 'WWW-Authenticate: Transport' was more related to 
SSL/TLS client-certificate authentication. (You don't seem to mention it 
much in your paper.)

The problem is that, currently, there's no way for an HTTP application 
(by this I mean the service that's running on top of the TLS layer) to 
say that it would like a client certificate, or that an unsuitable 
certificate was presented. It's all done at the TLS layer.
To request a client-certificate, you either have to configure the socket 
in advance or renegotiate (although it's effectively done transparently 
from the HTTP layer, hence the TLS renegotiation problems).
If no certificate is presented, there is no way to present another 
challenge (or even just an explanatory message) associated with a 401 
status (the server may even be configured to close the connection).
If the certificate isn't suitable, the TLS handshake will just close the 
connection abruptly. If it's accepted at the TLS level, but not what the 
HTTP application was after, there's no way to tell the browser it should 
try to present another one.
The above talks about certificates, but the same could apply to 
Kerberos cipher suites (I'm not talking about SPNEGO here).
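To illustrate how socket-level this decision is, here is a sketch of a server-side TLS context in Python's ssl module (an illustration only; a real deployment would also load a certificate chain and a CA list):

```python
import ssl

# Whether a client certificate is requested is decided when the TLS
# context is configured, before any HTTP request or 401 can be seen.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# CERT_OPTIONAL asks for a certificate during the handshake;
# CERT_REQUIRED aborts the handshake if none is presented. The HTTP
# application cannot vary this per-resource without a renegotiation,
# and a failed handshake never reaches the HTTP layer, so no 401 with
# an explanatory message can ever be sent.
ctx.verify_mode = ssl.CERT_OPTIONAL

print(ctx.verify_mode == ssl.CERT_OPTIONAL)  # True
```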

There is simply a kind of authentication that's not in the domain of 
HTTP but underlies it, hence this 'WWW-Authenticate: Transport' 
suggestion (the name could be different...).

Best wishes,


Received on Thursday, 25 February 2010 20:20:53 GMT
