RE: issue of initiating client auth for parallel SSL sessionids

Hi Peter,

 

I respectfully have to disagree with you that the server controls the
handshake process. This is because TLS is a client-initiated protocol, which
always begins with the ClientHello. The server's ability to negotiate a
security association is dependent on the parameters first specified by the
client, which include the SSL session ID. If a client only ever provides a
single session ID (and most browsers simply implement their SSL session
cache by storing a single session ID in a map keyed off host/port being
accessed), then the server will only ever be able to resume that single SSL
session ID. The server can certainly choose to reject the session ID,
causing a new, full SSL handshake to take place and a new session being
established, but beyond that it cannot influence the SSL session ID the
client may propose. Few, if any, user agents/browsers are capable of
maintaining parallel secure sessions, with one session authenticated via a
smart card credential and the other identified via a WebID, to the same
server. Because of this, if you wish to both authenticate a user (via their
smart card) and somehow identify them (via their WebID), then you absolutely
need to be speaking about SSL renegotiation, or you have to be talking about
stateful, application-specific knowledge (such as HTTP cookies).
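
To make the point concrete, here is a minimal Python sketch of the browser-side behaviour described above. The class and method names are illustrative inventions, not taken from any real browser's source; the point is only that one cached entry per host/port means one session ID in the ClientHello:

```python
# Sketch of a browser-style SSL session cache: one entry per (host, port).
# Illustrative only -- real browsers differ in detail, but most keep a
# single resumable session per destination, as described above.

class SessionCache:
    def __init__(self):
        self._cache = {}  # (host, port) -> session_id

    def store(self, host, port, session_id):
        # A new full handshake overwrites whatever was cached before,
        # so only one session ID can ever be proposed per host/port.
        self._cache[(host, port)] = session_id

    def propose(self, host, port):
        # This value goes into the ClientHello. The server may resume
        # it or reject it (forcing a full handshake), but it cannot
        # substitute a different session ID of its own choosing.
        return self._cache.get((host, port))
```
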

 

However, rather than diverging into a discussion of the multiplexing of
HTTP, SSL sessions, cookies, and concerns such as multithreading, I'd rather
focus on your statement that "We have to ensure the WebID protocol
works with mixed browsing". To be clear, my understanding of the "mixed
browsing" that you're talking about is specifically requesting resources
over both HTTP and HTTPS. If I'm mistaken, please clarify what is meant by
the term there. As it stands, you cannot mix both HTTP and HTTPS requests
while also being able to relate a particular (HTTP) request to a given
identity. If you wish to securely authenticate an identity, you must be
performing it over HTTPS, exactly because of all the "HTML, HTTP, and web
threats" seemingly dismissed.

 

In your original message, you stated "As one moves through the site, the SSL
session id (due to webid protocol) can still guard access using an
authorization logic". However, if those requests aren't happening over
HTTPS, then they are not guarded/authenticated/authorized in any way. If
they are happening over HTTPS, then there is no need for a special file or
header - the TLS session itself provides all of the identity and security
assurances necessary. If you wish to map a single user agent to multiple
identities (smart card, WebID), then the server must be prepared to perform
SSL renegotiation for any request it receives over a given communication
channel.

 

Another piece that concerns me is how it relates to Issue 18 in the tracker.
If the constraint of HTTP+TLS as the transport between the Identification
Agent and the Verification Agent is removed, so as to allow arbitrary
application protocols, then it would seem that HTTP-specific semantics don't
fit in the specification. Any dependency on application-layer specific
behaviour seems like it speaks to a weakness in the protocol itself, and
should be solved in an HTTP-agnostic way.

 

I feel that I have a very good grasp of TLS, smart cards, HTTP,
authentication, and security, and must admit that I'm quite confused as to
what exactly you're proposing and trying to accomplish, which is why I was
hoping for more explanation, perhaps with a concrete workflow. As it is,
you've proposed a means of maintaining connections as multiple simultaneous
client identities (something most browsers do not support), a magic file
(File X, aka .crt) which is meant to convey some piece of information that
isn't immediately clear, a special MIME type sent in the Accept headers also
meant to convey some contextual piece of information, and a means of
authentication over HTTPS for HTTP resources, a path that seems to run
counter to the direction taken by seemingly every new security protocol.

 

I'm not trying to dismiss or misrepresent the proposal; I'm just trying to
understand, in very concrete terms, what the expectations are for an
Identification Agent that wishes to speak HTTP as the protocol, and what
security guarantees are afforded.

 

Thanks

 

From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of peter williams
Sent: Saturday, February 26, 2011 10:42 PM
To: 'Ryan Sleevi'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

Good.

 

We hopefully all know that browsers today show a "mixed security" warning
when an HTML page "container" retrieved over https has images (and scripts,
and CSS pointers) on URIs that are sometimes https, sometimes http. Sometimes
those linked images (and scripts etc.) are on the same server stem as the
HTML page container, sometimes not. If a javascript callback opens up an
https URI, this doesn't even get a UI warning, being outside the DOM
security model.

 

https has to address this. Webid protocol (as a revision/profile of https)
has to address it.

 

I've generally found that few folks understand what happens in the https
protocol - as it "multiplexes" multiple channels as a necessary consequence
of working with hypermedia docs (and linked data, in general).

 

Folks need to recall that a browser can maintain multiple parallel security
channels with a website. It's not only that there may be multiple SSL
connections outstanding (all keyed off a single SSL session/handshake).
There may be 2 or n "groups" of connections to the one site, where each
group of connections (all images, all scripts, say) keys off one
particular session handshake. Perhaps two sessions and eight connections,
four connections per session.

 

Each handshake has a distinct SSL sessionid - and different client certs may
be in the state of that sessionid. 

 

Who controls this handshake process - that defines an SSL session, and gets
a sslsessionid?

 

The server. The server can decide to maintain 2 parallel SSL sessions (each
with 4 connections, say), and decides on each session handshake which CAs to
request, and whether client authn is required (or optional). If the image
URIs need one assurance, it may say: only VeriSign certs are good (where the
CA cert forces use of a browser smartcard or eID card, say). If the script
URI needs another (because of its object security policy), it may send "no
CAs" when requesting client authn, allowing self-signed certs to be
selected by the user in the browser selector (and use software crypto, vs
the smartcard, say).

 

This is the topic I want to get on the table, somehow. It's distinct from
multiple handshakes on a given connection (secure resumption, or multiple
handshakes on the same TCP/IP channel that revise the session
requirements/keying).

 

The related topic is the metaphor of "click login" to move into https
mode for user authn (during which client authn might deliver a client cert
whose name maps onto a server-side account, and thus sets a CGI security
context). We need MORE than that "modal" use of client authn. We
have to ensure webid protocol works with "mixed browsing", not only the
modal login sequence.

 

On HTML, HTTP and web threats, I'll say nothing. First, let's focus on
secure communications and channel theory, since it's been properly
understood for years. Web threats due to linking and open hyperlinking (and
exploits of interpreted javascript) are a different topic. That is addressed
with signed javascript (coming soon, I feel, similar to signed ActiveX or
signed Java applets).

 

From: Ryan Sleevi [mailto:ryan@sleevi.com] On Behalf Of Ryan Sleevi
Sent: Saturday, February 26, 2011 6:49 PM
To: 'peter williams'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

Hi Peter,

 

It may help me to understand what you're proposing if you could describe the
request flow using the HTTP semantics. I'm having a bit of trouble
understanding your proposal, and that's making it hard to evaluate the
security implications. Something like the simple sequence diagram at [1]
would help greatly.

 

My concern is that you're proposing that a user agent perform the WebID auth
sequence over HTTPS/SSL, but then continue the browsing session through
unsecured HTTP. This seems to defeat any guarantee of secure user
authentication, which is why I'm wanting to make sure I've understood
correctly.

 

Two example attacks that would make such a proposal untenable are the
injection of malicious scripts [2] or session hijacking [3]. The requests
received over HTTP cannot be assured of the WebID accessing them, since the
connection may be MITMed, and likewise, requests received over HTTPS may
have been initiated by malicious script running downloaded via HTTP.

 

Further, the idea of maintaining two independent SSL session IDs for a
single domain is not something most user agents presently support (Firefox
and Chrome come to mind). So while WebID leveraging SSL client auth with a
single identity is something that almost every modern browser supports, and
they will cache the (relatively expensive, computationally and network-wise)
TLS client auth stage, maintaining parallel sessions to the same domain,
with distinct identities (smart card/eID and WebID), will most likely
require browser vendors to change their networking implementations in order
to support WebID. This is in addition to the WebID-specific provisions such
as .crt handling/specialized Accept headers that seem to be proposed here. I
would think that such requirements would prevent any widespread adoption of
WebID, because it will require browser vendors to adopt it in order to be
widely deployed, but browser vendors typically aren't likely to adopt
WebID-specific modifications unless/until it is widely deployed.

 

In order for WebID (or really any Web-based authentication mechanism,
for that matter) to be used securely, the requests, including the initial
one [4] [5], need to happen over a secure connection (such as SSL). Once
that connection is established, then the requests need to continue to happen
over that security association if you're going to assume that identity
remains correct. That is, you can only assume the WebID user is "logged
in"/"authenticated" if/while every request originates over the HTTPS session
that the WebID was provided over.
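
As a rough illustration of that binding (hypothetical names throughout; a real server would key this off its own TLS stack's session state rather than a plain dict):

```python
# Sketch: a request carries a WebID only if it arrived over the same
# HTTPS session in which that WebID certificate was presented.
# "Request" and "authenticated_webid" are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    scheme: str                              # "http" or "https"
    tls_session_id: Optional[bytes] = None   # from the TLS layer, if any

def authenticated_webid(request, sessions):
    """sessions maps a TLS session id to the WebID verified over it."""
    if request.scheme != "https" or request.tls_session_id is None:
        return None  # plain-HTTP request: no identity assurance at all
    return sessions.get(request.tls_session_id)
```
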

 

If you're concerned about the desire to provide authn/authz via multiple
certificates, then it should be possible with TLS secure renegotiation [6].
Because each subsequent renegotiation is secured/protected by the previous
security establishment, a server could request multiple forms of
authentication by sending a HelloRequest, and in the new handshake,
requesting a different set of CAs in the CertificateRequest. Under such a
scenario, a user can prove their possession of a WebID private key in one
handshake and then, using that channel, prove their possession of a smart
card-based private key in a subsequent renegotiation handshake. While such a
scenario works at the TLS level and will still likely require modifications
to user agents to fully support (it requires careful thought about the user
experience), it has the benefit of accomplishing the same goal without being
WebID-specific.
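
For what it's worth, the per-handshake CA selection can be sketched with Python's `ssl` module (file names invented; note that Python's `ssl` does not expose renegotiation itself, and whether the loaded CAs are also advertised in the CertificateRequest varies by TLS stack):

```python
import ssl

def make_server_context(ca_file=None, require_client_cert=False):
    """Build a TLS server context. The CAs loaded here are what the
    server will accept when verifying the client's certificate; some
    stacks also advertise them in the CertificateRequest, which is
    what steers the browser's certificate picker toward (say) a
    smart-card cert rather than a self-signed WebID cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # ctx.load_cert_chain("server.pem")     # server's own cert/key (omitted)
    if ca_file:
        ctx.load_verify_locations(ca_file)  # e.g. the eID card's issuing CA
    ctx.verify_mode = (ssl.CERT_REQUIRED if require_client_cert
                       else ssl.CERT_OPTIONAL)
    return ctx
```

A server wanting both assurances would use two such contexts (or two renegotiation passes): one requiring the smart-card CA, one accepting any cert for WebID verification.
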

 

Thanks,

Ryan

 

[1] http://www.w3.org/wiki/Foaf%2Bssl

[2]
https://www.blackhat.com/presentations/bh-usa-09/SOTIROV/BHUSA09-Sotirov-Att
ackExtSSL-PAPER.pdf

[3] http://en.wikipedia.org/wiki/Firesheep

[4] http://www.thoughtcrime.org/software/sslstrip/

[5] http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

[6] http://tools.ietf.org/html/rfc5746

 

From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of peter williams
Sent: Saturday, February 26, 2011 8:21 PM
To: public-xg-webid@w3.org
Subject: issue of initiating client auth for parallel SSL sessionids

 

Because of the history of FOAF+SSL, we tend to see demos in which folks go
to a site over http, and then use a login button - guarding a protected
region of the site (or protected modes).

 

I think we need something more general.

 

As one browses page index.html, should there be a file X referenced (call it
.crt), let the browser connect to its server using https (for that file GET
only). Presumably, if the browser knows the MIME type of .crt, it populates
the Accept header with something suitable.

 

What I want is that the validation agent only kicks off when it receives a
particular Accept header (induced by a containing page reference that
forced population of that Accept header on the resource retrieval attempt).

 

Webid protocol would then run (and setup an SSL sessionid), but https would
not be protecting the connections to other page elements. As one moves
through a site, the SSL sessionid (due to webid protocol) can still guard
access using an authorization logic.

 

What this allows is both classical client authn (using smartcards, in DoD
land) and webid client authn. Now it is easy for the site to maintain 2
distinct SSL sessions, one with CAs controlling the selection of certs
(which hits the smartcard/eID) and one which leverages webid.

 

Those SSL connections on the same site supervised by the smartcard/eID SSL
sessionid obviously leverage the smartcard/eID's crypto, doing SSL
connections that offer channel encryption using the *assured* crypto of the
card (and applying CA-based cert-chain authn merely to protect the channel's
encryption SA).

 

Those SSL connections on the same site supervised by the webid SSL sessionid
are distinct, influencing "login" authentication and "web sessions" -
driving an authorization engine (perhaps based on federated social network
conceptions).

 

Received on Sunday, 27 February 2011 20:29:00 UTC