RE: issue of initiating client auth for parallel SSL sessionids

By "the MITM proxies", am I correct in assuming you refer to WebID-ISSUE-28
and the related discussion of transparent proxies?

 

I may have missed it, but has there been any discussion about proxy
certificates? If market deployment concerns are put aside, and accepting
that transparent proxies already cause a world of hurt (look at the IETF
KeyAssure/DANE/CAA discussions), proxy certificates are/were one approach to
delegating credentials to an agent acting on a user's behalf, potentially
with restrictions, for use with PKI and, in this case, TLS. While they sound
special/complex, they're just X.509 certificates that are signed by the
private key of the end-user's certificate, but issued to a private key of
the proxy's choosing. The proxy then does not need access to the user's
private key, and relying parties (the PKIX term, equivalent to Validating
Agents in this context) can build a chain from the proxy to the user, and
from the user to the trust anchor (if needed).
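
For illustration only, a rough sketch of that construction using Python's
"cryptography" package might look like the following (the names and
lifetimes are placeholders, and the RFC 3820 ProxyCertInfo extension is
omitted for brevity):

    # Rough sketch of a proxy certificate: an X.509 cert issued *by* the
    # end-user's key *to* a key pair the proxy generated itself.  Names and
    # lifetimes are placeholders; the RFC 3820 ProxyCertInfo extension is
    # omitted.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    now = datetime.datetime.utcnow()

    # Stand-in for the end-user's existing key and certificate (normally
    # issued by their CA; self-signed here just to keep the sketch runnable).
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    user_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"alice")])
    user_cert = (
        x509.CertificateBuilder()
        .subject_name(user_name)
        .issuer_name(user_name)
        .public_key(user_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(user_key, hashes.SHA256()))

    # The proxy generates its own key pair; the user's private key never
    # leaves the user.
    proxy_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Conventionally the proxy's subject is the user's subject plus an
    # extra CN.
    proxy_name = x509.Name(
        list(user_cert.subject)
        + [x509.NameAttribute(NameOID.COMMON_NAME, u"proxy")])

    proxy_cert = (
        x509.CertificateBuilder()
        .subject_name(proxy_name)
        .issuer_name(user_cert.subject)      # issued by the user, not a CA
        .public_key(proxy_key.public_key())  # bound to the proxy's key
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=12))
        .sign(user_key, hashes.SHA256()))    # signed with the user's key

    # The proxy presents [proxy_cert, user_cert] as its chain; a relying
    # party chains proxy -> user -> trust anchor.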

 

However, that's defining a solution for a problem that is more fundamental.
SSL MITMs are evil in many ways, and the growing view of browser vendors
and protocol implementors is that they are "breaking" the protocol, and
rightly so. TLS client auth is a prime example of where the proxies do not
help, so I would agree that MITM proxies are going to be a concern for a TLS
client-auth based protocol. That said, if you want to further build on the
security assurances, beyond just client auth, then RFC 5929 is where to
start, as it defines how to extract connection-unique and endpoint-unique
bindings. As documented there, the connection-unique channel bindings
prohibit having a proxy/intermediary, so some form of zero-knowledge proof
built on those would explicitly prohibit MITM proxies.
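
To make that concrete: Python's ssl module, for instance, can hand you the
connection-unique "tls-unique" value from RFC 5929, and a proof bound to
that value cannot simply be relayed by a terminating proxy. A minimal
client-side sketch (the hostname is a placeholder):

    # Minimal sketch: extract the RFC 5929 "tls-unique" channel binding on
    # the client side.  A TLS-terminating proxy shares a *different*
    # tls-unique value with each endpoint, so a proof mixed with this value
    # cannot simply be relayed.  ("example.org" is a placeholder.)
    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            # May be None if the handshake has not completed.
            tls_unique = tls.get_channel_binding("tls-unique")
            print("tls-unique:", tls_unique.hex() if tls_unique else None)
            # A client-auth protocol would now sign/mix tls_unique into its
            # proof; the server recomputes its own view and checks they match.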

 

Regarding the rest of your examples, I don't see anything that is
particularly concerning to WebID, beyond the above. The only concern is if
a single "server" (host, port) needs to protect different URIs with
different credentials (that is, some with a WebID, some with a smart card),
yet also be able to relate those credential selections (that is, the
smartcard user is-also the webid user). That requires more care in
implementation, but is not something explicitly prohibited by the WebID
protocol as it stands, I don't believe.

 

Ryan

 

 

From: peter williams [mailto:home_pw@msn.com] 
Sent: Sunday, February 27, 2011 4:34 PM
To: 'Ryan Sleevi'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

Now, I've argued for a week or more that we have major issues in webid
protocol land with SSL MITM proxies - outgoing ones at browsers, and reverse
proxying agents fronting other agents and ultimately resource servers. I'm
really not sure anyone agreed with me that the topic is even a legit issue.
(The counter-arguments I personally found rather specious from a security
perspective, but quite proper from a webby perspective that focuses on "just
get going; improve later".)

 

You seem to recognize that even if we had perfect communication channels,
the nature of the document-centric web is that it's an inherently
threatening environment (with malicious code all over). In some cases, the
threats can be controlled by compartmentation (e.g. the likes of live.com's
enforced reputation services, which the other day simply censored and
prevented me from downloading some [political] file that Microsoft IE9
deemed "undesirable" - I had to use Opera to get it!); or by best practices
on web site design - a self-discipline used by site designers. Having https
act as a singular tunnel to only 1 site that fronts others (with https
mashups disallowed at the browser) is one example of such a self-discipline.

 

Now, I argue that we should have an issue that looks at SSL MITM issues, and
that we might want to consider "channel binding"
<http://yorkporc.wordpress.com/2010/03/20/digest-cnonce-nonce-count-and-the-channel-binding-directive-value/>
countermeasures, such as leveraging SSL+webauth-header (digest) cooperation,
which allows one to at least detect MITMing SSL proxies in modern https
"networks".
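
(Just to make the detection idea concrete - a toy sketch only, and the
plumbing below is a placeholder, not the actual digest "cb" directive
syntax - the server simply compares the client's reported view of the
RFC 5929 tls-unique value against its own:)

    # Toy sketch of the detection: the client mixes its view of the RFC 5929
    # "tls-unique" value into its (digest) auth exchange, and the server
    # compares that against its own view of the same connection.  A
    # TLS-terminating MITM proxy makes the two views differ, so the check
    # fails.  How the value travels (the digest "cb" directive, a header,
    # etc.) is out of scope here.
    import hmac

    def channel_binding_matches(server_tls_socket, client_reported_hex):
        server_view = server_tls_socket.get_channel_binding("tls-unique") or b""
        return hmac.compare_digest(server_view.hex(), client_reported_hex)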

 

I'd guess you would argue that we should also have an issue that
essentially mandates UI and site-design guidelines (much as does the openid
world!) that only certain website design practices are approved - so as to
wrap a security safety blanket around the "approved" world.

 

Now, I don't have much opinion on the latter as a protocol designer (it's
out of scope). As a system designer, it is certainly in scope. But I'm new
to W3C land, and get the impression that system deployment constraints (even
well-motivated ones) are not something W3C wants to be in the business of
endorsing. Rather, it wants an "as-few-constraints-as-possible" world (which
maximizes takeoff, recognizing that it may well be CREATING the likes of the
phishing problem thereby); and then expects the market to decide how to
profile the resulting wave so it's sustainable. That profiling may well
impose considerable conservatism on "deployments", particularly as
corporations come on board as buyers, and apply intranet/extranet overlays
on the web that attempt to meet corporate integrity requirements.

 

I hope you recognize that, given my examples, what I'm trying to do is:

 

-          Introduce the notion that BY DEFAULT there is a world of "mixed
browsing" - meaning some links of a given graph (e.g. an HTML doc) are http,
some are https, and there may be multiple https endpoints involved in one
graph. This world is bad when handling HTML graphs (human users), and even
worse for data graphs (machine users).

 

-          In a multiple endpoint world, the browser may be faced with
competing demands for SSL client authn - since the https endpoints are - in
the general case - not coordinated. In one graph, multiple parties may
demand client certs and do cert pingbacks against foaf cards. As designers,
one cannot assume correct and/or coordinated implementation by arbitrary
sites, surely.

 

-          Wanting to perhaps allow high-assurance crypto (on smartcards and
eID cards) to co-exist with low-assurance crypto delivered by the browser
cryptomodule on a PC - recognizing that, generally, a browser is merely a
gateway to multiple crypto modules, selected by the SSL server's choice of
CA message in a given handshake run. Perhaps the ideal webid protocol
interaction with SSL might be, as you seem to say, one in which it's a
"custom record-layer protocol," delivered on the record layer controlled by
an outer SSL tunnel (whose confidentiality SA might be orchestrated by the
eID card).

 

There are lots of topics that would flow from those issues (would one want
to give the e-ID world a blank check to decide whether the webid protocol
can even flow?). Should UI mashups of arbitrary https endpoints be banned?
Could the "mashups" of RDF graphs (vs HTML documents) with arbitrary and
multiple https site endpoints even be banned (without interfering with the
whole concept of the "link-anywhere" semweb)?

 

 

 

 

From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of Ryan Sleevi
Sent: Saturday, February 26, 2011 6:49 PM
To: 'peter williams'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

Hi Peter,

 

It may help me to understand what you're proposing if you could describe the
request flow using the HTTP semantics. I'm having a bit of trouble
understanding your proposal, and that's making it hard to evaluate the
security implications. Something like the simple sequence diagram at [1]
would help greatly.

 

My concern is that you're proposing that a user agent perform the WebID auth
sequence over HTTPS/SSL, but then continue the browsing session through
unsecured HTTP. This seems to defeat any guarantee of secure user
authentication, which is why I'm wanting to make sure I've understood
correctly.

 

Two example attacks that would make such a proposal untenable are the
injection of malicious scripts [2] or session hijacking [3]. The requests
received over HTTP cannot be assured of the WebID accessing them, since the
connection may be MITMed, and likewise, requests received over HTTPS may
have been initiated by malicious script downloaded via HTTP.

 

Further, the idea of maintaining two independent SSL session IDs for a
single domain is not something most user agents presently support (Firefox
and Chrome come to mind). So while leveraging SSL client auth with a single
identity is something that nearly every modern browser supports (and they
will cache the - computationally and network-wise - relatively expensive TLS
client auth stage), maintaining parallel sessions to the same domain with
distinct identities (smart card/eID and WebID) will most likely require
browser vendors to change their networking implementations in order to
support WebID. This is in addition to the WebID-specific provisions such as
.crt handling/specialized Accept headers that seem to be proposed here. I
would think that such requirements would prevent any widespread adoption of
WebID, because it will require browser vendors to adopt it in order to be
widely deployed, but browser vendors typically aren't likely to adopt
WebID-specific modifications unless/until it is widely deployed.

 

In order for WebID (or really any Web-based authentication mechanism,
for that matter) to be used securely, the requests, including the initial
one [4] [5], need to happen over a secure connection (such as SSL). Once
that connection is established, then the requests need to continue to happen
over that security association if you're going to assume that identity
remains correct. That is, you can only assume the WebID user is "logged
in"/"authenticated" if/while every request originates over the HTTPS session
that the WebID was provided over.

 

If you're concerned about the desire to provide authn/authz via multiple
certificates, then it should be possible with TLS secure renegotiation [6].
Because each subsequent renegotiation is secured/protected by the previous
security establishment, a server could request multiple forms of
authentication by sending a HelloRequest, and in the new handshake,
requesting a different set of CAs in the CertificateRequest. Under such a
scenario, a user can prove their possession of a WebID private key in one
handshake and then, using that channel, prove their possession of a smart
card-based private key in a subsequent renegotiation handshake. While such a
scenario works at the TLS level and will still likely require modifications
to user agents to fully support (as it requires careful thought about the
user experience), it has the benefit of accomplishing the same goal without
being WebID-specific.
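
For what it's worth, a very rough server-side sketch of that flow using
pyOpenSSL might look like the following. It assumes the context's client CA
list is what gets advertised in the renegotiation's CertificateRequest, and
the file names, CA lists and verify callback are placeholders; a real
deployment needs considerably more care (verification, error handling, UI):

    # Very rough server-side sketch of "two identities over one channel"
    # via RFC 5746 secure renegotiation, using pyOpenSSL.  Assumption: the
    # context's client CA list is what is advertised in the renegotiation's
    # CertificateRequest.  File names, CA lists and the permissive verify
    # callback are placeholders only.
    from OpenSSL import SSL, crypto

    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.use_privatekey_file("server.key")
    ctx.use_certificate_file("server.crt")
    ctx.set_verify(SSL.VERIFY_PEER,
                   lambda conn, cert, errno, depth, ok: True)  # placeholder

    def webid_ca_names():
        # For WebID/FOAF+SSL the advertised CA list can even be empty,
        # since the client certificate is typically self-signed.
        return []

    def smartcard_ca_names():
        with open("smartcard_ca.pem", "rb") as f:
            ca = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
        return [ca.get_subject()]

    def authenticate_twice(conn):
        # Handshake 1: WebID-style client auth.
        ctx.set_client_ca_list(webid_ca_names())
        conn.do_handshake()
        webid_cert = conn.get_peer_certificate()

        # Handshake 2: renegotiate, now advertising the smartcard/eID CAs.
        # The new handshake runs under the protection of the first one
        # (RFC 5746), so the two proofs are bound to the same channel.
        ctx.set_client_ca_list(smartcard_ca_names())
        conn.renegotiate()   # queue a HelloRequest for the client
        conn.do_handshake()  # push it out; the handshake completes as the
                             # connection is subsequently serviced
        smartcard_cert = conn.get_peer_certificate()

        return webid_cert, smartcard_cert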

 

Thanks,

Ryan

 

[1] http://www.w3.org/wiki/Foaf%2Bssl

[2]
https://www.blackhat.com/presentations/bh-usa-09/SOTIROV/BHUSA09-Sotirov-AttackExtSSL-PAPER.pdf

[3] http://en.wikipedia.org/wiki/Firesheep

[4] http://www.thoughtcrime.org/software/sslstrip/

[5] http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

[6] http://tools.ietf.org/html/rfc5746

 

From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of peter williams
Sent: Saturday, February 26, 2011 8:21 PM
To: public-xg-webid@w3.org
Subject: issue of initiating client auth for parallel SSL sessionids

 

Because of the history of FOAF+SSL, we tend to see demos in which folks go
to a site over http, and then use a login button - guarding a protected
region of the site (or protected modes).

 

I think we need something more general.

 

As one browses page index.html, should there be a file X referenced (call
it .crt), let the browser connect to its server using https (for that file
GET only). Presumably, if the browser knows the mime type of .crt, it
populates the Accept header with something suitable.

 

What I want is that the validation agent only kicks off when it receives a
particular Accept header (induced by a containing page reference that forced
population of that Accept header on the resource retrieval attempt).
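
(A toy server-side sketch of what I mean - the media type
"application/x-webid-crt" is made up for illustration, and TLS handling is
omitted to keep it short:)

    # Toy sketch: the resource server only kicks off the webid validation
    # agent when the request carries the special Accept header that the
    # containing page's .crt reference induced.  The media type
    # "application/x-webid-crt" is made up; real TLS handling is omitted.
    from wsgiref.simple_server import make_server

    WEBID_TRIGGER_TYPE = "application/x-webid-crt"

    def app(environ, start_response):
        accept = environ.get("HTTP_ACCEPT", "")
        if WEBID_TRIGGER_TYPE in accept:
            # ... here the validation agent would run the webid protocol
            # over the https connection and note the resulting SSL
            # sessionid for later authorization decisions ...
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"webid validation triggered\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ordinary resource\n"]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()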

 

Webid protocol would then run (and set up an SSL sessionid), but https would
not be protecting the connections to other page elements. As one moves
through a site, the SSL sessionid (due to webid protocol) can still guard
access using an authorization logic.

 

What this allows is both classical client authn (using smartcards, in DOD
land) and webid client authn. Now, it is easy for the site to maintain 2
distinct SSL sessions: 1 with CAs controlling the selection of certs (which
hits the smartcard/eID) and 1 which leverages webid.

 

Those SSL connections on the same site supervised by the smartcard/eID SSL
sessionid obviously leverage the smartcard/eID's crypto, doing SSL
connections that offer channel encryption using the *assured* crypto of the
card (and applying CA-based cert chain authn merely to protect the channel's
encryption SA).

 

Those SSL connections on the same site supervised by the webid SSL sessionid
are distinct, influencing "login" authentication and "web sessions" -
driving an authorization engine (perhaps based on federated social network
conceptions).

 

 

 

Received on Sunday, 27 February 2011 22:11:46 UTC