RE: issue of initiating client auth for parallel SSL sessionids

I have not released my (failed) PhD dissertation on "UCI"
cross-certification - which combined what we would now call the OpenID
thesis (check that the user has write access to a file at identifier I),
write access to a (directory) entry, and the peering of entities in
different namespaces and security domains issuing cross-certificates to
each other (e.g. the UK and Germany, which was the actual test case, in
about 1990).

 

But, ignoring what doesn't exist on the web until I bother to scan my own
old crap, when I read RFC 3820 I see how it tries to carve out a space -
in which it attempts to argue: an EE is not violating the CA's ban on an
EE issuing certs when it (the EE) issues a proxy cert. I see how it's
trying to be OAuth (as we would call it, today).

 

(I'm getting just an inkling of a really old memory as I write, that perhaps
I have read the proxy cert RFC before now, now that I see the ANL author
refs, etc.)

 

Now I also recall losing a few million dollars personally in ValiCert (an
IPO gone bad), which promoted the OCSP model (third parties validate
certs, using some criteria). For some reason that I didn't agree with, folks
decided to go to the IETF and seek PKIX WG ratification of the notion that an
OCSP responder would "speak for" a CA's repository of certs - when attesting
to a cert's current validity (think WebID protocol, now!). Under great
pressure, the architect (who took it to the IETF and the usual DARPA/NSA lot
in PKIX) agreed to let an OCSP signer cite a CA "delegation/proxy" cert in
its signing cert path - a path that attested to the responder's "right" to
speak for the CA's repository of certs (when leading/misleading relying
parties about cert validity).
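
For concreteness, here is a minimal sketch (in Python, with the
pyca/cryptography library - my choice of tooling, nothing the thread
mandates) of the delegated-responder check RFC 2560 ended up with: the
signer is acceptable if its cert was issued by the CA in question and is
explicitly marked for OCSP signing. File names are hypothetical, and
signature verification of the responder cert itself is omitted.

# Minimal delegated-responder check, per the RFC 2560 delegation model.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID


def is_delegated_responder(responder_cert: x509.Certificate,
                           ca_cert: x509.Certificate) -> bool:
    # The delegation cert must be issued by the CA whose certs it speaks for.
    if responder_cert.issuer != ca_cert.subject:
        return False
    # ...and must be explicitly marked for OCSP signing via the EKU.
    try:
        eku = responder_cert.extensions.get_extension_for_class(
            x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return False
    return ExtendedKeyUsageOID.OCSP_SIGNING in eku


with open("responder.pem", "rb") as f:
    responder = x509.load_pem_x509_certificate(f.read())
with open("ca.pem", "rb") as f:
    ca = x509.load_pem_x509_certificate(f.read())

print(is_delegated_responder(responder, ca))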

 

Somehow, the OCSP standard finally issued as an RFC did not actually MANDATE
that such a rights-cert exist, "authorizing" the OCSP responder to exist and
then speak. A responder *could* exist alternatively, and legitimately, as an
independent channel (independent as in Markov chain). But, in reality, the
IESG was saying that only those "authorized" to speak for the CA were "part"
of the internet PKI, as represented by a proxy/delegation cert. Ignore the
rest, dear citizen. If you are some weirdo case, perhaps accept an
independent signer as the trust anchor for such assertions (as used in a
half+ billion IE3/IE4/IE5/IE6 browsers, conveniently ignored).

 

As I read the proxy cert RFC, I'm reminded of all those disputes. They seem
very IETF, and thus not W3C.


From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of peter williams
Sent: Sunday, February 27, 2011 5:07 PM
To: 'Ryan Sleevi'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

This is good. Whether common browsers and proxies actually do proxy certs
in reality is something you should help us with (I don't know, never having
touched them). To date, we were very focused on simply doing better than
basic auth over https (and not being quite as demanding as OpenID/SAML).
This means using whatever the commodity browser actually does. Remember the
goal: fewer passwords to remember, be slightly better than basic auth, don't
fall into the traps that the OpenID community fell into.

 

I have a very simple model of ephemeral certs. Once, using American-sourced
browser software, one was not allowed to do >512-bit RSA (if you were some
inherently untrustworthy foreigner, like me). Though I live in the USA,
American folks could not (strictly speaking) give me a browser build capable
of doing RSA for key agreement if its keying applied RSA moduli > 512 bits.
With special dispensation from the State Dept. - which I never got - software
firms might get their foreign employees who were actually writing the RSA
code :) special legal permission to see their own code.

 

OK. That bizarre world of unlogic is no more; like institutional racism, it
is no more.

 

But, that era left us with an (unused) apparatus of control - one that we
might apply for more positive and logical purposes.

 

Let's recall, in the unlogical days pre-2000, that an RSA keypair/cert would
be minted by the server on the fly to address those export rules, and it
would be signed using the server's RSA key (duly supported by a
VeriSign-issued cert, of the kind I used to see being minted, personally, for
folks like titties.com (actually a few tens of thousands of variants),
whitehouse.gov, and the Vatican). The SSL handshake would then occur with the
(nasty, evil, foreign) browser at 512-bit keys, or less. Presumably, I was
being spied on more easily than otherwise.
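
For anyone who never saw it, a sketch of those mechanics (Python with the
pyca/cryptography library, purely illustrative; the real SSL3/TLS RSA_EXPORT
ServerKeyExchange signs MD5+SHA1 hashes over the hello randoms plus the
params, which this abbreviates):

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Long-lived server key - the one the CA actually certified.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Ephemeral export-grade key, minted on the fly for this handshake.
ephemeral_key = rsa.generate_private_key(public_exponent=65537, key_size=512)

# Serialize the ephemeral public key and sign it with the long-lived key,
# so the client can check the weak key really came from the certified server.
ephemeral_pub = ephemeral_key.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo)
signature = server_key.sign(ephemeral_pub, padding.PKCS1v15(), hashes.SHA1())

# The client verifies against the cert's key, then does key exchange with
# the 512-bit key - deliberately weak enough to be spied on.
server_key.public_key().verify(signature, ephemeral_pub,
                               padding.PKCS1v15(), hashes.SHA1())
print("ephemeral key parameters verified")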

 

When I look at all that today, I say: it was a nice enforcement system. It
was a classical assurance technique (enforcing a policy, based on
certs/encryption as an access control mechanism - as any student of formal
assurance regimes studies in the better govt. cipher schools). So, how might
one apply it - for other control goals - ones that support the web (rather
than merely compartmentalize the world into Americans and "otherwise")?

 

My gut tells me that there is something here (and I don't know what it is).

 

Maybe it's the fact that OAuth really successfully addressed the multi-site
app concept (see Facebook apps and Twitter concepts, in which a server can
speak for a client unto other servers) that makes me think: perhaps
gnutls-style (RFC 3820?) "proxy certs", when issued as ephemeral certs
bearing a cert extension in some web-friendly language (JSON/JavaScript),
might be useful to our wider goals here.
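
To make that concrete, a sketch of what such a thing could look like
(Python, pyca/cryptography; the policy OID and field names are invented for
illustration - a real RFC 3820 ProxyCertInfo extension carries a DER-encoded
ProxyPolicy, not raw JSON):

import json
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, ObjectIdentifier
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

JSON_POLICY_OID = ObjectIdentifier("1.3.6.1.4.1.99999.1")  # hypothetical arc

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
proxy_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

user_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"alice")])
# RFC 3820 naming: the proxy subject is the issuer's name plus one more CN.
proxy_name = x509.Name(list(user_name) + [
    x509.NameAttribute(NameOID.COMMON_NAME, u"proxy-1")])

policy = json.dumps({"delegate": "https://alice.example/#me",
                     "rights": ["read"]}).encode()

now = datetime.datetime.utcnow()
proxy_cert = (x509.CertificateBuilder()
    .subject_name(proxy_name)
    .issuer_name(user_name)          # the EE, not a CA, is the issuer
    .public_key(proxy_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=12))   # ephemeral
    .add_extension(x509.UnrecognizedExtension(JSON_POLICY_OID, policy),
                   critical=False)
    .sign(user_key, hashes.SHA256()))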

 

Remember, it's not a WG. It's an incubator.


From: Ryan Sleevi [mailto:ryan@sleevi.com] On Behalf Of Ryan Sleevi
Sent: Sunday, February 27, 2011 4:38 PM
To: 'peter williams'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

See RFC 3820, X.509 Proxy Certificate Profile [1]. No overloaded term, first
result in the Big 3 search engines. The impact of proxy certificates, if
used for a MITM SSL proxy, is that it puts the onus of
validating/understanding proxy certificates onto the relying party
(Validation Agent), rather than on the proxy. They only work for sites which
are configured to accept them (as part of client certificate processing).
This may or may not be acceptable for the protocol at large, but it shows
how one might deal with the problem. It's not a solution I'm necessarily
advocating as a good one, but given the concern for MITM proxies and the
(scary) idea of storing the WebID private key on the proxy itself, I was
wondering if it had been broached yet. A nice further read about them is at
[2].
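
As a rough illustration of what lands on the relying party, something like
this (Python, pyca/cryptography, assuming an RSA-signed chain; real
validation also needs the ProxyCertInfo policy and path-length checks, all
omitted here):

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives.asymmetric import padding


def check_proxy(proxy: x509.Certificate, ee: x509.Certificate) -> bool:
    # The proxy cert must name the end entity, not a CA, as its issuer.
    if proxy.issuer != ee.subject:
        return False
    # RFC 3820 naming rule: subject = EE subject + exactly one extra CN.
    rdns = list(proxy.subject)
    if rdns[:-1] != list(ee.subject):
        return False
    if rdns[-1].oid != NameOID.COMMON_NAME:
        return False
    # Verify the EE key actually signed the proxy cert.
    try:
        ee.public_key().verify(proxy.signature, proxy.tbs_certificate_bytes,
                               padding.PKCS1v15(),
                               proxy.signature_hash_algorithm)
    except Exception:
        return False
    return True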

 

I'm not sure what you meant by ephemeral certificates - and I'm especially
confused by what you mean by "in SSL ciphersuites whose cipher nature
exploits them". Are you talking about the ephemeral cipher suites, such as
those that offer perfect forward secrecy by negotiating an ephemeral
Diffie-Hellman key and authenticating said ephemeral key via
(RSA/DSA/ECDSA)? If so, then it has little to do with the certificate, just
the cipher suite selection, and I certainly can't see how that relates at
all to WebID or its needs. Were you talking about issuing short-lived
certificates from some long-lived private key? If so, then the Proxy
Certificate Profile is designed for just that - no (homegrown) certificates
needed.
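
To illustrate that orthogonality: the same cert/key serves either way, and
forward secrecy is just the cipher string (a minimal sketch with Python's
ssl module; file names hypothetical):

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")

# Only ephemeral key-agreement suites: the RSA key now merely *signs* the
# ephemeral (EC)DH share instead of decrypting the premaster secret.
ctx.set_ciphers("ECDHE:DHE:!aNULL:!eNULL")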

 

As for why "most folks" are pretty set on using RSA - compatibility in
deployment. As for RC4, at least for the large sites, it offers a balance
between "good enough" security and an optimized network experience. RC4 is a
stream cipher, not a block cipher, so there is no additional padding in the
TLS records. For sites that aren't "super s3kr3t", whose use of HTTPS is to
prevent attacks of opportunity/wifi sniffing, the padding of TLS records
using block ciphers (like AES) can have a noticeable impact on the
responsiveness of the site, for security assurances that aren't necessarily
needed. And for the smaller sites, it's just because SSL is hard enough for
them to understand, and secure deployment is asking a lot - the same problem
that browser vendors see every day when designing security interfaces.
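
The per-record arithmetic, for anyone who wants to see it (a
back-of-the-envelope Python sketch assuming TLS 1.1+ CBC records with
HMAC-SHA1; exact numbers vary by suite):

MAC = 20          # HMAC-SHA1 tag length
BLOCK = 16        # AES block size / explicit IV length


def rc4_record_overhead(plaintext_len: int) -> int:
    return MAC                            # stream cipher: no IV, no padding


def aes_cbc_record_overhead(plaintext_len: int) -> int:
    payload = plaintext_len + MAC
    pad = BLOCK - (payload % BLOCK)       # 1..16 bytes, always present
    return BLOCK + MAC + pad              # explicit IV + MAC + padding


for n in (1, 100, 1400):
    print(n, rc4_record_overhead(n), aes_cbc_record_overhead(n))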

 

All that said, if the WebID protocol continues to use TLS client
authentication, then it must be expected/known that transparent SSL proxies
won't work. The advice from vendors of such products (such as Bluecoat,
Microsoft's Forefront TMG, etc.) is: if you need to perform TLS client auth,
add the site to the exclusion list of sites that are not transparently
filtered [3]. This is because such transparent proxies are knowingly
"breaking" the protocol, and client auth is one area where they're especially
broken.

 

If the WebID protocol needs to work through such (malicious) proxies without
requiring the proxies to be modified, which seems implied if WebID is meant
to be cheaply deployed widely, the options I see are:

1) Don't use TLS client authentication. Use some other means independent of
TLS for identification, although presumably still securing the entire
request/response with TLS.

2) Work with the vendors to define some new protocol for allowing
semi-transparent TLS interception while performing client auth. Good luck
with that.

 

Hope that helps,

 

[1] http://www.ietf.org/rfc/rfc3820.txt

[2] http://security.ncsa.illinois.edu/research/wssec/gsihttps/

[3] http://blogs.technet.com/b/isablog/archive/2009/10/19/common-problems-while-implementing-https-inspection-on-forefront-tmg-2010-rc.aspx

 

From: peter williams [mailto:home_pw@msn.com] 
Sent: Sunday, February 27, 2011 6:38 PM
To: 'Ryan Sleevi'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

 

My advice is to explain proxy certs.

 

I've tried to introduce ephemeral certs (in SSL ciphersuites whose cipher
nature exploits them). But most folks are pretty set in their thinking,
doing 1990s-era https with just classical RSA and RC4 stream ciphering.

 

And I've tried hard to introduce SSL MITM proxies (client-side, or reverse)
as a threat posed to "just" the secure communications aspects of the WebID
protocol (never mind caching, or interference, etc.).

 

TBH, I don't know what you mean by proxy certs, since the term "proxy" is so
overloaded.

 

I spent the last hour or two making "proxy certs" in gnutls, which seemed to
be about some old experiments in delegation and computable/composable policy
expressions stuffed in a cert extension. This seems to align with your text.
If so, no - it's not been a topic of discussion.

 

We have touched on the topic of having "javascript" in a cert extension
(rather than some policy language), and we have touched on dumping
X.509/ASN.1/DER/PKIX and just using JSON-signed/encoded datums instead.
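
Purely as a sketch of that second idea (Python, pyca/cryptography; the field
names are invented, and this hand-rolls a JWS-style token rather than
following any settled spec):

import json
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

claims = {"webid": "https://alice.example/#me", "exp": 1298858400}
payload = b64url(json.dumps(claims).encode())
header = b64url(json.dumps({"alg": "RS256"}).encode())

# Sign header.payload with RSASSA-PKCS1-v1_5 over SHA-256 ("RS256").
signing_input = f"{header}.{payload}".encode()
sig = key.sign(signing_input, padding.PKCS1v15(), hashes.SHA256())

token = f"{header}.{payload}.{b64url(sig)}"
print(token)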

 

But I think there is some receptivity to saying: WebID might leverage
signed JSON/JavaScript certs should they exist (since they are "so webby").
But they don't really exist yet. The history of the movement is tied to the
goal of working with actual browsers from the last 5 years (which ties one
to X.509). If signed JavaScript/JSON came fast, I think it might be a
different group.

 

Received on Monday, 28 February 2011 02:11:28 UTC