RE: issue of initiating client auth for parallel SSL sessionids

See RFC 3820, the X.509 Proxy Certificate Profile [1]. The term isn't
overloaded here - it's the first result in the big three search engines.
The impact of proxy certificates, if used for a MITM SSL proxy, is that
they put the onus of validating/understanding proxy certificates on the
relying party (Validation Agent), rather than on the proxy. They only
work for sites that are configured to accept them (as part of client
certificate processing). This may or may not be acceptable for the
protocol at large, but it shows how one might deal with the problem.
It's not a solution I'm necessarily advocating, but given the concern
about MITM proxies and the (scary) idea of storing the WebID private key
on the proxy itself, I was wondering whether it had been broached yet. A
nice further read about them is at [2].
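
To make the delegation model concrete, here's a rough sketch using
Python's "cryptography" package. The names, lifetimes, and the raw-DER
ProxyCertInfo encoding are my own illustration, not a conformant RFC
3820 implementation:

    # Sketch: the end entity's own key signs a short-lived "proxy"
    # certificate for a delegate, per the RFC 3820 model.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The user's long-lived key (freshly generated here for the demo).
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    user_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Alice")])

    # The delegate gets its own keypair; the user's key is never shared.
    proxy_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # RFC 3820: the proxy's subject is the issuer's name plus one CN.
    proxy_name = x509.Name(
        list(user_name) + [x509.NameAttribute(NameOID.COMMON_NAME, u"proxy")])

    # ProxyCertInfo (OID 1.3.6.1.5.5.7.1.14) carrying the
    # id-ppl-inheritAll policy, hand-encoded as DER because the library
    # has no native class for it.
    proxy_cert_info = x509.UnrecognizedExtension(
        x509.ObjectIdentifier("1.3.6.1.5.5.7.1.14"),
        bytes.fromhex("300c300a06082b06010505071501"))

    now = datetime.datetime.utcnow()
    proxy_cert = (
        x509.CertificateBuilder()
        .subject_name(proxy_name)
        .issuer_name(user_name)      # issued by the end entity itself
        .public_key(proxy_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=12))  # short-lived
        .add_extension(proxy_cert_info, critical=True)
        .sign(user_key, hashes.SHA256()))

The point is just that the delegation chain terminates at the user's own
certificate, so the relying party - not the proxy - decides whether to
honor it.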

I'm not sure what you meant by ephemeral certificates - and I'm
especially confused by what you mean by "in SSL ciphersuites whose
cipher nature exploits them". Are you talking about the ephemeral cipher
suites, such as those that offer perfect forward secrecy by negotiating
an ephemeral Diffie-Hellman key and authenticating that ephemeral key
via RSA/DSA/ECDSA? If so, that has little to do with the certificate -
it's purely cipher suite selection - and I can't see how it relates at
all to WebID or its needs. Or were you talking about issuing short-lived
certificates from some long-lived private key? If so, the Proxy
Certificate Profile is designed for exactly that - no homegrown
certificates needed.
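
To illustrate the first reading, a minimal sketch with Python's standard
ssl module (the file names and cipher string are my choices, nothing
WebID-specific): the certificate and RSA key stay long-lived, while
restricting the suites makes the key exchange ephemeral.

    # Forward secrecy comes from cipher-suite selection, not from the
    # certificate: the RSA cert/key below are long-lived, but ECDHE/DHE
    # suites negotiate a fresh ephemeral key for every connection.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem", "server.key")  # ordinary RSA cert
    ctx.set_ciphers("ECDHE+AES:DHE+AES")             # ephemeral KX only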

As for why "most folks" are pretty set on using RSA - deployment
compatibility. As for RC4, at least for the large sites, it offers a
balance between "good enough" security and an optimized network
experience. RC4 is a stream cipher, not a block cipher, so there is no
additional padding in the TLS records. For sites that aren't "super
s3kr3t", whose use of HTTPS is to prevent attacks of opportunity/wifi
sniffing, the padding of TLS records under block ciphers (like AES) can
have a noticeable impact on the responsiveness of the site, for security
assurances that aren't necessarily needed. And for the smaller sites,
it's just that SSL is hard enough for them to understand, and secure
deployment is asking a lot - the same problem that browser vendors see
every day when designing security interfaces.
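
To put rough numbers on the padding point, a back-of-the-envelope sketch
(TLS 1.1/1.2-style record layouts with HMAC-SHA1; the helper names are
mine):

    # Approximate per-record overhead: RC4 (stream) vs. AES-128-CBC,
    # both with HMAC-SHA1.
    MAC = 20      # HMAC-SHA1 output
    IV = 16       # explicit CBC IV (TLS 1.1+)
    BLOCK = 16    # AES block size

    def rc4_record(n: int) -> int:
        # Stream cipher: ciphertext = plaintext + MAC, no padding.
        return n + MAC

    def aes_cbc_record(n: int) -> int:
        # CBC: pad (plaintext + MAC + 1 pad-length byte) up to a block
        # boundary, then prepend the explicit IV.
        padded = -(-(n + MAC + 1) // BLOCK) * BLOCK
        return IV + padded

    for n in (100, 1000, 1400):
        print(n, rc4_record(n), aes_cbc_record(n))  # 100 -> 120 vs 144

A handful of bytes per record, but it compounds across the many small
records of a chatty site.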

All that said, if the WebID protocol continues to use TLS client
authentication, then it must be expected/known that transparent SSL
proxies won't work. The advice from vendors of such products (such as
Bluecoat, Microsoft's Forefront TMG, etc.) is: if you need to perform
TLS client auth, add the site to the exclusion list of sites that are
not transparently filtered [3]. This is because such transparent proxies
are knowingly "breaking" the protocol, and client auth is one area where
they're especially broken.

If the WebID protocol needs to work through such (malicious) proxies
without requiring the proxies to be modified - which seems implied if
WebID is meant to be deployed cheaply and widely - the options I see
are:

1) Don't use TLS client authentication. Use some other means,
independent of TLS, for identification, although presumably still
securing the entire request/response with TLS (see the sketch after
this list).

2) Work with the vendors to define some new protocol for allowing
semi-transparent TLS interception while performing client auth. Good luck
with that.
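
For option 1, a purely hypothetical sketch - the header name, the
challenge flow, and sign_challenge are all invented for illustration,
not an existing proposal: the client proves possession of its WebID key
at the HTTP layer, so a proxy that re-terminates TLS no longer breaks
the authentication.

    # Hypothetical: prove possession of the WebID private key at the
    # HTTP layer instead of via TLS client auth. Header name and flow
    # are invented for illustration.
    import base64
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def sign_challenge(key_pem: bytes, webid: str, nonce: bytes) -> str:
        """Sign a server-issued nonce with the WebID key and format it
        as a made-up Authorization header value."""
        key = serialization.load_pem_private_key(key_pem, password=None)
        sig = key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())
        return 'WebID-Challenge webid="%s", sig="%s"' % (
            webid, base64.b64encode(sig).decode("ascii"))

    # The server then dereferences the WebID, pulls the public key from
    # the profile, and verifies the signature over the nonce it issued -
    # all inside an ordinary TLS session that a proxy may have
    # re-terminated.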

Hope that helps,

[1] http://www.ietf.org/rfc/rfc3820.txt

[2] http://security.ncsa.illinois.edu/research/wssec/gsihttps/

[3]
http://blogs.technet.com/b/isablog/archive/2009/10/19/common-problems-while-implementing-https-inspection-on-forefront-tmg-2010-rc.aspx

From: peter williams [mailto:home_pw@msn.com] 
Sent: Sunday, February 27, 2011 6:38 PM
To: 'Ryan Sleevi'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

My advice is: explain proxy certs.

I've tried to introduce ephemeral certs (in SSL ciphersuites whose
cipher nature exploits them). But most folks are pretty set in their
thinking, doing 1990s-era https with just classical RSA and RC4 stream
ciphering.

And I've tried hard to introduce SSL MITM proxies (client side, or
reverse) as a posed threat - to "just" the secure communications aspects
of the webid protocol (never mind caching, interference, etc.).

TBH, I don't know what you mean by proxy certs, since the term "proxy" is so
overloaded.

I spent the last hour or two making "proxy certs" in gnutls, which
seemed to be about some old experiments in delegation and
computable/composable policy expressions stuffed in a cert extension.
This seems to align with your text. If so, no - it's not been a topic of
discussion.

We have touched on the topic of having "javascript" in a cert extension
(rather than some policy language), and we have touched on dumping
X.509/ASN1/DER/PKIX and just using json-signed/encoded datums instead.

But I think there is some receptivity to saying: webid might leverage
signed json/javascript certs should they exist (since they are "so
webby"). But they don't really exist yet. The history of the movement is
tied to the goal of working with actual browsers over the last 5 years
(which ties one to X.509). If signed javascript/json came fast, I think
it might be a different group.
