Re: Reiterating the Key Ownership/SOP Issue

On 2014-01-15 16:09, Jeffrey Walton wrote:
> On Wed, Jan 15, 2014 at 4:48 AM, Anders Rundgren
> <anders.rundgren.net@gmail.com> wrote:
>> Trying to shed some light on this thorny issue...
>>
>> Using HTTPS with CCA (Client Certificate Authentication) you can indeed [in theory]
>> authenticate to _any_ site on the Internet.  So why isn't that considered a problem?
>> Because the authentication is performed through _trusted_browser_code_ which also
> 
> Are they the same browsers that claim plain text HTTP is good, but
> HTTPS using opportunistic encryption via self-signed certificates is
> bad? Or the same browsers that consider RC4 "High Military Grade?" Or
> the same browsers that don't provide TLS above 1.0?

I'm only discussing this from an architecture perspective.

> 
>> involves the user in the final decision.  Issuers know this and trust the code and
>> (maybe) the users to do the right thing.

> Well, in an ideal world, the user would not have to make a decision
> because many will make the wrong decision. Peter Gutmann's Engineering
> Security spends a few chapters discussing how and why a user will make
> a bad decision.

Right, users make bad choices; this is about limiting the damage of a bad choice.

> 
>> Using SOP exceptions, however, certificates and keys would be available to any site on
>> the Internet (possibly also without user interaction).

> I disagree. At minimum, it violates the principle of least privilege
> because some unknown external party could be granted access to
> something they should not have access to.

I'm merely describing what others have proposed and why it is a bad idea.


>> No serious issuers accept that
>> "their" credentials are directly accessed by arbitrary code on the Internet.

> What attacks follow after the user goes through the client
> authentication gyrations? Perhaps they ask the user to enter his or
> her username/password to verify the account?

Since WebCrypto doesn't come with a GUI (try it if you don't believe me),
arbitrary code could impersonate a user completely without the user knowing it.
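
To make that concrete, here is a minimal sketch (the names and flow are mine,
for illustration only, not any issuer's actual code): the WebCrypto API itself
presents no dialog or certificate picker, so a script that can reach a signing
key can answer a challenge with no user interaction at all.

  // Minimal TypeScript sketch (assumptions: the page generated the key itself,
  // or was otherwise handed a usable CryptoKey; "challenge" is whatever the
  // relying party sent). Nothing in this flow raises a consent prompt.
  async function signChallengeSilently(challenge: string): Promise<ArrayBuffer> {
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      false,                 // private key non-extractable, but still usable
      ["sign", "verify"]
    );
    const data = new TextEncoder().encode(challenge);
    // The promise resolves with a signature; the user sees nothing.
    return crypto.subtle.sign(
      { name: "ECDSA", hash: "SHA-256" },
      keyPair.privateKey,
      data
    );
  }

The point is not this particular snippet, but that nothing in the API forces a
dialog the way the TLS CCA certificate picker does.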


>> It would be
>> like banks sanctioning payment card usage in "fake" payment terminals.  That's why some
>> issuers turned to plugins: _to_be_sure_keys_are_only_accessed_by_known_and_trusted_code_.

> *Cough*

It's easy to dismiss such ideas, but you are forgetting that the industry, as
represented by MSFT, GOOG, and AAPL, hasn't made secure credit-card payments
a reality on the web although it has had ample time to fix it.


> 
>> So there is (as I see it...) not really a "Key Ownership" issue,

> You have one of two owners (I believe). First is the user if he or she
> provisioned the key. Second is the other party (for lack of a better
> term) if they provisioned the key (either locally with the owner's
> permission or remotely and installed it).
> 
> Who gets to authorize usage? What if the user approved an external
> party to use the identity he generated and the external party quietly
> shares your identity information with a partner? Or what if the user
> authorized another site to re-use the Bank of England identity?
> 
> Who gets to revoke those keys? If the user provisioned it, is it only
> the user? If the Bank of England provisioned it, can both the user or
> the bank cancel it in an attempt to reduce fraud? Or is the bank the
> only party allowed to revoke the key? How do you even handle
> revocation?
> 
> The real issues will be the legal ones, and they probably won't be
> answered. Expect it to be a free-for-all with the users and relying
> parties losing.


Although interesting, I don't have any comments on this, or on the lines
below.  You seem to dislike PKI, and there's nothing I can do about that.
Your only option is probably to opt out.  Maybe Bitcoin is your thing?

Anders

> 
>> but a genuine security problem.

> Yes. For example, you *never* apply your secret or knowledge without
> authenticating the remote service.
> 
> * A key provisioned by you gets used by an external party.
> * A key provisioned by Bank of America gets used by an external party.
> 
> Even if the external party does not get to see the private key, they
> still get to see the effects of applying your secret to the challenge
> (or whatever mechanism is in place). Perhaps it's ECDSA with weak
> k values, so the private key is recovered.
> 
> And here's another adverse scenario:
> 
> * The external party gets to track you by just gaining knowledge of
> the public key part.
> 
> These are better than cookies because they will survive browser cache
> wipes and advertiser UUID resets.
> 
> And if there's identifying information in the certificate (like issued
> by Bank of America), then the attacker gains additional knowledge for
> a spear phishing attack.
> 
>> Addressing this through smart GUIs is IMHO not really useful
>> because _it_still_enables_naive_users_exposing_their_keys_to_arbitrary_web_code_,
>> not to mention the fact that certificate selection becomes quite awkward and
>> error-prone in the [likely] case you have more than one certificate.

> Yes.
> 
>> The X.509 domain indicator extension which Samuel Erdtman suggested would limit
>> key access to a _single_site_ (or a set of sites), which IMO could actually work,
>> but this concept has to date not received any support.

> Not surprising. They want to run fast and loose.
> 
> Jeff
> 
