W3C home > Mailing lists > Public > public-webcrypto@w3.org > November 2012

Re: Unique identifiers and WebCrypto

From: Seetharama Rao Durbha <S.Durbha@cablelabs.com>
Date: Sat, 10 Nov 2012 21:07:39 -0700
To: Mark Watson <watsonm@netflix.com>
CC: Thomas Hardjono <hardjono@mit.edu>, Wan-Teh Chang <wtc@google.com>, "public-webcrypto@w3.org Group" <public-webcrypto@w3.org>
Message-ID: <CCC47193.814A%s.durbha@cablelabs.com>
On 11/9/12 4:29 PM, "Mark Watson" <watsonm@netflix.com> wrote:


On Nov 9, 2012, at 3:14 PM, Seetharama Rao Durbha wrote:

On 11/9/12 3:01 PM, "Mark Watson" <watsonm@netflix.com> wrote:


On Nov 9, 2012, at 1:36 PM, Seetharama Rao Durbha wrote:

Thomas
I think it is not a question of privacy; the issue (in my mind) is one of providing access to keys based on the same-origin policy (SOP), which can be defeated by a MITM. That is a legitimate concern, and I previously proposed that we mandate TLS for loaded scripts that need access to key storage.

http://cablelabs.com and https://cablelabs.com are different origins and so would have different keys. You might not provision a key for the http:// origin at all.

And I think browser implementations should not allow key storage access at all to http://cablelabs.com – as it is not loaded using HTTPS.
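The scheme is part of the web origin, so the two URLs really are distinct origins with distinct key stores. A minimal Node.js sketch (illustrative only; the key-storage API under discussion is not involved):

```javascript
// The scheme is part of a web origin, so the http:// and https://
// versions of the same host are distinct origins, and origin-scoped
// keys provisioned under one are invisible to the other.
const insecure = new URL('http://cablelabs.com/').origin;
const secure = new URL('https://cablelabs.com/').origin;

console.log(insecure); // "http://cablelabs.com"
console.log(secure);   // "https://cablelabs.com"
console.log(insecure === secure); // false
```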


When you see something signed with the https://cablelabs.com key you know it was sent by code from the UA-verified origin https://cablelabs.com. You don't know, though, what the root of trust was for that verification (i.e. root certificate) or indeed how reliable the UA's verification is (unless you know something else about the UA).

ekr's proposal for a signedByOrigin method, with the certificate chain, would tell you the root of trust that the UA claims to have used.

At some point, you have to trust the browser. Who will be displaying the details from the cert chain for the user to inspect? The browser. So you are already trusting the browser to correctly and truthfully display the cert chain; why not trust the browser to verify the server cert as well? We may require EV certs, however. As I mentioned earlier, users are already trusting browsers with a lot of data.

The user trusting the browser and the service provider trusting the browser are different things.

When we have a pre-provisioned key, this often tells the service provider something about the device it was pre-provisioned into. For example if that key was originally put into a closed-platform TV with only one browser, then maybe the service provider can trust the browser more than if it was a generic linux box on which any browser could run.

Let us not confuse the situation here by bringing in implementation factors that will influence server treatment of keys. Our focus here is how (and to which app) the browser provides access to keys on the client side.


[Note that I use phrases like "trust" and "more" only for ease of understanding - what I mean is that the service provider can distinguish between scenarios with different security properties and make authorization decisions accordingly].

…Mark


--Seetharama


…Mark


That will be a good discussion to have.

Thanks,
Seetharama

On 11/9/12 11:20 AM, "Thomas Hardjono" <hardjono@mit.edu> wrote:


Hi Seetharama,

My apologies for partially starting this privacy thread.

For the sake of clarity: if, say, this WebCrypto specification does NOT
include KeyStorage (or any key-storing capability), would the WG be
confident that none of the APIs can be used/abused to "violate user
privacy"? (I use the quotes because of the broad interpretations of
privacy.)

I ask because I'm almost sure this would be one of the questions posed
upon the publication of this spec.

Thanks and apologies again.

/thomas/


__________________________________________

From: Seetharama Rao Durbha [mailto:S.Durbha@cablelabs.com]
Sent: Thursday, November 08, 2012 5:39 PM
To: Thomas Hardjono; Mark Watson; Wan-Teh Chang
Cc: public-webcrypto@w3.org Group
Subject: Re: Unique identifiers and WebCrypto

I, again, feel that privacy is being brought into the conversation of
pre-provisioned keys in an unrelated way.

Recognize that a single device may come with a number of different
applications, each with its own pre-provisioned key. A blu-ray
player can come with a Netflix app as well as an Amazon app – with
totally different keys. When we talk about authorization, we are
talking about the user authorizing the Netflix app to access its key,
and the Amazon app to access its own key. These keys have nothing to
do with the device identifier.

These keys are not the same as a TPM cert, or the UID of Apple
devices – which are unique per device.

I do not understand how this becomes privacy-related. Recognize that
the service accessed by the user already has so many avenues to
collect data on them – they know how many simultaneous streams you
have, from which locations (by IP address), your viewing history, your
preferences, and heck your credit card, address, phone number, and so
on. Why are we talking about keys as somehow opening up the user's
treasure chest?

On 11/8/12 12:59 PM, "Thomas Hardjono" <hardjono@mit.edu> wrote:


-----Original Message-----
From: Mark Watson [mailto:watsonm@netflix.com]
Sent: Thursday, November 08, 2012 2:47 PM
To: Wan-Teh Chang
Cc: Thomas Hardjono; Seetharama Rao Durbha; public-webcrypto@w3.org Group
Subject: Re: Unique identifiers and WebCrypto
On Nov 8, 2012, at 11:34 AM, Wan-Teh Chang wrote:
On Thu, Nov 8, 2012 at 11:27 AM, Mark Watson <watsonm@netflix.com> wrote:

My objective with the feature in question here is that the privacy
implications be no worse than (and hopefully better than) cookies and
web storage. One aspect in which the situation is better is that
users have very little idea what a site will use cookies and web
storage for when they give permission. Giving a site permission to
access an (origin-specific) device identifier is arguably easier to
understand.

If I understand it correctly, the perceived problem with an
origin-specific device identifier is that it is "read only" and
cannot be deleted by the user.
Well, UAs may choose to allow users to delete the identifier. From
the site's point of view that's indistinguishable anyway from the
site not being authorized by the user to see it. The issue is that if
you delete such an identifier, services that need it may not work any
more and users need to be warned about that. On a TV this would be a
"permanently disable service X" button. Personally I would happily
use that feature on certain TV channels ;-)

On the other hand, the user can effectively change the device
identifier by getting a new device,

Depending on device implementation, it may be able to change its
device identifier at user request.

whereas an (origin-specific) user identifier, such as my Yahoo Mail
account and Amazon.com account, usually lasts much longer than the
lifetime of a device. So it's not clear to me if a device identifier
has more serious privacy issues.

Wan-Teh

I may be way off, but isn't this precisely the challenge of
privacy-preserving identity:
(a) how a user-selected identifier can be bound (and unbound) by the
user to a service-issued identifier;
(b) how the user can select a new identifier and re-bind it to an old
service-issued identifier;
(c) how to do (a) and (b) with the assurance that neither the UA nor
the service is keeping track of the bindings.


/thomas/
Received on Sunday, 11 November 2012 04:08:08 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:17:14 UTC