Re: Unique identifiers and WebCrypto

From: Mark Watson <watsonm@netflix.com>
Date: Thu, 8 Nov 2012 19:27:14 +0000
To: Thomas Hardjono <hardjono@MIT.EDU>
CC: Seetharama Rao Durbha <S.Durbha@cablelabs.com>, "public-webcrypto@w3.org Group" <public-webcrypto@w3.org>
Message-ID: <319370FB-3B6D-44F1-90C3-4B480CEE0A8B@netflix.com>

On Nov 8, 2012, at 9:44 AM, Thomas Hardjono wrote:

>> From: Mark Watson [mailto:watsonm@netflix.com]
>> Sent: Wednesday, November 07, 2012 12:07 PM
>> To: Seetharama Rao Durbha
>> Cc: public-web-and-tv@w3.org WG; public-webcrypto@w3.org Group
>> Subject: Re: Unique identifiers and WebCrypto
>> On Nov 5, 2012, at 2:20 PM, Seetharama Rao Durbha wrote:
>> Mark
>> I certainly agree that support for pre-provisioned keys
>> (symmetric/otherwise) is a large part of the expectations from people
>> interested in delivering content to devices.
>> However, the user authorization of keys that come embedded in the
>> device (at manufacturing time or delivered through a trusted means -
>> out of scope) should be handled in a user-friendly way. The best-case
>> scenario is that the user is NOT asked for permission on such devices
>> (the user has no clue what a 'key' is, what this identifier is, or
>> what they are authorizing). Rather, the browser can rely on a
>> configuration file present on the device.
>> Seetharama,
>> In a W3C context, as a matter of basic privacy principles, users
>> should be aware when a service is able to track their device. We
>> should make it clear in the specification when we include features
>> that could facilitate that (in fact I think it's important that we
>> are very transparent and up-front about that).
>> We don't, in W3C specifications, define *how* users are made so
>> aware, either for browsers or devices. It is always up to the browser
>> developer, device manufacturer and service to do that in a
>> user-friendly way.
>> When we are talking about pre-provisioned keys on hardware tokens
>> accessed from other platforms, we need to be clear about a
>> chicken-and-egg situation for permissions. The web application cannot
>> ask for permission to access a key that it does not know exists. So,
>> the browsers need to let the web app know that certain keys exist
>> with the user (though they cannot perform any operation on them). So,
>> the wording around how browsers let the web apps know of the
>> existence of a key should be made clear.
>> For keys on tokens like smart cards, there is indeed a key
>> discovery problem, which we don't yet have a detailed proposal for.
>> There would need to be some way for the web application to ask for a
>> list of keys with certain properties. User permission would be asked
>> for before exposing the list to the application, and if user
>> permission is denied this should be indistinguishable from the key
>> not existing (as far as the web application is concerned).
>> For pre-provisioned origin-specific keys of the type I am most
>> concerned about, we would expect these to have well-known identifiers
>> (at least, well-known to the origin in question) and so the discovery
>> problem is solved that way.
>> .Mark
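The key-discovery behavior described in the quoted message above can be sketched roughly as follows. Everything here is hypothetical: no such discovery function exists in the Web Cryptography API, and the names (`discoverKeys`, `askUserPermission`, the key-store shape) are invented purely to illustrate the requirement that a denied permission be indistinguishable from the key not existing.

```javascript
// Hypothetical sketch only -- not part of any WebCrypto spec or proposal.
// Simulated per-origin store of pre-provisioned key descriptors.
const keyStore = new Map([
  ["https://video.example", [{ id: "device-key-1", usages: ["sign"] }]],
]);

// Simulated user-permission prompt; the `granted` flag stands in for
// whatever UI the browser/device vendor chooses (out of scope for W3C).
function askUserPermission(origin, granted) {
  return granted;
}

// Discovery: the app asks for keys matching certain properties.
// If the user denies permission, the result is an empty list --
// exactly what a device with no such keys would return, so the two
// cases cannot be told apart by the web application.
function discoverKeys(origin, granted) {
  if (!askUserPermission(origin, granted)) {
    return [];
  }
  return keyStore.get(origin) || [];
}
```

For example, `discoverKeys("https://video.example", false)` and `discoverKeys("https://no-keys.example", true)` both return an empty list, which is the indistinguishability property the message asks for.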
> Hi Mark & Seetharama,
> Apologies for jumping in late, and apologies if this is tangential. 
> I absolutely understand the "high-value content delivery" use-case
> that many content delivery networks/distributors and studios wish to
> address. Many folks equate DRM with "user-tracking", though I believe
> these are distinct use-cases. Many studios wish to distribute new
> movies directly to home devices attached to HD TVs (n.b. this makes
> sense when revenue from movie theaters is declining). So a strong
> binding (i.e. a crypto-binding) needs to be established between the
> end-point device and the content delivery channel.
> With regards to privacy, the Trusted Computing Group (TCG) built a
> feature called DAA (Direct Anonymous Attestation) into the TPM chip
> (TPM v1.2 onwards). This was in part to answer the demands of some
> European governments who had concerns about privacy issues around the
> TPM.
> The DAA is a zero-knowledge protocol. After the user enables *and*
> activates the TPM (yes, a 2-step user-authorization process where the
> user has to be physically in front of the PC/device), the TPM uses one
> of its manufacturer pre-provisioned certs (the AIK cert) for the DAA
> protocol handshake against the Privacy Certificate Authority (PCA).
> The PCA is the "trusted entity" which is trusted not to reveal the
> various fields within the X.509 manufacturer pre-provisioned cert.
> The result is a kind of "blinded" certificate that the PC/device can
> then use with entities on the Internet. The matching private key
> resides in the TPM, and the user will need to supply an
> authorization/unlocking password to use it. One TPM can be associated
> with many (an unlimited number of) DAA certificates. So if the user
> feels he/she needs a fresh one, they can simply delete the old one
> and get a new one.
> I'm not sure if "privacy" is in scope for this WG. If it is, then
> nothing short of a zero-knowledge protocol (i.e. one whose design can
> be validated by crypto experts) will be acceptable to the privacy
> advocates and the broader privacy community. (That's my current
> opinion, having been involved recently with the privacy WG in NSTIC.)

My objective with the feature in question here is that the privacy implications be no worse than (and hopefully better than) those of cookies and web storage. One aspect in which the situation is better is that users have very little idea what a site will use cookies and web storage for when they grant permission. Giving a site permission to access an (origin-specific) device identifier is arguably easier to understand.

There is certainly some wider discussion required (outside the WG) to achieve the above objective.


> Thanks.
> /thomas/
Received on Thursday, 8 November 2012 19:27:43 UTC
