W3C home > Mailing lists > Public > public-webcrypto@w3.org > November 2012

Re: Unique identifiers and WebCrypto

From: Mark Watson <watsonm@netflix.com>
Date: Fri, 9 Nov 2012 20:57:27 +0000
To: Thomas Hardjono <hardjono@MIT.EDU>
CC: Wan-Teh Chang <wtc@google.com>, Seetharama Rao Durbha <S.Durbha@cablelabs.com>, "public-webcrypto@w3.org Group" <public-webcrypto@w3.org>
Message-ID: <17C14CEA-3967-4B4B-845C-712F77FB9C50@netflix.com>

On Nov 9, 2012, at 11:22 AM, Thomas Hardjono wrote:

>> -----Original Message-----
>> From: Mark Watson [mailto:watsonm@netflix.com]
>> Sent: Friday, November 09, 2012 1:29 PM
>> To: Thomas Hardjono
>> Cc: Wan-Teh Chang; Seetharama Rao Durbha; public-webcrypto@w3.org Group
>> Subject: Re: Unique identifiers and WebCrypto
>> On Nov 9, 2012, at 10:14 AM, Thomas Hardjono wrote:
>>>> -----Original Message-----
>>>> From: Mark Watson [mailto:watsonm@netflix.com]
>>>> Sent: Thursday, November 08, 2012 3:23 PM
>>>> To: Thomas Hardjono
>>>> Cc: Wan-Teh Chang; Seetharama Rao Durbha; public-webcrypto@w3.org Group
>>>> Subject: Re: Unique identifiers and WebCrypto
>>>> On Nov 8, 2012, at 11:59 AM, Thomas Hardjono wrote:
>>>>>> -----Original Message-----
>>>>>> From: Mark Watson [mailto:watsonm@netflix.com]
>>>>>> Sent: Thursday, November 08, 2012 2:47 PM
>>>>>> To: Wan-Teh Chang
>>>>>> Cc: Thomas Hardjono; Seetharama Rao Durbha; public-webcrypto@w3.org Group
>>>>>> Subject: Re: Unique identifiers and WebCrypto
>>>>>> On Nov 8, 2012, at 11:34 AM, Wan-Teh Chang wrote:
>>>>>>> On Thu, Nov 8, 2012 at 11:27 AM, Mark Watson <watsonm@netflix.com> wrote:
>>>>>>>> My objective with the feature in question here is that the privacy
>>>>>>>> implications be no worse than (and hopefully better than) cookies and
>>>>>>>> web storage. One aspect in which the situation is better is that
>>>>>>>> users have very little idea what a site will use cookies and web
>>>>>>>> storage for when they give permission. Giving a site permission to
>>>>>>>> access an (origin-specific) device identifier is arguably easier to
>>>>>>>> understand.
>>>>>>> If I understand it correctly, the perceived problem with an
>>>>>>> origin-specific device identifier is that it is "read only" and
>>>>>>> cannot be deleted by the user.
>>>>>> Well, UAs may choose to allow users to delete the identifier. From the
>>>>>> site's point of view that's indistinguishable anyway from the site not
>>>>>> being authorized by the user to see it. The issue is that if you delete
>>>>>> such an identifier, services that need it may not work any more and
>>>>>> users need to be warned about that. On a TV this would be a
>>>>>> "permanently disable service X" button. Personally I would happily use
>>>>>> that feature on certain TV channels ;-)
>>>>>>> On the other hand, the user can effectively change the device
>>>>>>> identifier by getting a new device,
>>>>>> Depending on device implementation, it may be able to change its
>>>>>> device identifier at user request.
>>>>>>> whereas an (origin-specific) user identifier, such as my Yahoo Mail
>>>>>>> account and Amazon.com account, usually lasts much longer than the
>>>>>>> lifetime of a device. So it's not clear to me if a device identifier
>>>>>>> has more serious privacy issues.
>>>>>>> Wan-Teh
>>>>> I may be way off, but isn't this precisely the challenge of
>>>>> privacy-preserving identity:
>>>>> (a) how a user-selected identifier can be bound (unbound) by the
>>>>> user to a service-issued identifier;
>>>>> (b) how the user can select a new identifier and re-bind it to an
>>>>> old service-issued identifier;
>>>>> (c) how to do (a) and (b) with the assurance that neither the UA nor
>>>>> the service is keeping track of the bindings.
>>>> Are you suggesting that all identifiers should have the above
>>>> properties? Or just that we should make identifiers with these
>>>> properties available to users and services?
>>>> If the former, how would you support a service which offered each
>>>> person a one-off one-month free trial? How would you detect fraud?
>>>> ...Mark
>>> Hi Mark,
>>> The above (a)-(c) is seen from the perspective of the end-user (one
>>> who is assumed to be familiar with the notion of pseudonyms or
>>> anonyms), so it's only one piece of the bigger picture.
>>> I believe the high-level model that some privacy advocates may be
>>> open to is the following (sorry, it's kinda long and rough):
>>> (1) I log on to an Identity Provider X (IdP-X) that I trust (e.g. whose
>>> legal trust framework I accept and who in turn is willing to take on
>>> liabilities :)
>>> (2) I obtain a pseudonymous identity from IdP-X (say
>>> JohnDoe[at]idpx.com) and a signed Assertion-X from IdP-X saying (i)
>>> that John Doe is a real human being, as vouched for by IdP-X, and (ii)
>>> that he is over 18 years old.
>>> (3) I use that pseudonym (or my real identity) to log on to a Payment
>>> Provider Y (e.g. PayPal) and present it with Assertion-X.
>>> (4) I request the Payment Provider to issue a signed Assertion-Y
>>> that John Doe (the subject stated in Assertion-X) is committed to pay
>>> $7 per month to Netflix for 1 year. If necessary the Payment Provider
>>> can act as a payment escrow.
>>> (5) I log on to Netflix using the above pseudonym John Doe, and
>>> present Netflix with both Assertion-X and Assertion-Y.
>>> (6) I agree to Netflix's request to install a DRM-capable code/client
>>> (and key blobs) in my browser (or even in my OS) for the purposes of
>>> watching movies.
>>> (7) Optionally I may agree to Netflix keeping track of my movie
>>> habits and sending me marketing offers.
>>> So from the above it's necessary that all the players in the ecosystem
>>> agree upon some legal basis (so-called legal "trust frameworks") so
>>> that one entity will accept signed assertions issued by another (and
>>> that those assertions will stand up in court).
>>> ps. from the security and content-protection perspective, DRM is a
>>> necessary technology (apologies to anti-DRM folks). The key aspect is
>>> to mask away true identities via anonyms or pseudonyms, but allow
>>> vendors to provide services and even allow them to obtain my
>>> de-identified marketing data (either raw or aggregated).
>> This is, I guess, a little off the original topic, but interesting
>> nonetheless.
>> My question was this: assume the above model, and note that Netflix
>> does not require a 1-year commitment, or even a 1-month commitment, but
>> just a verified method of payment, and we offer the first month free on
>> a one-off basis to each person, and you may cancel at any time. Then,
>> how do we detect free-trial fraud? i.e. when the same person comes
>> back and asks for a free trial every month using a different pseudonym?
>> The above model assumes that an individual can repeatedly present
>> themselves with a different pseudonym, with no way for the service
>> provider to know it's the same individual (that's an explicit goal,
>> right?). That's incompatible with our current business practice where
>> we ask people to provide some kind of (roughly) immutable identity in
>> return for getting a free trial - it can be the payment method (credit
>> card number) and it can also be their device identity if it has one.
>> They do get something of real value in return for sacrificing their
>> anonymity. Seems like a fair trade that the technology should support,
>> no?
>> .Mark
> Hi Mark,
> Absolutely agree with your business need. The whole identity
> ecosystem will not take off if businesses cannot make money. As it is
> today, no one is making money from identity provisioning/management
> and federation :-)
> Without tamper-resistant technology (like the TPM and smart cards), it's
> very challenging to get a scalable solution for your trial-customer
> scenario. Netflix could deposit DRM markers or traces in the browser
> or OS, but even these could be deleted by the user.
> We could use a Netflix-specific assertion (kinda klunky, needs more
> thinking, mix of legal & technical):
> (i) Modify the Assertion issued by the Payment Provider (Step 4 above)
> to also assert that (a) John Doe has a true/valid credit card, (b)
> this assertion is to be used for Netflix only, and (c) it is valid for
> 1 month. (Call this Assertion-N.)
> (ii) As part of the agreed legal trust framework between Netflix and
> the Payment Provider (PP), Netflix has the right to request that the PP
> notify it if differing pseudonyms are requesting the PP to issue more
> than one (or two) Assertions-N per month (i.e. same credit card, same
> Netflix target, but different pseudonyms).
> In this way, Netflix can use the validation service of the PP anytime
> Netflix receives an Assertion-N. The credit card indirectly binds the
> pseudonym to the real world.

So, firstly, we don't need a bullet-proof solution to this problem: practically, if someone can get multiple free trials but it's sufficiently difficult/awkward to do, then the volume will be low. How much trouble is it worth, after all, to save $7.99? And how many people feel comfortable committing fraud for $7.99 anyway?

Secondly, the above could work, but it seems like a lot of trouble, requiring specific support from multiple parties in order to support one specific business model in an 'anonymous' fashion. Those are real costs, which ultimately get passed on to the user. Isn't it reasonable to simply say that users have to trade some anonymity for the benefit of the free trial?


> /thomas/
Received on Friday, 9 November 2012 20:57:56 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:17:14 UTC