[Bug 26332] Applications should only use EME APIs on secure origins (e.g. HTTPS)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332

--- Comment #47 from Ryan Sleevi <sleevi@google.com> ---
(In reply to Joe Steele from comment #46)
> When I am referring to "rogue" CDMs, I am specifically referring to CDMs
> that could negatively impact user privacy in the ways described by Section 7
> "Privacy Considerations". 

None of these are normative.
Calling them rogue CDMs is thus something that the spec doesn't really support.

> 
> It is not clear to me at least that the CDMs that exist today are behaving
> in a way that is detrimental to the user. Do you have a specific example in
> mind?

A hardware identifier is, in some circles (both UAs and users), viewed as
detrimental to the user, blinded or not.


> > The issue is that any intermediate can, for unprotected traffic, inject
> > script to use that CDM and report to an arbitrary party those results.
> > That's just how the web works.
> 
> I agree with you here. But I believe we have locked down the information
> that a CDM conforming to the privacy guidelines can provide to such a degree
> that the available information for disclosure here is no worse than any web
> application using cookies. I don't believe that this type of disclosure is
> enough reason for this API to be held to a higher standard for conforming
> CDMs.

Respectfully, I disagree, and I'm sure most members of most security teams for
most user agents would agree as well.

Cookies are hardly a shining example of where we got privacy right. It took
years for the "secure" flag to be introduced for cookies. User Agents are
already looking at ways to reduce or prohibit cookies via HTTP.

When analyzing security considerations, it's not sufficient to say something is
"not worse" than some other thing - it's a question of whether we can and
should do better. And the lesson from cookies - reiterated time and time
again - is that yes, we should.

This is not merely academic or ideological, nor is it specifically tied to
non-blinded identifiers. Reports and disclosures of nation-state monitoring and
espionage have included detailed descriptions of how blinded, purely random
identifiers delivered via cookies are being used to target users. Introducing
yet another means for users to be attacked (and the community has agreed, via
BCP 188, that pervasive monitoring is an attack) is simply unnecessary.

> I would consider a CDM that exposed permanent, non-blinded identifiers to be
> a "rogue". However exposing an identifier that is no more privacy damaging
> than a cookie does not seem like a concern to me, although some may
> disagree.

We have enough evidence to strongly establish that this is indeed privacy
damaging.

Additionally, the mitigations of Section 7, non-normative as they are, still
force users to make tradeoffs between security and privacy.

That is, while one can point to the suggestion that a UA may generate a new
blinding factor when a user "clears their cookies", we know that few users do,
and we know there are significant security benefits for the users who DON'T
(reducing the typing of passwords, increasing the use of password managers,
etc.). We also know in practice that content providers are particularly hostile
to users who employ these methods to preserve their privacy - often practically
limiting the number of times a user may clear such identifiers. I (personally)
regret to say that both Google and Netflix are among those that impose such
limits, and I think it'd be hard to call us "rogue" in that respect.

Thus, while they exist as mitigations on paper, we know in practice that
they're insufficient, and users will likely retain these identifiers for years
at a time.

As a reminder, it's an entirely orthogonal issue as to whether two SITES
receive the same identifier. An attacker in a privileged position on the
network (as we have strong evidence of a number of nation-states and hostile
entities being just that) can exploit a single site to obtain a persistent
identifier to track that user as they browse and navigate.

When thinking about how best to protect users' privacy and security, our goal
should not be the minimum possible ("how it is today"), but to look at how we
can maximize both while balancing risks ("how we SHOULD do it"). Cookies have
proven time and time again that we SHOULD require secure transports.


Received on Wednesday, 20 August 2014 23:37:18 UTC