[Bug 26332] Applications should only use EME APIs on secure origins (e.g. HTTPS)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332

--- Comment #112 from Ryan Sleevi <sleevi@google.com> ---
(In reply to Mark Watson from comment #111)
> For example, some browsers have made massive strides in recent years on TLS
> reliability (specifically, the frequency with which TLS connection setup
> fails). But this is not universal ... yet.

I don't want to pivot this bug into a discussion of TLS reliability, but I do
want to address, and disabuse, the meme that TLS reliability is somehow a
browser issue. It's not, nor is the behaviour of 'legacy' browsers relevant to
discussions of EME (as those browsers, by definition, won't support EME), nor
is there some 'massive strides on TLS reliability' effort underway by browsers.
Rather, it's servers recognizing that configuring TLS is a manageable,
tractable problem that can readily be addressed by engineering.

> It would be great if you could publish server capacity figures from YouTube
> for HTTP vs HTTPS - I sent you the names of our contacts there who had that
> information.

That's not really relevant, given that this information has been provided in
the past. The experience that TLS scales is not unique to Google - see
http://blog.cloudflare.com/universal-ssl-how-it-scales/ for CloudFlare,
http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0251.html for
Facebook, or https://blog.twitter.com/2013/forward-secrecy-at-twitter for
Twitter - not to mention Google's own experience at
https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

The point is that yes, it's possible to engineer scalable TLS. It's also
possible to engineer inefficient TLS. It's not an intrinsic property that TLS
doesn't scale - quite the opposite, it's just like any other engineering
problem, and one which has known solutions.

This is why I continue to object to the suggestion that making TLS scale
requires "industry efforts". TLS does scale; the information is readily
available and widely deployed; the only remaining issue is acting upon it.


> > Further, I certainly object to the characterization that UAs have an onus to
> > "make sites use it".
> 
> I don't believe I said that. I said the onus on UAs is to 'provide viable
> solutions'. The only alternative to viable solutions in UAs is plugins and
> failing that, native apps.

It's quite clear what you said: UAs have an onus to provide viable solutions
so that sites will use them; if UAs don't provide viable solutions, sites
won't. However, both statements presume that the goal is for sites to use EME.
No - the goal of UAs is to preserve at least a minimum of user privacy and,
where such privacy is insufficient (as is the case with HTTP cookies), to
continue working to improve the status quo in existing specs and to ensure the
same mistakes aren't repeated in new specs.

> Again, as far as I understand it, there is no reason from our side that CDMs
> integrated with desktop UAs should introduce tracking concerns that are
> worse than cookies - at least for the basic level of robustness expected for
> desktop browsers.

That's clearly not true of at least two vendors' solutions, nor is it required
by the spec, nor does the spec currently distinguish robustness requirements by
platform or form factor (e.g. desktop vs mobile). It's also clear that content
providers do not share that view - as you know, several content providers
require that the device ID "not" be trivially copyable between machines for a
CDM solution to be acceptable.

Instead, the spec steers well clear of such robustness requirements, precisely
because they differ from content provider to content provider and, for many
content providers, vary studio by studio, in ways that cannot be shared or
discussed publicly (as has been suggested in the past).

Since you're now introducing a gradient to the discussion - that different
platforms (or, more likely, different _content_, such as SD vs HD) have
different robustness requirements - are you suggesting that the spec should
normatively introduce these differences, and normatively specify how they are
met? For example: for "Desktop", a CDM MUST NOT introduce any more privacy
bits than those afforded by the User-Agent string (i.e. for users on the same
OS and UA, any identifiers will be identical among all users with that OS+UA);
whereas for "High Def" content, if a CDM attests to a unique device identifier
(per-origin or otherwise), it MUST be served over a secure transport. Is that
a solution you consider viable?
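To make the question concrete, the gradient above could be expressed as a
simple policy gate. This is purely an illustrative sketch of the proposal in
the preceding paragraph - the function, its parameters, and the policy it
encodes are hypothetical, not spec text:

```typescript
// Hypothetical policy gate sketching the gradient described above:
// a CDM that adds no identifying bits beyond the User-Agent string may be
// usable anywhere, while a CDM attesting to a unique device identifier
// (per-origin or otherwise) is restricted to secure transports.
function emeAllowed(
  isSecureContext: boolean,
  cdmExposesUniqueDeviceId: boolean
): boolean {
  if (cdmExposesUniqueDeviceId) {
    // "High Def" tier: unique identifiers MUST travel over secure transport.
    return isSecureContext;
  }
  // "Desktop" tier: no bits beyond the UA string, so no added restriction.
  return true;
}
```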

> Well, this is your call if you really think EME over HTTP is worse than
> plugins over HTTP.

Or, as you like to state, it's as bad as plugins over HTTP. And plugins are
horrible for user security and privacy, as has been repeatedly shown over the
past decade, and UAs are actively working to communicate that risk - and those
concerns - to users.

> You're of course free to cut off support for parts of the web in your
> browser, if you consider those parts too dangerous for your users. You could
> have disabled Silverlight last year or the year before, but you didn't. What
> changed ? The security / privacy properties of Silverlight ? No, the
> availability of a viable alternative made it possible. This was a good
> thing, no ?

And isn't that the goal of this discussion - to make sure that EME actually
meets the bare minimum security and privacy requirements of 2014, rather than
barely struggling to meet those of 1999? I'm not sure how to take your
reasoning here, other than "You can't turn off plugins unless you give us
something as privacy-hostile as plugins", which is of course false. If the
alternative is as bad as the problem, then clearly some new solution will be
found by UAs.

> The repeated suggestion that we do not care about user privacy or security
> is, frankly, quite tiresome. This whole effort, over the last four years on
> my part, has been about migrating from the wild west of plugins to a model
> where this functionality is provided by User Agent implementors and so,
> amongst other important things, privacy and security are in the User Agent
> implementors' hands. And in practice this has already been achieved for
> desktop IE, Safari, Chrome and in due course I expect for Firefox, all over
> HTTP and with the User Agent implementors fully aware of the privacy
> properties. It's hugely disappointing to see this jeopardised just as it's
> coming to fruition. 

It's clear that you don't care about, or value, these issues to the same
degree we do; otherwise this discussion would be moot. Nor is anything being
jeopardized - EME continues to progress, and the spec's deficiencies with
regard to privacy and security are slowly being addressed, albeit in ways that
some are not happy with.

> An open standardization process is not only about documenting interoperable
> behavior - a private group of UA implementors could do that on their own
> with much less overhead. It's about committing to take seriously the
> concerns of multiple stakeholders, to keep on working until there is
> consensus, and in deference to the value that brings a willingness to accept
> consensus-based outcomes.

I suspect you're far more optimistic about the W3C process than its actual
workings warrant. A spec that fails to take into account the concerns of UAs,
regardless of how much consensus it has among non-UAs, is a spec that isn't
implemented. This has been the case time and time again across a variety of
SDOs, but can be seen most plainly with XHTML and HTML5.

I definitely balk at the suggestion that UAs can't or shouldn't protect user
privacy simply because it's expensive for certain entrenched players. UAs can,
must, and will take the privacy concerns as paramount (as they should). I think
we can and should continue to explore solutions for addressing the concerns
being discussed, and hopefully the W3C will provide a venue for site operators
such as yourself and UA vendors such as us to express the concerns and
understand the solutions.

But let's not mistakenly presume that the spec has primacy. Again, as you've
seen from multiple vendors (Mozilla, Apple, and Microsoft included), if a spec
fails to meaningfully address the security concerns, UAs will take appropriate
steps. Sometimes that means not implementing a spec at all, sometimes it means
disabling certain features (as happened with third-party cookies), and
sometimes it means imposing requirements above and beyond what the spec
requires, when the spec itself fails to take user security into consideration.

Since we know there is interest among UAs to implement, and interest among
sites to use it, let's try to find workable, normative requirements for EME
that meaningfully address the risks that have been identified. If that means
normatively specifying robustness requirements, let's have that discussion.
But if workable solutions aren't being put forth - and, from this bug, they
really aren't - then we're going to be "stuck" with at least a bare minimum of
requiring a secure top-level document origin.
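For reference, that bare-minimum check is cheap for a site to mirror before
attempting EME. The following is a minimal sketch, not spec text; the function
name is mine, and treating localhost as trustworthy follows the usual
secure-context carve-outs:

```typescript
// Minimal sketch of a "secure top-level document origin" test, applied to the
// URL of the top-level document (not a nested frame). Uses the WHATWG URL
// parser; the localhost exemption mirrors common secure-context rules.
function topLevelOriginIsSecure(topLevelUrl: string): boolean {
  const u = new URL(topLevelUrl);
  return u.protocol === "https:" || u.hostname === "localhost";
}
```

In a browser, a page would simply consult `window.isSecureContext` rather than
re-deriving this itself; the point is only that the requirement at issue is a
one-line predicate, not an engineering burden.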

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Received on Monday, 27 October 2014 06:48:44 UTC