[Bug 26332] Applications should only use EME APIs on secure origins (e.g. HTTPS)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332

--- Comment #110 from Ryan Sleevi <sleevi@google.com> ---
(In reply to Mark Watson from comment #109)
> There are mitigations to all the concerns described in the document (and if
> not there, in this and other threads, I believe). We can and should discuss
> the normative strength of those, as I have repeatedly said. We are at the
> very beginning of this whole discussion, not the end.

I don't think anyone is suggesting we're near the end. However, as we continue
to make progress, it's important that we set reasonable expectations. It's
clear - from this bug and from the related threads - that these concerns are
not yet sufficiently addressed normatively. The secure origin proposal sets
forth a baseline to make sure that, as we make progress on this issue and
related ones, we have a reasonable path to security. Ignoring these concerns
in the spec, by objecting to any sort of requirement, especially as
implementations progress, is a disservice both to the privacy of users and to
the interoperability concerns of UAs.
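
(To make the proposed baseline concrete: the gating under discussion amounts
to something like the sketch below. It assumes a boolean secure-context
signal along the lines of window.isSecureContext - the exact test is
precisely what the normative text would need to pin down - and the key
system name in the usage note is a placeholder.)

    // Sketch only: gate EME behind a secure-context check so that an
    // insecure (plain HTTP) origin never reaches the CDM.
    // "window.isSecureContext" is assumed as the signal here; the spec
    // would define the actual test.
    async function requestMediaKeysSecurely(
        keySystem: string,
        configs: MediaKeySystemConfiguration[]): Promise<MediaKeys> {
      if (!window.isSecureContext) {
        // Refuse, rather than expose a persistent, cryptographically
        // bound identifier over a channel that can be observed and
        // tampered with in transit.
        throw new DOMException('EME requires a secure origin (e.g. HTTPS)',
                               'SecurityError');
      }
      const access = await navigator.requestMediaKeySystemAccess(
          keySystem, configs);
      return access.createMediaKeys();
    }

A page served over plain HTTP that calls, say,
requestMediaKeysSecurely('com.example.somedrm', configs) would get a
SecurityError instead of a CDM handle; the same call over HTTPS would proceed
normally.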

> But in the end, the onus is on browser vendors to provide a viable solution
> because if the solution you provide is not viable - financially, say - sites
> will not use it. HTTPS is not there yet, as I've explained. We need
> alternatives - which for this problem clearly exist - and / or a reasonable
> industry plan to make HTTPS sufficiently reliable and efficient at scale.
> Sticking your head in the sand and expecting standards fiat to achieve that
> is not productive.

I think it's a gross mischaracterization to say that HTTPS is not sufficiently
reliable and efficient at scale. As discussed in this bug and related threads,
it's clear that much of the industry disagrees (e.g. CloudFlare and others
provisioning free TLS for their customers, or YouTube's ability to serve video
via HTTPS).

What's clear - and certainly understandable - is that some site operators have
made a set of decisions that makes HTTPS less than desirable for them. That's
unfortunate, but also understandable in a market where content, rather than
security or scalability, is the differentiator. But that's not an intrinsic
or necessary property of TLS, as the counter-examples clearly demonstrate, nor
does it require "a reasonable industry plan" to make TLS reliable or scalable,
when it's clearly and demonstrably already both.

Further, I certainly object to the characterization that UAs have an onus to
"make sites use it". There are plenty of technologies that site operators have
had to invest in changes to reasonably support, whether they be new protocols
like SPDY or HTTP/2, security features such as Content Security Policy and
HSTS, or the ongoing changes in security threats, such as the deprecation of
SSL 3.0 or SHA-1. The onus of the UA is not to get sites to adopt the latest
and greatest features - it's to ensure that users' privacy and security
expectations are preserved, both from 'new' threats and from new web platform
features.

The prevalence of persistent and active attacks over HTTP by both ISPs
(http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
, http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/ ) and
by governments ( https://firstlook.org/theintercept/2014/08/15/cat-video-hack/
, http://blog.jgc.org/2011/01/code-injected-to-steal-passwords-in.html ) makes
it clear to UAs that introducing new tracking mechanisms over HTTP,
particularly one with a strong cryptographic binding, represents a real risk
to user privacy. In such a world, the risk posed by EME over HTTP is far
greater than the risk of a site opting not to use EME at all, and any
perceived value of EME is eliminated entirely by the privacy damage of
allowing it over HTTP. While there may be large sites that, in the face of an
EME+TLS requirement, will opt not to use EME at all, I think they'll find that
legacy methods - such as plugins - will also be required in the future to use
TLS or to obtain extensive user consent. In the priority of constituencies,
user security must and will ALWAYS trump site operators' unfounded concerns.

I also think it's a mischaracterization to suggest that UAs are slaves to the
spec, or that the spec somehow trumps the security concerns. The spec exists to
provide interoperability between vendors, which is important, but I think you
will find that when faced with a choice of interoperability versus
security/privacy, UAs will consistently choose security/privacy. We see this
time (
http://www.theverge.com/2013/2/23/4023078/firefox-to-start-blocking-cookies-from-third-party-advertisers
) and time again ( http://www.chromium.org/developers/npapi-deprecation ). So
if the spec fails to address the security concerns - such as by failing to set
the necessary normative requirements to ensure reasonable security/privacy -
then I think we'll see UAs going above and beyond what the spec requires in
order to meet those concerns, rightfully placing the privacy of users over the
desire of some sites to use some new feature. That's the entire point of this
bug: if the spec fails to address these concerns, UAs will address them in
ways that are potentially non-interoperable, because UAs MUST protect their
users.
