[Bug 26332] Applications should only use EME APIs on secure origins (e.g. HTTPS)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332

--- Comment #111 from Mark Watson <watsonm@netflix.com> ---
(In reply to Ryan Sleevi from comment #110)
> (In reply to Mark Watson from comment #109)
>  
> > But in the end, the onus is on browser vendors to provide a viable solution
> > because if the solution you provide is not viable - financially, say - sites
> > will not use it. HTTPS is not there yet, as I've explained. We need
> > alternatives - which for this problem clearly exist - and / or a reasonable
> > industry plan to make HTTPS sufficiently reliable and efficient at scale.
> > Sticking your head in the sand and expecting standards fiat to achieve that
> > is not productive.
> 
> I think it's a gross mischaracterization to say that HTTPS is not
> sufficiently reliable and efficient at scale. 

Well, this is just what our real-world, at-scale data suggests. You can choose
to change your opinion in the face of factual data, or not. Up to you. Either
way, these are problems which can be solved, but not ignored.

For example, some browsers have made massive strides in recent years on TLS
reliability (specifically, the frequency with which TLS connection setup
fails). But this is not universal ... yet.

It would be great if you could publish server capacity figures from YouTube for
HTTP vs HTTPS - I sent you the names of our contacts there who had that
information.

> 
> Further, I certainly object to the characterization that UAs have an onus to
> "make sites use it".

I don't believe I said that. I said the onus on UAs is to 'provide viable
solutions'. The only alternatives to viable solutions in UAs are plugins and,
failing that, native apps.

> 
> The prevalence of persistent and active attacks over HTTP by both ISPs
> (http://arstechnica.com/security/2014/10/verizon-wireless-injects-
> identifiers-link-its-users-to-web-requests/ ,
> http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/ ) and
> by governments (
> https://firstlook.org/theintercept/2014/08/15/cat-video-hack/ ,
> http://blog.jgc.org/2011/01/code-injected-to-steal-passwords-in.html ) makes
> it clear to UAs that introducing new tracking mechanisms over HTTP,
> particularly one that has a strong cryptographic binding, represents a real
> risk to user privacy.

Again, as far as I understand it, there is no reason from our side that CDMs
integrated with desktop UAs should introduce tracking concerns that are worse
than cookies - at least for the basic level of robustness expected for desktop
browsers. I don't believe there is a requirement for 'strong cryptographic
binding'. With your permission, I could provide more information as to what I
know about your solution in this respect.

> In such a world, the risk posed by EME over HTTP is
> far greater than the risk of a site opting to not use EME at all, and any
> perceived value of EME is eliminated entirely due to the privacy damage
> caused by allowing it over HTTP.

Well, this is your call, if you really think EME over HTTP is worse than
plugins over HTTP.

You could also remove support for plugins without making EME available as an
alternative.
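
The secure-origin gate under discussion can also be checked by a site up
front, before it falls back to plugins or native apps. A minimal sketch (not
from this thread; the helper name and the `env` shape are hypothetical, while
`window.isSecureContext` and `navigator.requestMediaKeySystemAccess` are the
standard browser globals a real page would consult):

```javascript
// Decide whether a page should even attempt EME playback, given what the
// UA exposes. Written as a pure function so the policy is testable outside
// a browser; a real page would build `env` from browser globals.
function shouldAttemptEME(env) {
  if (!env.hasEME) {
    // The UA has no EME at all (or has withheld it from this origin).
    return { attempt: false, reason: "no EME API exposed" };
  }
  if (!env.isSecureContext) {
    // A UA enforcing the secure-origin requirement debated in this bug
    // would refuse key-system access on plain HTTP.
    return { attempt: false, reason: "insecure origin" };
  }
  return { attempt: true, reason: "ok" };
}

// In a real page, the environment would be derived like this:
//   shouldAttemptEME({
//     hasEME: "requestMediaKeySystemAccess" in navigator,
//     isSecureContext: window.isSecureContext,
//   });
```

A site using a check like this can route insecure-origin visitors to a
plugin or native-app path instead of failing mid-playback.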

You're of course free to cut off support for parts of the web in your browser,
if you consider those parts too dangerous for your users. You could have
disabled Silverlight last year or the year before, but you didn't. What
changed? The security/privacy properties of Silverlight? No, the availability
of a viable alternative made it possible. This was a good thing, no?

> While there may be large sites who, in the
> face of an EME+TLS requirement, will opt not to use EME at all, I think
> they'll find that legacy methods - such as plugins - will also be required
> to use TLS or extensive user consent in the future. In the priority of
> constituencies, user security must and will ALWAYS trump site operators
> unfounded concerns.

The repeated suggestion that we do not care about user privacy or security is,
frankly, quite tiresome. This whole effort, over the last four years on my
part, has been about migrating from the wild west of plugins to a model where
this functionality is provided by User Agent implementors and so, amongst other
important things, privacy and security are in the User Agent implementors'
hands. And in practice this has already been achieved for desktop IE, Safari,
and Chrome, and I expect in due course for Firefox, all over HTTP and with the
User Agent implementors fully aware of the privacy properties. It's hugely
disappointing to see this jeopardised just as it's coming to fruition.

> 
> I also think it's a mischaracterization to suggest that UAs are slaves to
> the spec, or that the spec somehow trumps the security concerns. The spec
> exists to provide interoperability between vendors, which is important, but
> I think you will find that when faced with a choice of interoperability
> versus security/privacy, UAs will consistently choose security/privacy. We
> see this time (
> http://www.theverge.com/2013/2/23/4023078/firefox-to-start-blocking-cookies-
> from-third-party-advertisers ) and time again (
> http://www.chromium.org/developers/npapi-deprecation ). So if the spec fails
> to address the security concerns - such as by failing to set the necessary
> normative requirements to ensure reasonable security/privacy - then I think
> we'll just see UAs going above and beyond what the spec requires in order to
> meet those concerns, rightfully placing the privacy of users over the desire
> of some sites to use some new feature. That's the entire point of this bug -
> if the spec fails to address these concerns, then we'll just see UAs doing
> it in ways that are potentially non-interoperable, because UAs MUST protect
> their users.

Sure, and this is why we have a consensus process, which guarantees that the
spec cannot ship if you really oppose it (the definition of consensus, by the
way, is the lack of sustained opposition, so if you have a valid point you
need not fear that your voice will go unheard). I'm all in with that model. To
me it means that we commit to adapting our service to be based on the spec,
whatever it eventually says. Or, put another way, I won't agree to a spec we
wouldn't be able to adapt to; I'll keep working with the rest of the group
until we get to a solution which satisfies all the concerns. You all know
that, so you know it's worth investing in the process.

I expect others to approach this the same way. Recent events make me wonder
whether you, Google, are signed up to the same thing as I am.

An open standardization process is not only about documenting interoperable
behavior - a private group of UA implementors could do that on their own with
much less overhead. It's about committing to take seriously the concerns of
multiple stakeholders, to keep on working until there is consensus, and, in
deference to the value that brings, to accept consensus-based outcomes.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Received on Monday, 27 October 2014 05:52:06 UTC