[Bug 26332] Applications should only use EME APIs on secure origins (e.g. HTTPS)

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332

--- Comment #64 from Joe Steele <steele@adobe.com> ---
(In reply to Ryan Sleevi from comment #61)
> (In reply to Joe Steele from comment #60)
> > The key requests made by some DRMs fall exactly into this category of "very
> > short connections". One packet out, one packet in. The overhead of
> > negotiating an SSL channel (which may ultimately add nothing to the
> > security) can be almost 100%. Even if we wait for TLS 1.3 as suggested. 
> 
> Luckily, this is not a specification that deals with legacy DRM systems that
> are implemented inefficiently. Your CDM does not have a network connection
> (normatively), it defers to the UA to mediate all key exchanges.
> 
> It's amazing how exceedingly efficient UAs are. TLS session resumption.
> Connection pools. HTTP keep-alive. Novel technologies like XMLHttpRequest or
> WebSockets. All of these exist, and despite your "key exchange" or "drm"
> protocol being "one packet in, one packet out", it's nearly virtually
> impossible to actually find yourself establishing a new connection every
> time, or dealing with that overhead.

None of this efficiency makes any difference in this case. The CDM is
constructing the request - which in our case is a single packet. The
application can use any mechanism it likes to send it, but HTTP is good enough
in our case and quite efficient. TLS would be overkill and would add nothing
to the security.
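
To make concrete what I mean by "any mechanism": under EME the CDM hands the
application an opaque message, and the application relays it however it
likes. A minimal sketch of that flow follows, assuming a hypothetical key
system name and license URL (both placeholders, not from the spec):

    // Hedged sketch of the standard EME message flow: the CDM constructs an
    // opaque license request, and the application relays it over whatever
    // transport it chooses - a plain HTTP POST here.
    async function setUpLicensing(video: HTMLVideoElement, initData: BufferSource) {
      const access = await navigator.requestMediaKeySystemAccess("com.example.drm", [{
        initDataTypes: ["cenc"],
        videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
      }]);
      const mediaKeys = await access.createMediaKeys();
      await video.setMediaKeys(mediaKeys);
      const session = mediaKeys.createSession();
      session.addEventListener("message", async (event) => {
        // Forward the CDM's request and hand the server's reply back to it.
        const response = await fetch("http://license.example.com/", {
          method: "POST",
          body: event.message,
        });
        await session.update(await response.arrayBuffer());
      });
      await session.generateRequest("cenc", initData); // triggers the "message" event
    }

The fetch() call is the only line that touches the network; that transport is
exactly the part this bug proposes to restrict to secure origins.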

> 
> Equally, I think you'll be hard pressed to find a single EME/CDM
> implementation that's sending as many packets of video stream data as
> they're receiving, so you surely cannot mean that.

I do not mean that. I am referring to the latency that will result when the
CDN delivering the media stream has to re-encrypt each media segment for
delivery.

> 
> So, especially as demonstrated by browsers (and the updated version includes
> even more real world data from a variety of high-capacity sites), your
> overhead is virtually nil.

No. In my case in particular, the overhead is large if TLS is used. It is
smaller with the better algorithms described in that document, but still large
relative to plain HTTP. For a one-packet exchange, a full TLS 1.2 handshake
adds two round trips on top of the two (TCP setup plus the request/response
itself) that plain HTTP needs, which is where the "almost 100%" figure above
comes from.

> 
> Also, I think the concerns about latency are a bit misinformed. Latency
> matters every bit as much for websites serving content as it does for video
> providers. Milliseconds of latency are measured in impacts of millions of
> dollars. Seconds of latency are measured in billions. If the latency impact
> from SSL has not been shown to cripple online commerce, I think you can rest
> assured it won't compromise streaming either.

I will defer to folks who actually implement streaming of live video on that
point. But I can say it has been raised as a serious concern by our customers
in the past. We spend a significant amount of effort trying to reduce latency
for our customers rather than increase it. 

(In reply to Ryan Sleevi from comment #62)
> (In reply to Joe Steele from comment #56)
> > I don't think we are arguing that TLS is not viable (at least I am not). I
> > am arguing that HTTP with message-based encryption is equally viable and has
> > certain advantages. We should allow implementations to leverage those
> > advantages when they want to.
> 
> Frankly, this isn't the case for any of the DRM protocols that I've seen. Nor
> do the affordances of message-based encryption protocols, such as Netflix's
> description of their desire for WebCrypto over HTTP, meet the security
> standard expected by UAs (and our constituencies!) for user privacy and
> confidentiality.

But you have not seen them all. And yet you are proposing to restrict all of
them based on the subset you have seen. 
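
For what it's worth, the pattern I mean by "message-based encryption" is
neither exotic nor necessarily home-grown. A minimal sketch using Web Crypto
over plain HTTP, assuming the client and server already share an AES-GCM key
(key provisioning is out of scope here, and the URL is a placeholder):

    // Hedged sketch: the request body itself is authenticated-encrypted, so
    // the transport can stay plain HTTP. A fresh random nonce is prepended
    // to each message so the server can decrypt it.
    async function sendProtectedMessage(
        url: string, key: CryptoKey, payload: Uint8Array): Promise<Uint8Array> {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit GCM nonce
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, payload);
      const body = new Uint8Array(iv.length + ciphertext.byteLength);
      body.set(iv, 0);
      body.set(new Uint8Array(ciphertext), iv.length);
      const response = await fetch(url, { method: "POST", body });
      return new Uint8Array(await response.arrayBuffer());
    }

This protects the message rather than the channel; whether that is sufficient
is of course the question this bug is debating.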

> 
> Nor do I think we can argue that a robustly analyzed and audited protocol is
> somehow less desirable than individual vendors' home-grown protocols, for
> which it is a design goal of the product to make it difficult to analyze or
> reason about, and which, short of the UAs individually implementing the
> protocol from scratch and auditing it, cannot have any assurances afforded
> even to the UA.

Your assumption seems to be that all DRM protocols are home-grown and not
based on robust, well-analyzed protocols. You have not offered any proof of
this other than your own experience. There is no reason the protocol itself
has to be difficult to analyze or reason about; it may simply not be public
which protocol is being used. This argument seems to be getting back to
requiring CDMs to be fully documented. Maybe this conversation should move to
that bug (Bug 20944).

> > There is a good writeup on a weakness specific to SSL/TLS here --
> > http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity. 
> > Perhaps ironically, the tightly controlled message-based encryption used by
> > many DRMs is not subject to these issues and is thus more secure than SSL,
> > in this sense at least.
> 
> I suspect any refutation of this will veer so far off topic that we'll end
> up in the weeds. To the extent that I cannot let misinformation stand, I
> would say that the conclusion you reach is not at all supported by the
> article. Among the many reasons, consider the simplest response: the public
> can audit the behaviour of CAs, and CAs' business interests are aligned with
> promoting security (as the alternative is obsolescence). The public CANNOT
> audit CDMs (as it has been repeatedly established here that this will be the
> outcome, even if the spec allows for hypothetically audited CDMs), and the
> business interests of CDMs are inherently geared
> towards creating a model of "too big to fail" (i.e. that they're an
> inextricable part of certain large media streaming sites, and as such, no UA
> can effectively disable or reject the CDM, for fear of breaking the
> experience for the users).
> 
> The rest we can save for a separate discussion in another forum, if it
> should somehow become necessary to show how a singular monolithic and
> opaque entity is worse than a diverse and robust competitive space with
> public audits and transparency.

Nice. You try to refute the argument and then say "let's take this elsewhere",
implying I would be churlish to respond. Well played, sir.

I am sure that when you read the article you realized the implication is that
the public CANNOT audit the behavior of CAs to any reasonable degree. And what
is worse, even when those CAs have been proven to be bad actors, we cannot
always move away from them because they are indeed "too big to fail".

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Received on Thursday, 21 August 2014 23:25:04 UTC