
Re: Open Source implementations Re: Encrypted Media proposal (was RE: ISSUE-179: av_param - Chairs Solicit Alternate Proposals or Counter-Proposals)

From: Charles Pritchard <chuck@jumis.com>
Date: Tue, 28 Feb 2012 11:10:44 -0800
Message-ID: <4F4D26B4.5010600@jumis.com>
To: "Tab Atkins Jr." <jackalmage@gmail.com>
CC: Glenn Adams <glenn@skynav.com>, Mark Watson <watsonm@netflix.com>, Kornel Lesiński <kornel@geekhood.net>, "<public-html@w3.org>" <public-html@w3.org>
On 2/28/2012 10:48 AM, Tab Atkins Jr. wrote:
> On Tue, Feb 28, 2012 at 10:45 AM, Glenn Adams<glenn@skynav.com>  wrote:
>> On Tue, Feb 28, 2012 at 11:35 AM, Tab Atkins Jr.<jackalmage@gmail.com>
>> wrote:
>>> On Tue, Feb 28, 2012 at 9:07 AM, Glenn Adams<glenn@skynav.com>  wrote:
>>>> 2012/2/28 Tab Atkins Jr.<jackalmage@gmail.com>
>>>>> In your other case (server is untrusted), DRM is unnecessary baggage;
>>>>> you only need JS encryption/decryption that can be inserted between
>>>>> the server and a<video>  element of the user.  This can be specified
>>>>> and implemented without many of the concerns that people have been
>>>>> raising about this proposal.
>>>> A solution that requires decryption of the actual media content in JS
>>>> would
>>>> be unacceptable from a performance perspective, particularly on resource
>>>> constrained devices. The solution must be readily implemented with
>>>> reasonable performance on devices at different ends of the spectrum,
>>>> including TV/STBs (resource constrained).
>>> Emscripten has apparently broken the order-of-magnitude barrier to
>>> C-like speeds, so performance is likely much less of an issue than you
>>> believe.
>>>
>>> However, "decryption in JS" doesn't necessarily mean "written in JS
>>> code", any more than "random numbers in JS" does.  Handing authors an
>>> encryption/decryption module written in C++ with a JS API satisfies
>>> the use-case, and still avoids the difficulties people have with DRM.
>> granted; perhaps the more important issue is whether content providers that
>> wish to use this encryption proposal will accept the possible exposure of
>> plaintext form of content to JS; it is one thing to expose it in the UA
>> implementation, but another entirely to expose it to client side JS
> I was specifically addressing the "user is trusted" case.

I've been addressing obfuscation, given the licensing arrangements the 
recording industry has granted for both audio feeds and lyrics, and 
theorizing about how that would look with video.
It does fall under the "user is trusted" case, but with the quirk that 
the average user is unable to manipulate the stream.

It's "the average user is trusted". This meets the legal requirements of 
new arrangements, though I suspect existing providers are stuck with 
whatever scheme they've agreed on, regardless of merits, until they 
renegotiate.


>> regarding possible citations about performance on constrained devices, I am
>> basing my input on a number of years of representing a consumer electronics
>> manufacturer (Samsung) that builds both TVs and STBs; i do not have public
>> documents to cite that verifies my claim, so you'll just have to accept my
>> input (or not)... sorry
> Unfortunately, years-old performance information is effectively
> useless at this point, given both Moore's Law and the amazing progress
> of projects like Emscripten.

The issue has been with JS implementations, not hardware. Typed Arrays 
make a lot of optimizations easier to perform. I can still boot a 
years-old computer running IE6 and ActiveX and get great performance 
from compiled plugins. I suppose I can run Chrome Frame as well, and 
should Google continue its support, I'll see my old machines getting 
faster and faster.
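As a quick illustration of why Typed Arrays matter here, consider the inner loop of any stream cipher: XOR-ing a keystream into a buffer. This is a sketch of my own (names and values are illustrative, not from any spec); with Uint8Array the engine knows the element type and has a flat backing store, so the loop can be JIT-compiled to tight native code:

```javascript
// Sketch: XOR a keystream into a ciphertext buffer in place.
// With Uint8Array, the engine knows every element is an unsigned
// byte, so this loop avoids boxing and type checks.
function xorInPlace(buf, keystream) {
  // buf and keystream are Uint8Array views of the same length
  for (var i = 0; i < buf.length; i++) {
    buf[i] ^= keystream[i];
  }
  return buf;
}

// Toy example: "decrypting" 4 bytes with a made-up keystream.
var ct = new Uint8Array([0x01, 0x23, 0x45, 0x67]);
var ks = new Uint8Array([0xff, 0xff, 0x00, 0x0f]);
xorInPlace(ct, ks); // ct is now [0xfe, 0xdc, 0x45, 0x68]
```

The same loop over a plain Array forces the engine to handle arbitrary values at every element access, which is exactly the overhead Typed Arrays remove.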

Emscripten is a great project, but it's not the one giving us 
performance. It gives us progress, because we now have two web 
implementations of LLVM-compiled code, one via PNaCl and one via 
Emscripten; it gives us progress in testing, because JS implementers 
can run Emscripten output on various products and compare their 
optimizations to those performed by LLVM and other compilers.

I don't think it's fair to either side to stand off like this on the 
performance issue. I asked for a citation, Glenn was unable to provide 
one, and that's OK. It doesn't mean his experience is outdated, nor does 
it mean that we just need to "wait" for hardware to get better.

I'll release some benchmarks for ChaCha next month, so we can have a 
baseline to talk about.

It's just a baseline. If a consumer electronics manufacturer wants to 
spend $8k improving the open source JS engine they put into their 
equipment, it's entirely possible for them to improve on the baseline.
Example of a simple improvement to ChaCha based on the target chipset: 
http://eden.dei.uc.pt/~sneves/chacha/chacha.html
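For context on what such a benchmark actually measures: the core of ChaCha is its quarter-round, which maps directly onto Uint32Array operations in JS. This is my own illustrative sketch of just that round function (the full cipher also needs state setup and 20 rounds, omitted here); the input/output words are a standard quarter-round test vector:

```javascript
// 32-bit left rotate; >>> 0 keeps the result unsigned.
function rotl(v, n) {
  return ((v << n) | (v >>> (32 - n))) >>> 0;
}

// One ChaCha quarter-round over words a, b, c, d of a Uint32Array
// state. Stores into a Uint32Array wrap to 32 bits automatically.
function quarterRound(x, a, b, c, d) {
  x[a] = (x[a] + x[b]) >>> 0; x[d] = rotl(x[d] ^ x[a], 16);
  x[c] = (x[c] + x[d]) >>> 0; x[b] = rotl(x[b] ^ x[c], 12);
  x[a] = (x[a] + x[b]) >>> 0; x[d] = rotl(x[d] ^ x[a], 8);
  x[c] = (x[c] + x[d]) >>> 0; x[b] = rotl(x[b] ^ x[c], 7);
}

// Known test vector for a single quarter-round.
var state = new Uint32Array([0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567]);
quarterRound(state, 0, 1, 2, 3);
// state is now [0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb]
```

Everything here is adds, XORs, and rotates on 32-bit words, which is why the chip-specific tweaks on the page above can make such a difference, and why a Typed-Array JS version is a plausible baseline at all.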


-Charles
Received on Tuesday, 28 February 2012 19:11:08 UTC
