Re: [MEDIA_PIPELINE_TF] Common content security requirements



Sent from my iPhone

On Sep 8, 2011, at 2:14 AM, "Mo McRoberts" <Mo.McRoberts@bbc.co.uk> wrote:

> (Apologies for delayed reply, was travelling for a few days).
> 
> On 3 Sep 2011, at 04:01, Mark Watson wrote:
> 
>> Whether the codec and other choices are provided in HTML markup or in an adaptive streaming manifest is not a major difference, so I am not sure how one could be "crazy" and the other not ? Could you elaborate ?
> 
> Perhaps “crazy” is too strong: however, it seems to run counter to the point of both the <video> element's multiple-source model (there is a video, here are its resources in <n> formats)

That may be true, but it is not at all crazy. Adaptive streaming was never considered when <video> was designed.

> and to the point of adaptive streaming manifests…

Not at all. The point of a manifest is to describe all the available content: bitrates, audio languages, accessibility streams, 3D versions and, yes, different codecs.

For simple applications, a set of disjoint versions of the whole presentation in different containers/codecs works fine, but as things get more complex, with multiple tracks and therefore many possible combinations, splitting into such disjoint combinations becomes arbitrary.
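
To make that concrete, here's a rough sketch of the kind of information a manifest carries, written as a plain JavaScript object purely for illustration (a real DASH MPD is XML, and all the field names below are invented):

  // Illustrative only: a hand-rolled structure standing in for a manifest.
  var manifest = {
    video: [
      { codec: 'avc1.42E01E', bitrates: [500000, 1500000, 3000000] },
      { codec: 'vp8',         bitrates: [500000, 1500000] }
    ],
    audio: [
      { codec: 'mp4a.40.2', lang: 'en' },
      { codec: 'mp4a.40.2', lang: 'fr' },
      { codec: 'mp4a.40.2', lang: 'en', role: 'description' } // accessibility
    ]
  };
  // Flattening this into disjoint whole-presentation versions means one
  // <source> per (video codec x audio language x role) combination.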
 
> except I guess where one codec is particularly more suited for one bitrate versus another. 
> 
> Okay, so there's a use-case, but that has to be balanced against having to shift functionality from one part of the chain to another, I guess. It does seem that a shim for dealing with adaptive streaming over HTTP for <video> has morphed into something which duplicates a good chunk of the functionality of <video> itself, and I'm slightly cautious that the end result is making it harder to reliably implement support for this stuff, which makes it harder — as a web developer — to make use of it.

Yes, there is some functional overlap, and we need to be clear about that and pay attention to the issues you raise.

But HTML is just one place where adaptive streaming players will appear: the task is to integrate this existing functionality into HTML.

If we were starting from scratch with the task of adding adaptive streaming to HTML, then we might work on extending the existing source selection algorithm, but that is not where we are.

> 
>> At least with MPEG DASH, the idea has been that the adaptive streaming part might find application in multiple environments, not just HTML, so then functionality which is common to all such environments (such as codec choices) is handled within the manifest. This simplifies things for the UI (script) layer, which, after all, should not really care about technical details like codecs etc.
> 
> Except that in doing that, there's the potential that it makes it harder to create interfaces which allow for codecs, containers and DRM schemes to be plugged into browsers…
> 
> Surely, though, if taking manifests out of the browser is a desirable use-case (as QuickTime can do right now with HLS, for example, and I'm guessing WMP can do similar), and you've also ensured that you need a lump of code to mediate between the media frameworks and the servers in order to handle protected content, you've just made life difficult for yourself?

How so ? Not sure if you mean a 'lump of JavaScript code' ?

> 
>>> However, I would wonder whether it'd be a damned sight easier to make the applications smarter with respect to resource availability rather than waiting for new APIs to trickle down :)
>> 
>> New APIs are required in any case. Regarding canPlayType, I am not sure that polling every possible combination of container/codec/protection scheme is a good design. Plus there are ambiguities: if canPlayType returns true for "video/mp4; codecs='mvc1'" then do I know the player supports MVC ? or just that it supports the mp4 container ?
> 
> Does it matter? Your media resource set doesn't consist of an infinite combination of codecs, wrappers and containers — you have a finite number of things to serve, surely?

Ok, you got me, our content selection is finite. Customers complain about that ;-)

The point is just that a small number of values on several axes (codec, DRM, etc) can result in a large number of combinations.
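
For example, here's a sketch of what exhaustive canPlayType polling looks like from script (canPlayType is real; the particular lists are invented, and note that protection schemes can't even be expressed in the MIME string today):

  var video = document.createElement('video');
  var containers = ['video/mp4', 'video/webm'];
  var codecs = ['avc1.42E01E', 'mvc1', 'vp8'];

  containers.forEach(function (container) {
    codecs.forEach(function (codec) {
      // Returns '', 'maybe' or 'probably' -- and, per the mvc1 example
      // above, it is not always clear whether a positive answer covers
      // the codec or just the container.
      var answer = video.canPlayType(container + '; codecs="' + codec + '"');
    });
  });
  // 2 containers x 3 codecs = 6 queries already; add a DRM axis and a
  // 3D axis and the cross product grows quickly.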

> 
>> And it is not clear at all that the script layer should have to be bothered with metadata regarding encoding formats, codecs, containers etc.
> 
> No, it shouldn't, in an ideal world — canPlayType exists to facilitate fallback. That *also* applies to protection schemes.
> 
>>>>> It's difficult to say conclusively, because there's a distinct lack of concrete descriptions of scenarios, however it does seem that the only point at which it MIGHT become worthwhile exposing DRM scheme detail to the Javascript API in a more granular fashion is if you wanted the script itself to take an active role in obtaining and relaying authorisation keys… 
>>>> 
>>>> Yes, that's exactly the way forward I'm advocating: move responsibility for authentication and authorization to the application, where it belongs.
>>> 
>>> That's a pretty strong statement, and suggests an expectation of DRM schemes being developed which suffer from leaky abstractions… though that's not a foregone conclusion.
>> 
>> I don't understand why you think my statement suggests this, or even what you mean by "leaky abstractions". Can you explain ?
> 
> The point of <video> is that AV material can be dropped onto a webpage WITHOUT requiring a script to mediate. You've said that the script should have responsibility for this stuff, but I'm really struggling to understand why it should; it represents a departure from the design principles of the <video> and <audio> elements, and so it would seem to me that a proposal to do that needs to have a robustly-detailed set of requirements and justifications.

Ok, my assumption is that there may not be a single standard DRM implemented in all browsers, and that even if there were, there would be value in reducing the scope of that 'black box' functionality.

If the DRM key exchange is handled entirely under the covers from a web page perspective, then the user of the DRM (the service provider) needs to stand up a specialized server for each DRM's proprietary authentication, authorization and key exchange protocol (and somehow ensure all those servers apply the correct business logic). This is surely more arduous than implementing the authentication and authorization in JavaScript and proxying the key exchange.
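
As a rough sketch of the flow I have in mind (everything below is hypothetical: there is no 'needkey' event, 'keyMessage' field or addKey() method in any browser today; they only illustrate the page relaying opaque blobs after doing its own authentication and authorization):

  var video = document.querySelector('video');

  // Hypothetical event: the protection component has produced an opaque
  // key-request blob and needs it delivered to its server-side counterpart.
  video.addEventListener('needkey', function (event) {
    // The page has already authenticated/authorized the user via the
    // service's ordinary mechanisms (cookies, its own login) -- no
    // DRM-specific front-end server required.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/license');      // the service's own web server
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      // Relay the equally opaque response back into the black box.
      video.addKey(new Uint8Array(xhr.response)); // hypothetical method
    };
    xhr.send(event.keyMessage);        // opaque blob; script never parses it
  });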

> 
>> The scenario I outlined at the Berlin workshop was very different from either of the above. In summary, if a platform supports some protection scheme and is asked to play a file which can be played with that scheme then we can assume that the protection scheme component on the platform needs to communicate with a counterpart that will provide the key, using the protection scheme's key exchange protocol.
> 
> Right…
> 
>> My suggestion is that this key exchange should be transported via the application. That does not imply that the script needs to understand the protection scheme's key exchange, or that it knows anything more about the protection scheme than that the service supports it. The script is responsible for service-specific authentication and authorization and if these steps are successful then opaque key exchange messages can be exchanged between the protection scheme's client component and its service-side counterpart though this secure channel. (This does imply availability of suitable JS crypto/identity APIs, which is being discussed in the W3C identity-in-the-browser workshops.)
> 
>> This avoids the need for services to stand up multiple front-end servers for different protection schemes, each supporting their own authn/authz, with duplication of service-specific business logic in each. Architecturally it moves responsibility for well understood non-IPR-laden functions (authn, authz) to the service - where they belong - out of the protection scheme 'black box'.
> 
> Okay, so this seems to boil down to “it's easier to define an interface in a browser which alters the way <video> works so that a script can mediate between an auth server and a protection scheme than it is to define a standard interface between an auth server and a protection scheme”. Why not just define -that- interface?

You think we're going to get PlayReady, Widevine and the rest to entirely replace their proprietary protocols with an open alternative ? That would certainly be better, but is somewhat more ambitious.

I would still be concerned about whether that open protocol would meet the AA requirements of all service providers.

> Why bother involving the browser at all, especially if there's potential for things other than browsers to be sitting in between the media layer and the servers?
> 
>> Transport, containers, authentication, authorization, encryption, codecs should all be protection-scheme independent.
> 
> Auth and encryption should be protection-scheme independent? If you take the auth and crypto out of a protection scheme, what's actually left…?

Not all the crypto - just the encryption (this part is a done deal with DECE and the new ISO Common Encryption standard).

Key exchange and secure/robust implementation are what's left.
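
To illustrate the split, sketched again as an invented JavaScript structure (real Common Encryption signalling lives in ISO BMFF boxes): the samples are encrypted once, and only the scheme-specific key-acquisition data differs.

  // Invented structure: one set of encrypted samples, several key systems.
  var protectedTrack = {
    encryption: 'cenc', // ISO Common Encryption: one encryption of the samples
    keyId: '0123456789abcdef0123456789abcdef',
    // Opaque, scheme-specific initialization data -- the only part that
    // differs per DRM.
    keySystems: {
      'com.example.drm-a': '<opaque blob>',
      'com.example.drm-b': '<opaque blob>'
    }
  };
  // What each scheme still supplies: its key exchange protocol (turning
  // keyId into a key) and a robust implementation that keeps the key safe.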

...Mark
> 
> M.
> 
> 

Received on Thursday, 8 September 2011 16:03:01 UTC