
Re: [MEDIA_PIPELINE_TF] Common content security requirements

From: Mo McRoberts <Mo.McRoberts@bbc.co.uk>
Date: Thu, 8 Sep 2011 10:14:01 +0100
Cc: "Mays, David" <David_Mays@Comcast.com>, "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
Message-Id: <B325E38A-0E7F-41C4-8599-55108FBB4D39@bbc.co.uk>
To: Mark Watson <watsonm@netflix.com>
(Apologies for delayed reply, was travelling for a few days).

On 3 Sep 2011, at 04:01, Mark Watson wrote:

> Whether the codec and other choices are provided in HTML markup or in an adaptive streaming manifest is not a major difference, so I am not sure how one could be "crazy" and the other not? Could you elaborate?

Perhaps “crazy” is too strong; however, it seems to run counter to the point both of the <video> element's multiple-source model (there is a video; here are its resources in <n> formats) and of adaptive streaming manifests… except, I guess, where one codec is particularly well suited to one bitrate versus another.

Okay, so there's a use-case, but that has to be balanced against having to shift functionality from one part of the chain to another, I guess. It does seem that a shim for dealing with adaptive streaming over HTTP for <video> has morphed into something which duplicates a good chunk of the functionality of <video> itself, and I'm slightly wary that the end result makes this stuff harder to implement reliably, which in turn makes it harder, as a web developer, to make use of it.

> At least with MPEG DASH, the idea has been that the adaptive streaming part might find application in multiple environments, not just HTML, so then functionality which is common to all such environments (such as codec choices) is handled within the manifest. This simplifies things for the UI (script) layer, which, after all, should not really care about technical details like codecs etc.

Except that doing so potentially makes it harder to create interfaces which allow codecs, containers and DRM schemes to be plugged into browsers…

Surely, though, if taking manifests out of the browser is a desirable use-case (as QuickTime can do right now with HLS, for example, and I'm guessing WMP can do something similar), and you've also ensured that a lump of code is needed to mediate between the media frameworks and the servers in order to handle protected content, you've just made life difficult for yourself?

>> However, I would wonder whether it'd be a damned sight easier to make the applications smarter with respect to resource availability rather than waiting for new APIs to trickle down :)
> New APIs are required in any case. Regarding canPlayType, I am not sure that polling every possible combination of container/codec/protection scheme is a good design. Plus there are ambiguities: if canPlayType returns true for "video/mp4; codecs='mvc1'" then do I know the player supports MVC? Or just that it supports the mp4 container?

Does it matter? Your media resource set doesn't consist of an infinite combination of codecs, wrappers and containers — you have a finite number of things to serve, surely?

> And it is not clear at all that the script layer should have to be bothered with metadata regarding encoding formats, codecs, containers etc.

No, it shouldn't, in an ideal world — canPlayType exists to facilitate fallback. That *also* applies to protection schemes.
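To illustrate the point about a finite resource set: probing only the formats you actually serve is a handful of calls, not a combinatorial search. A minimal sketch, in which the MIME strings and resource list are purely illustrative and a mock object stands in for a real HTMLVideoElement:

```javascript
// Sketch: choose the first resource the user agent claims it can play.
// Per the HTML spec, canPlayType() returns "probably", "maybe", or "".
function pickPlayableSource(video, sources) {
  for (const src of sources) {
    if (video.canPlayType(src.type) !== "") {
      return src.url;
    }
  }
  return null; // nothing supported: fall back to a download link, say
}

// Illustrative resource set -- finite, because you only serve a few formats.
const sources = [
  { type: 'video/webm; codecs="vp8, vorbis"', url: "clip.webm" },
  { type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"', url: "clip.mp4" }
];

// Mock standing in for an HTMLVideoElement in this sketch.
const mockVideo = {
  canPlayType: (type) => (type.indexOf("video/mp4") === 0 ? "maybe" : "")
};

console.log(pickPlayableSource(mockVideo, sources)); // "clip.mp4"
```

The same loop extends naturally to protection schemes if canPlayType (or a successor) were to answer for them too, which is the fallback role I mean.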

>>>> It's difficult to say conclusively, because there's a distinct lack of concrete descriptions of scenarios, however it does seem that the only point at which it MIGHT become worthwhile exposing DRM scheme detail to the Javascript API in a more granular fashion is if you wanted the script itself to take an active role in obtaining and relaying authorisation keys… 
>>> Yes, that's exactly the way forward I'm advocating: move responsibility for authentication and authorization to the application, where it belongs.
>> That's a pretty strong statement, and suggests an expectation of DRM schemes being developed which suffer from leaky abstractions… though that's not a foregone conclusion.
> I don't understand why you think my statement suggests this, or even what you mean by "leaky abstractions". Can you explain ?

The point of <video> is that AV material can be dropped onto a webpage WITHOUT requiring a script to mediate. You've said that the script should have responsibility for this stuff, but I'm really struggling to understand why it should; it represents a departure from the design principles of the <video> and <audio> elements, and so it would seem to me that a proposal to do that needs a robustly-detailed set of requirements and justifications.

> The scenario I outlined at the Berlin workshop was very different from either of the above. In summary, if a platform supports some protection scheme and is asked to play a file which can be played with that scheme then we can assume that the protection scheme component on the platform needs to communicate with a counterpart that will provide the key, using the protection scheme's key exchange protocol.
> My suggestion is that this key exchange should be transported via the application. That does not imply that the script needs to understand the protection scheme's key exchange, or that it knows anything more about the protection scheme than that the service supports it. The script is responsible for service-specific authentication and authorization and if these steps are successful then opaque key exchange messages can be exchanged between the protection scheme's client component and its service-side counterpart though this secure channel. (This does imply availability of suitable JS crypto/identity APIs, which is being discussed in the W3C identity-in-the-browser workshops.)

> This avoids the need for services to stand up multiple front-end servers for different protection schemes, each supporting their own authn/authz, with duplication of service-specific business logic in each. Architecturally it moves responsibility for well understood non-IPR-laden functions (authn, authz) to the service - where they belong - out of the protection scheme 'black box'.
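As I understand the proposal, the script's role might look something like the sketch below. Everything here is hypothetical (no such browser interface exists at the time of writing; the names are invented for illustration); the point is only that the script relays opaque blobs over a channel it has already authenticated, without interpreting them:

```javascript
// Hypothetical sketch of script-mediated key exchange. The names
// relayKeyExchange and sendToService are invented for illustration;
// no such browser API exists. The script never parses the messages:
// it only supplies the service-authenticated channel they travel over.
async function relayKeyExchange(keyRequest, sendToService) {
  // keyRequest is an opaque blob emitted by the protection-scheme
  // component; forward it to the service-side counterpart.
  const keyResponse = await sendToService(keyRequest);
  // The reply is equally opaque; hand it straight back to the
  // protection component.
  return keyResponse;
}

// Mock "service-side counterpart" for illustration; in reality this
// would be an authenticated HTTP POST to the service's licence endpoint.
const mockService = async (blob) => "service-reply-to:" + blob;

relayKeyExchange("opaque-request", mockService).then(console.log);
// prints "service-reply-to:opaque-request"
```

The authn/authz lives entirely in how sendToService is constructed (session tokens, cookies, whatever the service uses), which is where the claimed deduplication across protection schemes would come from.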

Okay, so this seems to boil down to “it's easier to define an interface in a browser which alters the way <video> works so that a script can mediate between an auth server and a protection scheme than it is to define a standard interface between an auth server and a protection scheme”. Why not just define -that- interface? Why bother involving the browser at all, especially if there's potential for things other than browsers to be sitting in between the media layer and the servers?

> Transport, containers, authentication, authorization, encryption, codecs should all be protection-scheme independent.

Auth and encryption should be protection-scheme independent? If you take the auth and crypto out of a protection scheme, what's actually left…?

Received on Thursday, 8 September 2011 09:14:30 UTC
