RE: [Bug 20944] New: EME should do more to encourage/ensure CDM-level interop

FWIW, I actually agree with a few of Glenn's points regarding Robert's proposals.

There does not appear to be anything that a 'standard' can do to compel a CDM
author to publicly reveal the operation of a CDM or even the interface.

The proposed solution 'to enable independent implementation in user-agents'
does not appear to be generally practical, because it assumes that
'protection' can be maintained in this context. For example, a CDM
implemented in a web browser would give the browser access to the decoded
stream, allowing the content to be saved; alternatively, an open source
stack could add an OS-level 'stream access' feature to save content.

It would appear to me that the only practical CDMs for limiting access by the
user to the decoded stream are:

A. A proprietary web browser on a proprietary stack (TV etc.).  The CDM may
be implemented within the web browser, at the OS level, or in other
hardware, and the system could restrict the user's ability to
conveniently save the decoded content.

B. A proprietary stack that provides a protected context for the CDM
that an open source web browser can defer to, and that does not permit
the open source web browser to access the decoded stream for
implementing a convenient 'save as' option.

C. Perhaps a trusted computing stack with proprietary keys that allows
the content author to verify that the decoding 'stack' meets their terms
and to discriminate on this basis.  Could this be possible?

None of these would be compatible with Robert's proposals.

Glenn, perhaps you could provide some details of the architecture of the
'standard' you would like to see emerge, and explain how it could support
content protection on open source stacks?

Mark has also stated that Netflix has a 'solution' for Firefox on an open
source stack. It would help in understanding your position if that
architecture could be explained.

Also, could you please clarify the level of protection these solutions provide?
This is critical, because seeking to have standard CDMs emerge that only
secure the content from third-party threats is very different from seeking
standards that restrict the user's ability to save the content.

cheers
Fred


From: glenn@skynav.com
Date: Fri, 1 Mar 2013 08:39:50 -0700
To: robert@ocallahan.org
CC: public-html-media@w3.org
Subject: Re: [Bug 20944] New: EME should do more to encourage/ensure CDM-level interop



On Wed, Feb 27, 2013 at 3:13 PM, Robert O'Callahan <robert@ocallahan.org> wrote:


On Mon, Feb 18, 2013 at 3:02 PM, Glenn Adams <glenn@skynav.com> wrote:



I consider the following to be (some, but not all) "extension points" implicitly defined by HTML5:

- interface between UA and image codecs
- interface between UA and video codecs
- interface between UA and canvas context implementation
- interface between UA and url protocol implementation
- ...



What I am claiming is that the CDM concept in EME is no different in principle from any of the above extension points. In particular, the same issues you cite for EME also hold:


content interoperability when different UAs implement different sets of codecs, canvas contexts, protocols, etc

extension implementation interoperability when different UAs implement different interfaces for integrating these extensions
I agree with those particulars.




Clearly this is not a black or white issue. If it were, then we wouldn't see UA specific features of any kind, we wouldn't see UA specific prefixed implementations of pre-standardized Editorial Drafts (or pre-ED proposals).


We wouldn't see new experimental protocols or content formats. We wouldn't see demands for data-* attributes, for user defined namespace support, for custom CSS values, or a large variety of other never- or pre-standardized features.




You're mixing up three very different kinds of things here:

-- Experimental implementations of features before standardization. We need these to inform the standards process. An experimental feature on a path to becoming a standard --- or that fails to become a standard and is removed --- is fine. (There are situations where experimental implementations can cause problems, and we're evolving our strategies for dealing with them, but that's an entirely different topic; anyway I think we agree CDMs are not in this category.)




-- data attributes, user-defined namespaces and other extension points used only by Web authors. These are fine and no threat to interop as long as user-agents do not try to interpret these extensions in any way. Indeed the HTML5 spec says of data attributes, "User agents must not derive any implementation behavior from these attributes or values. Specifications intended for user agents must not define these attributes to have any meaningful values."

-- Vendor-specific UA features which are not on a path to standardization. These are the problem, and CDMs are in this category.






Since all of these extension points are not only present, but officially sanctioned via standardized extension mechanisms, the problem is not whether there are non-standard extensions, but how to manage the use of extensions, and how to [eventually] promote widely adopted extensions to standards.



The problem --- which is unique to EME --- is that promoting common CDMs to proper standardization is off the table. No EME proponent has proposed such a thing. Unlike codecs or image formats, there are no off-the-shelf standard CDMs that we can expect to be used. Even my proposals, which fall well short of full standardization, are resisted.



(1) you are putting the cart before the horse; we won't get "off-the-shelf standard CDMs" until EME is published and implementation activity occurs; normally in the W3C we draft a spec, go through a few public working drafts, have a LC, then a call for implementation phase (CR); you appear to be asking for the results of the CR phase to precede the FPWD, which is contrary to W3C process (or at least not expected or required).


(2) you seem to have ignored the fact that at least Netflix and Cox (in my role as their representative) have indicated that we would like to see the eventual standardization of CDMs.


(3) if your proposals are resisted, it is because they don't have industry or community support; if you can find support from the content community, then please propose new CDMs for standardization with their blessing; as for your proposal for a registry, Cox has no problem with this proposal, as I've indicated in previous email, provided it is partitioned into standards-based and non-standards-based segments.


So again, I do not find your claims that EME is unique (among other extensions mechanisms) to be compelling.




EME is unique among extension points because its requirements entail restrictions on interop: a CDM that anyone can reimplement can't provide the protection that CDMs aim to provide.


This is a mis-characterization of EME. EME does not place any such restrictions on interop. This is a deployment decision by users, just like whether MP4 is used rather than WebM. 

 That's why it's important and appropriate to take extra steps to maximize interop given this constraint.

I have no issue with maximizing interop, but not at the price of throwing out the baby with the bath water, which objections to a FPWD seem aimed at doing. Further, interop is not a binary condition, nor does it have a discrete transition space: changes to interop are continuous, with many intermediate levels, and they occur over extended time intervals.


None of us can predict with certainty whether interop will improve with EME, but I'm willing to predict (at least with certainty in my own mind), that interop will not become worse with EME.

 
 
By the logic that I've heard suggested, failing to publish EME without publishing the ultimately used CDM protocols/interfaces is the same as failing to publish HTML{Video,Media}Element APIs without choosing specific video formats and a standardized code integration interface.


Lack of a baseline media format is an ongoing failure. But at least the formats people actually use are well-specified, have multiple open-source implementations (even if patent-encumbered), and are widely supported in platforms via published APIs anyone can use. So when media elements were specified, there was no risk that a vendor-specific media format would become widely used on the Web. Had there been, we might have made different choices.




I'm not sure what you mean by "code integration interface". The media format analogy of my "binding requirement" proposal would be a requirement that if vendor X has APIs supporting format Y in their platform, and vendor X has a UA that supports format Y, it should be documented how to implement support for format Y using those APIs. But for media formats this is trivial; on every significant platform, it's well documented how to take the bytes of a media resource and play them using the platform's media APIs.



To use EME as a point of departure, by "code integration interface" I mean the (unspecified) interface between the UA and the CDM. I would not agree that a vendor "should" document this interface (for CDMs or media players). It is up to the vendor. There are some UAs where the UA <-> Media Player interface is publicly discoverable (e.g., Gecko and WebKit), there are some UAs where it is not (Presto, Trident) [or at least I'm not aware of a disclosed description or specification].


The point I'm making is that the CDM concept and interface abstraction is no different than the internal UA <-> Media Player interface: it is an internal matter for UA implementors.


Of course, if someone wishes to define a standard API for UA <-> CDM interfacing, then I'd certainly not object to it. But it isn't necessary as a pre-requisite for publishing EME or for obtaining content interoperability with specific CDMs.
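To make the disagreement concrete, here is a minimal sketch of what a standard UA <-> CDM boundary might look like. Every name here is hypothetical: the EME draft deliberately leaves this interface unspecified, and no such API is defined anywhere in the thread.

```javascript
// Hypothetical sketch of a UA <-> CDM boundary. None of these names come
// from the EME draft, which leaves this interface unspecified.
class FakeCdm {
  constructor() {
    this.sessions = new Map();
  }
  // The UA hands the CDM initialization data extracted from the media
  // container and gets back an opaque license request to forward to a
  // license server.
  createSession(initData) {
    const sessionId = "session-" + (this.sessions.size + 1);
    this.sessions.set(sessionId, { keys: [] });
    return { sessionId, request: { type: "license-request", initData } };
  }
  // The UA delivers the license server's response; the CDM installs the
  // keys and reports whether the session is now usable for decryption.
  update(sessionId, license) {
    const session = this.sessions.get(sessionId);
    if (!session) return false;
    session.keys = license.keys;
    return session.keys.length > 0;
  }
}
```

Whether something of this shape should be standardized, or remain an internal matter for UA implementors as argued above, is exactly the point in dispute.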

 

The formats commonly used with HTML media elements already satisfy the analogies of my proposed requirements, and there was never any doubt that they would, so there was no need to bother making sure they would.




Any restriction to content that depends on EME would come at the hands of the UA vendor (by not supporting some CDM), the content author (by requiring a specific CDM), or the content supplier/aggregator (by making choices among available content with CDM dependencies). In none of these cases is the restriction due to a limitation of EME as a specification.



I've proposed concrete improvements to EME that I believe will make these outcomes less likely. Either I'm wrong and they would have no effect, or the absence of those improvements is a "limitation of EME as a specification" making those outcomes worse.



As I've indicated, I don't object to defining a registry. Do you have other concrete suggestions? 



The W3C's mission in my reading is to provide useful specifications that can be considered to represent some level of standardization that can be implemented by UA vendors and content authors. While doing so, it should promote interoperability and openness and market competition (of ideas and technologies). What it should not do is play the role of policy gatekeeper. There are competing and antithetical policies at play in the market and the W3C should not endeavor to take sides one way or another. Taking sides is just a form of censorship, no matter which side is promoted.


The W3C can't on the one hand promote interoperability and openness and market competition, and at the same time refuse to take a position on whether its own standards promote those things or reduce them. Promoting interoperability *is* a policy.



This topic appears to revolve around whether folks think EME promotes or reduces interoperability and openness. I happen to think that it promotes interoperability and openness over the status quo. Others seem to think otherwise. So we don't have a consensus about the basic condition here.


As for W3C obligations, I believe its obligation is to follow its documented process and to encourage its mission's goals. As for promoting interoperability, I haven't seen an objective score card on the overall interoperability of the W3C's works. If such a score card were ever written, I could offer a great deal of evidence of how the W3C has reduced interoperability, e.g., by promoting alternative specifications with overlapping (or identical) functional goals.



I can't even bring myself to comment on "taking sides is just a form of censorship".




The W3C goals you quote above do not include the concept that "every content should or must be available to every user". Specifically, the W3C mission does not preclude a market for paid access to content, including reasonable measures to prevent unauthorized access.



I agree, but we have that market and also have the goal of people being able to pay for access to content independent of their hardware and software choices (excluding choices that don't support any "reasonable measures").



And it is very much the goal of content owners and content service providers to increase that access footprint to other hardware and software, and we believe that EME will improve this situation.

 



 I described some interoperability benefits of part 1 here: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20944#c4






I thought the benefit for part 2 was pretty clear from the parenthetical remark: part 2 ensures that if a CDM is implemented on top of a DRM platform, then any UA with an interface to that platform is able to implement the CDM.






Agreed, however:

- the existence of a CDM registry entry does not guarantee that an implementation of the CDM is available on a given device (any more than registry of a video codec format guarantees the video format is supported on a device)
- the absence of a CDM registry entry does not prevent any set of CDMs from being implemented and being available on a set of devices

For example, let's say MrBigContentCompany devises a CP system and publishes a non-PAS definition of the system available to anyone for free (both specification and IPR), but only under an NDA that limits disclosure of an obfuscation technique used by the system. Let's call this system "1.obfuscation.cdm.mrbigcontent.com".
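For what it's worth, a page selecting among such reverse-domain key system identifiers might look like the following sketch, modeled on the extended canPlayType(type, keySystem) probe in the early EME drafts; the helper name is made up, and "1.obfuscation.cdm.mrbigcontent.com" is the illustrative identifier from this thread.

```javascript
// Sketch: probe candidate key systems in preference order, assuming the
// early EME draft's extended canPlayType(type, keySystem). The key system
// strings are reverse-domain identifiers; all but "org.w3.clearkey" are
// hypothetical.
function pickKeySystem(video, mimeType, candidates) {
  for (const keySystem of candidates) {
    // canPlayType returns "", "maybe", or "probably"
    if (video.canPlayType(mimeType, keySystem) !== "") {
      return keySystem;
    }
  }
  return null; // no candidate supported; fall back or report an error
}
```

Note that this only tells the page whether the UA claims support for a key system; it says nothing about whether any NDA or obfuscation requirement of the kind described above has been satisfied.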



It's not published if it's under NDA.

No. "published" != PAS 

I should clarify, though, that obfuscation techniques used in the implementation of a CDM would usually not need to be part of the publication for my part 1 requirement --- since, as I understand it, they typically don't affect what "goes over the wire" and hence don't affect interop.



It depends. But one cannot make this assumption (that it isn't necessary to know the portion of the specification governed by an NDA which imposes some obfuscation/secret).

 



Now, clearly the existence of this entry in a registry doesn't guarantee that any or all UAs will implement it (they may be unwilling to sign the NDA) or that any or all content authors will permit its use for distribution (they might think the obfuscation inadequate).



Of course the registry can't *guarantee* these things. I never claimed it would, in the bug or in this thread; you've attacked a straw man.



I don't believe I've "attacked" anything. I have simply expressed some reservations about what a (strawman of a) registry will accomplish from a practical perspective. I've said numerous times I wouldn't object to a registry that is similar to the MIME Types registry.


Regards,
Glenn

Received on Saturday, 2 March 2013 02:29:24 UTC