Re: [Bug 20944] New: EME should do more to encourage/ensure CDM-level interop

On Sun, Feb 17, 2013 at 4:36 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> I told Glenn I'd follow up his comment
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=20944#c2 on this list.
>
> Glenn Adams wrote:
>
>> (In reply to comment #0)
>>
>> This "bug" report appears to assert that conceptual CDM architecture,
>> which is essentially a UA dependent implementation feature outside the
>> scope of EME, should be proscribed in terms of how a UA vendor chooses to
>> support specific CDM functionality and which functionality is to be
>> supported by that vendor.
>>
>> This form of mandate would represent an abrupt shift in the independence
>> of UA implementors in terms of which W3C specifications are implemented, as
>> well as how implementation specific decisions are made.
>>
>
> You seem to be objecting to the idea that a specification with extension
> points can require those extensions to be subject to some form of
> standardization. In previous email I provided evidence that this is common
> practice.
>

I consider the following to be (some, but not all) "extension points"
implicitly defined by HTML5:

   - interface between UA and image codecs
   - interface between UA and video codecs
   - interface between UA and canvas context implementation
   - interface between UA and URL protocol implementation

The reason I consider them to be extension points is that the set of
codecs, contexts, and protocol services is unbounded, and is not
predefined by HTML5 or other referring W3C specifications. Nor do the
referring specifications define or mandate a minimum level of support
for specific implementations of these services. [Some external, non-W3C
specifications do define profiles that mandate specific minimum
support.]
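
To make this concrete: a page can feature-detect what stands behind
these extension points, but it cannot enumerate them, and no minimum set
is guaranteed. A minimal illustration using the standard detection APIs
(JavaScript):

    // Feature detection is all HTML5 offers at these extension points;
    // the set of codecs and contexts behind them is open-ended.
    var video = document.createElement("video");

    // canPlayType() answers "", "maybe", or "probably"; the answer
    // varies by UA because no minimum codec set is mandated.
    video.canPlayType('video/mp4; codecs="avc1.42E01E"');
    video.canPlayType('video/webm; codecs="vp8, vorbis"');

    var canvas = document.createElement("canvas");
    canvas.getContext("2d");     // a context this UA may support...
    canvas.getContext("webgl");  // ...or not; returns null if
                                 // unsupported ("experimental-webgl"
                                 // in some current UAs)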

I am not disputing that one could standardize these extension points,
and some de facto standards have emerged, such as NPAPI. However, even
then, the set of extension implementations has not been standardized.

What I am claiming is that the CDM concept in EME is no different in
principle from any of the above extension points. In particular, the same
issues you cite for EME also hold:

   - content interoperability when different UAs implement different sets
   of codecs, canvas contexts, protocols, etc.
   - extension implementation interoperability when different UAs implement
   different interfaces for integrating these extensions

[Whether a UA implementation prefers to call such an extension an
"internal module" or an "extension" is not relevant in my view. More
specifically, I am not suggesting that such an extension is associated
with a user-based plug-in mechanism. It may be, or it may not be.]

This state of affairs is simply a statement of fact about existing
interoperability issues, without taking EME into account. If you wish to
dispute this characterization, we can do so in a separate thread, but my
experience as a browser developer informs my opinion and tells me this
is true.


> I agree that W3C specs usually haven't been explicit about it, but I think
> we as a community have assumed extensions that aren't standardized are a
> problem.
>

Clearly this is not a black-or-white issue. If it were, we wouldn't see
UA-specific features of any kind, and we wouldn't see vendor-prefixed
implementations of pre-standardization Editor's Drafts (or pre-ED
proposals). We wouldn't see new experimental protocols or content
formats. We wouldn't see demands for data-* attributes, for user-defined
namespace support, for custom CSS values, or for a large variety of
other never- or pre-standardized features.

Since all of these extension points are not only present but officially
sanctioned via standardized extension mechanisms, the problem is not
whether there are non-standard extensions, but how to manage the use of
extensions, and how to [eventually] promote widely adopted extensions to
standards.

That EME defines its processing semantics by means of a didactic device
labeled "CDM" is not unusual in any manner. That EME does not choose to
pre-define a specific set of CDMs is also not unusual; it is in accord
with existing practices such as the separation of abstractions from
implementations, the division of semantics into abstract layers, etc.

I remember implementing a Layer 2 protocol called Chaosnet for
communicating with Symbolics workstations. I also remember implementing
a Layer 4 protocol called XNS-SPP to interoperate with Silicon Graphics
workstations. That these protocols could be used interoperably over
Ethernet and IP-based networks attests to the notion of layering and
abstraction in the seven-layer ISO protocol stack. Defining that stack
did not require pre-defining the bindings at each protocol layer.

Opposing EME because it doesn't pre-define CDMs, or claiming that by
declining to do so it reduces interoperability, seems like an exercise
in throwing out the accepted principles of software and system design. I
hope that is not what you are suggesting.

> Going back to your list, at the risk of repeating earlier discussion:
> -- if a UA vendor supports a media format, font format or canvas context
> that is not standardized and not on any path to standardization, then we
> have a problem. The size of the problem depends on how aggressively the
> vendor is promoting that feature and how much adoption it has received (or
> will receive).
>

The current HTML5 content architecture separates the standardization of
mechanisms for using and accessing content (of different forms) from the
process of specifying and standardizing content formats and protocols.

EME does exactly this. It provides a generic access API that is
effectively transparent with respect to any specific CDM implementation.
The process of specifying and potentially standardizing specific CDMs,
whether in terms of their intrinsic content and protocol semantics or
their concrete interfaces (for integrating with a UA), need not and
should not precede the publishing of EME itself.
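
As a rough sketch of that flow, assuming the API shape of the current
EME draft (event and method names have shifted between draft revisions,
so treat the names as illustrative; the license server URL is
hypothetical):

    var video = document.querySelector("video");

    // The application names a key system; it never addresses a CDM
    // implementation or its internal interfaces directly.
    video.addEventListener("needkey", function (event) {
      var mediaKeys = new MediaKeys("org.w3.clearkey");
      video.setMediaKeys(mediaKeys);
      var session = mediaKeys.createSession("video/mp4", event.initData);

      // The license exchange is opaque to EME: the CDM emits a message
      // and the application relays it to a license server of its own
      // choosing.
      session.addEventListener("keymessage", function (msg) {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "https://license.example.com/acquire"); // hypothetical
        xhr.responseType = "arraybuffer";
        xhr.onload = function () {
          session.update(new Uint8Array(xhr.response));
        };
        xhr.send(msg.message);
      });
    });

The only coupling between the page and a CDM is the key system string;
substituting a different CDM changes that string, not the API.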

By the logic I've heard suggested, refusing to publish EME without also
publishing the ultimately used CDM protocols/interfaces is the same as
refusing to publish the HTML{Video,Media}Element APIs without choosing
specific video formats and a standardized codec integration interface.

Since the W3C, with strong industry support, has chosen to publish
HTML{Video,Media}Element without mandating a minimum set of video codecs
or defining a codec integration interface, it does not seem reasonable
to object to EME on principles that were not followed for the media
interfaces.

Is this the perfect world? No. But there is no such thing. We don't
have perfect knowledge of what we will need in terms of media format
support or content encryption support for the next ten years, or even
the next year for that matter. The architecture and process reflect this
state of affairs, and, therefore, the same principles should apply to
EME.


>> > Such an outcome would be antithetical to the mission of the W3C,
>> > and the W3C should not bless, appear to bless, or enable such scenarios.
>>
>
You appear to see this as black or white. The mission of the W3C is to
promote the principles you cite below. Promotion does not guarantee
delivery, and even when delivery occurs, it does not occur all at once.

The content community and apparently the majority of UA vendors officially
support EME. The W3C Team has concluded that it is in scope of the HTML
Charter.

If there are reasonable, concrete actions that can be taken to improve
the semantics of EME or the API functions it defines, then I'm all in
favor of doing so. But claims that EME somehow threatens the mission of
the W3C, or threatens the ability of a UA vendor to implement and
deliver EME functionality, do not ring true to my ear or my experience.



>
>> The W3C mission is well defined by its published documents and
>> particularly the W3C Process Document.
>
>
> How about this one?
> http://www.w3.org/Consortium/mission.html
> "Web For All: The social value of the Web is that it enables human
> communication, commerce, and opportunities to share knowledge. One of W3C's
> primary goals is to make these benefits available to all people, whatever
> their hardware, software, ..."
> "Web on Everything: The number of different kinds of devices that can
> access the Web has grown immensely. Mobile phones, smart phones, personal
> digital assistants, interactive television systems, voice response systems,
> kiosks and even certain domestic appliances can all access the Web."
>
> Restricting content to a particular UA runs counter to those goals.
>

Any restriction on content that depends on EME would come at the hands
of the UA vendor (by not supporting some CDM), the content author (by
requiring a specific CDM), or the content supplier/aggregator (by making
choices among available content with CDM dependencies). In none of these
cases is the restriction due to a limitation of EME as a specification.

This is no different from today's world with Flash and Silverlight, or
with certain UA-dependent video codecs, etc. So, yes, I would agree that
EME doesn't solve the general interoperability problem that it admits by
permitting multiple CDMs. But I would also insist that this is not a new
problem. It's one we have today with other content formats, codecs,
APIs, etc.

The W3C's mission, in my reading, is to provide useful specifications,
representing some level of standardization, that can be implemented by
UA vendors and used by content authors. While doing so, it should
promote interoperability, openness, and market competition (of ideas and
technologies). What it should not do is play the role of policy
gatekeeper. There are competing and antithetical policies at play in the
market, and the W3C should not endeavor to take sides one way or the
other. Taking sides is just a form of censorship, no matter which side
is promoted.

For example, on the issue of whether the W3C should "bless
DRM-controlled content" by virtue of publishing EME, a technology that
*could* be used to disseminate such content, my response is:

   - the W3C does not bless any specific usage of any technology it defines
   - the W3C does not have ultimate control over which of the technologies
   it defines are adopted by UA vendors or adopted by content authors

In other words, the W3C does not control the Web. It defines useful
technologies, guidelines, etc., that *may* be supported by implementations
and that *may* be used by content authors. There is no way around this
basic interoperability problem. EME doesn't change this state of affairs.

It is the marketplace (of ideas, technologies, business goals, content,
etc.) that defines what the Web is. And today, that marketplace includes
DRM-controlled content, disseminated every day to millions of users who
willingly pay for access to that content. EME does not change this
equation, except that it could improve interoperability by making
content available on a wider variety of devices than is the case if one
is limited to Flash and Silverlight.


>
> The development of the EME specification has occurred within the charter
>> of the HTML WG in accordance with the W3C Process (unless the reporter is
>> claiming otherwise).
>>
>
> Following the W3C Process is not enough to ensure that the W3C's goals are
> met. Creative decision-making will always be required.
>

The W3C goals you quote above do not include the concept that "all
content should or must be available to every user". Specifically, the
W3C mission does not preclude a market for paid access to content,
including reasonable measures to prevent unauthorized access.


>
> > My proposed fix is to have EME require CDMs to be registered in a central
>> > registry. To be registered, a CDM would have to meet the following
>> > conditions:
>> >
>> > 1) Documentation must be published describing the complete operation of
>> the
>> > CDM, in enough detail to enable independent implementation in
>> user-agents
>> > and to enable content deployment by content providers, except for some
>> set
>> > of secret keys whose values may be withheld. (Similar to but weaker than
>> > IANA's "specification required" registry policy.)
>> >
>> > 2) If the CDM vendor offers functionality to third parties to decrypt
>> > content that can be decrypted by the CDM, then it must publish
>> documentation
>> > describing how to implement the CDM using that functionality. (E.g. if
>> a DRM
>> > platform vendor implements a CDM using that DRM platform, other
>> consumers of
>> > that platform must also be able to implement the same CDM.)
>>
>> The reporter does not cite any technical reason why interoperability
>> would be enhanced in the presence of such a registry or reduced in the
>> absence of such a registry. However, given other similar registries, such
>> as the IETF MIME Type registry, it may useful to consider this suggestion
>> for the simple purpose of reducing the likelihood of CDM identification
>> conflict (though one might argue that the current specified use of a
>> reversed domain name already provides adequate identification conflict).
>
>
> That is not the goal of this registry. I agree that the goal of avoiding
> ID conflict doesn't justify a registry.
>
> I described some interoperability benefits of part 1 here:
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=20944#c4
> I thought the benefit for part 2 was pretty clear from the parenthetical
> remark: part 2 ensures that if a CDM is implemented on top of a DRM
> platform, then any UA with an interface to that platform is able to
> implement the CDM.
>

Agreed, however:

   - the existence of a CDM registry entry does not guarantee that an
   implementation of the CDM is available on a given device (any more than
   registration of a video codec format guarantees that the format is
   supported on a device)
   - the absence of a CDM registry entry does not prevent any set of CDMs
   from being implemented and being available on a set of devices

For example, let's say MrBigContentCompany devises a content protection
(CP) system and publishes a non-PAS definition of the system, available
to anyone for free (both specification and IPR), but only under an NDA
that limits disclosure of an obfuscation technique used by the system.
Let's call this system "1.obfuscation.cdm.mrbigcontent.com".

Now, clearly the existence of this entry in a registry doesn't guarantee
that any or all UAs will implement it (they may be unwilling to sign the
NDA) or that any or all content authors will permit its use for
distribution (they might think the obfuscation inadequate).

On the other hand, the absence of this entry in a registry doesn't
prevent any UA from implementing it, or prevent any content author from
requiring its use for distribution. A significant community of support
and use may emerge over time. Eventually, a registry entry could be
created (or not), but that is independent of the utility or extent of
use of the system.
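
To illustrate: under the draft EME API, a page probes for this
hypothetical key system the same way whether or not a registry entry
exists, and the answer reflects only what the UA actually ships (a
sketch, reusing the made-up key system string above):

    var video = document.createElement("video");
    var support = video.canPlayType(
        'video/mp4; codecs="avc1.42E01E"',
        "1.obfuscation.cdm.mrbigcontent.com");
    // "" if this UA declined to implement it (e.g., would not sign
    // the NDA); "maybe" or "probably" if it did, registry entry or not.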

My conclusion is that, while defining a registry is certainly
reasonable (and I would certainly support it if it is similar in nature
to the MIME type registry, i.e., permits vendor-specific and private
registrations with reduced disclosure requirements), a registry is not a
prerequisite for publishing EME, and certainly not a prerequisite for
publishing a FPWD of EME.

Regards,
Glenn
