Re: [Bug 20944] New: EME should do more to encourage/ensure CDM-level interop

From: Robert O'Callahan <robert@ocallahan.org>
Date: Thu, 28 Feb 2013 11:13:59 +1300
Message-ID: <CAOp6jLb3gfxyA0ZHOhu-PhG6wvFNPJb3e-CrfHTj-twrqW8p6A@mail.gmail.com>
To: Glenn Adams <glenn@skynav.com>
Cc: public-html-media@w3.org
On Mon, Feb 18, 2013 at 3:02 PM, Glenn Adams <glenn@skynav.com> wrote:

> I consider the following to be (some, but not all) "extension points"
> implicitly defined by HTML5:
>
>    - interface between UA and image codecs
>    - interface between UA and video codecs
>    - interface between UA and canvas context implementation
>    - interface between UA and url protocol implementation
>
> ...

> What I am claiming is that the CDM concept in EME is no different in
> principle from any of the above extension points. In particular, the same
> issues you cite for EME also hold:
>
>    - content interoperability when different UAs implement different sets
>    of codecs, canvas contexts, protocols, etc
>    - extension implementation interoperability when different UAs
>    implement different interfaces for integrating these extensions
>
>
I agree with those particulars.

> Clearly this is not a black or white issue. If it were, then we wouldn't
> see UA specific features of any kind, we wouldn't see UA specific prefixed
> implementations of pre-standardized Editorial Drafts (or pre-ED proposals).
>
> We wouldn't see new experimental protocols or content formats. We wouldn't
> see demands for data-* attributes, for user defined namespace support, for
> custom CSS values, or a large variety of other never- or pre-standardized
> features.
>

You're mixing up three very different kinds of things here:

-- Experimental implementations of features before standardization. We need
these to inform the standards process. An experimental feature on a path to
becoming a standard --- or that fails to become a standard and is removed
--- is fine. (There are situations where experimental implementations can
cause problems, and we're evolving our strategies for dealing with them,
but that's an entirely different topic; anyway I think we agree CDMs are
not in this category.)

-- data attributes, user-defined namespaces, and other extension points used
only by Web authors. These are fine and pose no threat to interop as long as
user agents do not try to interpret these extensions in any way. Indeed the
HTML5 spec says of data attributes, "User agents must not derive any
implementation behavior from these attributes or values. Specifications
intended for user agents must not define these attributes to have any
meaningful values."
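To make the author-only nature of data-* attributes concrete: they are inert carriers of author-defined values that page scripts may read, and the spec text above forbids UAs from attaching behavior to them. A minimal sketch (the markup and attribute names here are invented for illustration) of a script-side consumer:

```python
# Illustrative only: extract author-defined data-* attributes from markup,
# mirroring what a page script (not the UA) might do with element.dataset.
from html.parser import HTMLParser

class DataAttrCollector(HTMLParser):
    """Collect data-* attributes from start tags; ignore everything else."""
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name.startswith("data-"):
                self.found[name] = value

collector = DataAttrCollector()
collector.feed('<div data-track-id="42" class="card" data-role="teaser"></div>')
print(collector.found)  # {'data-track-id': '42', 'data-role': 'teaser'}
```

The point of the sketch is that the meaning of `data-track-id` and `data-role` lives entirely in author code; a conforming UA passes the values through untouched.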

-- Vendor-specific UA features which are not on a path to standardization.
These are the problem, and CDMs are in this category.

> Since all of these extension points are not only present, but officially
> sanctioned via standardized extension mechanisms, the problem is not
> whether there are non-standard extensions, but how to manage the use of
> extensions, and how to [eventually] promote to standard widely adopted
> extensions.
>

The problem --- which is unique to EME --- is that promoting common CDMs to
proper standardization is off the table. No EME proponent has proposed such
a thing. Unlike codecs or image formats, there are no off-the-shelf
standard CDMs that we can expect to be used. Even my proposals, which fall
well short of full standardization, are resisted.

EME is unique among extension points because its requirements entail
restrictions on interop: a CDM that anyone can reimplement can't provide
the protection that CDMs aim to provide. That's why it's important and
appropriate to take extra steps to maximize interop given this constraint.

> By the logic that I've heard suggested, failing to publish EME without
> publishing the ultimately used CDM protocols/interfaces is the same as
> failing to publish HTML{Video,Media}Element APIs without choosing specific
> video formats and a standardized code integration interface.
>

Lack of a baseline media format is an ongoing failure. But at least the
formats people actually use are well-specified, have multiple open-source
implementations (even if patent-encumbered), and are widely supported in
platforms via published APIs anyone can use. So when media elements were
specified, there was no risk that a vendor-specific media format would
become widely used on the Web. Had there been, we might have made different
choices.

I'm not sure what you mean by "code integration interface". The media
format analogy of my "binding requirement" proposal would be a requirement
that if vendor X has APIs supporting format Y in their platform, and vendor
X has a UA that supports format Y, it should be documented how to implement
support for format Y using those APIs. But for media formats this is
trivial; on every significant platform, it's well documented how to take
the bytes of a media resource and play them using the platform's media APIs.

The formats commonly used with HTML media elements already satisfy the
analogies of my proposed requirements, and there was never any doubt that
they would, so there was no need to bother making sure they would.

> Any restriction to content that depends on EME would come at the hands of
> the UA vendor (by not supporting some CDM), the content author (by
> requiring a specific CDM), or the content supplier/aggregator (by making
> choices among available content with CDM dependencies). In none of these
> cases is the restriction due to a limitation of EME as a specification.
>

I've proposed concrete improvements to EME that I believe will make these
outcomes less likely. Either I'm wrong and they would have no effect, or
the absence of those improvements is a "limitation of EME as a
specification" making those outcomes worse.

> The W3C's mission in my reading is to provide useful specifications that
> can be considered to represent some level of standardization that can be
> implemented by UA vendors and content authors. While doing so, it should
> promote interoperability and openness and market competition (of ideas and
> technologies). What it should not do is play the role of policy gatekeeper.
> There are competing and antithetical policies at play in the market and the
> W3C should not endeavor to take sides one way or another. Taking sides is
> just a form of censorship, no matter which side is promoted.
>

The W3C can't on the one hand promote interoperability and openness and
market competition, and at the same time refuse to take a position on
whether its own standards promote those things or reduce them. Promoting
interoperability *is* a policy.

I can't even bring myself to comment on "taking sides is just a form of
censorship".

> The W3C goals you quote above do not include the concept that "every
> content should or must be available to every user". Specifically, the W3C
> mission does not preclude a market for paid access to content, including
> reasonable measures to prevent unauthorized access.
>

I agree, but we already have that market, and we also have the goal of
letting people pay for access to content independently of their hardware and
software choices (excluding choices that don't support any "reasonable
measures").


>
>> I described some interoperability benefits of part 1 here:
>> https://www.w3.org/Bugs/Public/show_bug.cgi?id=20944#c4
>> I thought the benefit for part 2 was pretty clear from the parenthetical
>> remark: part 2 ensures that if a CDM is implemented on top of a DRM
>> platform, then any UA with an interface to that platform is able to
>> implement the CDM.
>>
>
> Agreed, however:
>
>    - the existence of a CDM registry entry does not guarantee that an
>    implementation of the CDM is available on a given device (any more than
>    registry of a video codec format guarantees the video format is supported
>    on a device)
>    - the absence of a CDM registry entry does not prevent any set of CDMs
>    from being implemented and being available on a set of devices
>
> For example, let's say MrBigContentCompany devises a CP system and
> publishes a non-PAS definition of the system available to anyone for free
> (both specification and IPR), but only under an NDA that limits disclosure
> of an obfuscation technique used by the system. Let's call this system "
> 1.obfuscation.cdm.mrbigcontent.com".
>

It's not published if it's under NDA.

I should clarify, though, that obfuscation techniques used in the
implementation of a CDM would usually not need to be part of the
publication for my part 1 requirement --- since, as I understand it, they
typically don't affect what "goes over the wire" and hence don't affect
interop.

> Now, clearly the existence of this entry in a registry doesn't guarantee
> that any or all UAs will implement it (they may be unwilling to sign the
> NDA) or that any or all content authors will permit its use for
> distribution (they might think the obfuscation inadequate).
>

Of course the registry can't *guarantee* these things. I never claimed it
would, in the bug or in this thread; you've attacked a straw man.

Rob
-- 
Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur
Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr nhgubevgl
bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng nzbat
lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe fynir
— whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir, naq gb
tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]
Received on Wednesday, 27 February 2013 22:14:28 UTC
