Re: DRM Today-based test case for EME

Ok, so I have updated the Pull Request per our discussion.

https://github.com/w3c/web-platform-tests/pull/3313

I would like to get moving with porting over additional tests, so if there
are no objections, and I can get the CI checks to pass, I will merge this
shortly.

Note that we presently have different content for the clearkey and drm
tests. I am just waiting on the key from Greg, and then I'll
drop the Chimera content.

...Mark

On Thu, Jul 21, 2016 at 5:01 PM, Mark Watson <watsonm@netflix.com> wrote:

>
>
> On Thu, Jul 21, 2016 at 4:46 PM, Jerry Smith (WPT) <jdsmith@microsoft.com>
> wrote:
>
>> I don’t understand the role of the HTML files.  My understanding was that
>> the harness would run any js files in the directory, if they were properly
>> annotated as tests.  It sounds like this can be done in an HTML file that
>> lists out the underlying tests files?
>>
>
> The harness runs all HTML files it finds in the specified directory /
> pattern. (It might also do what you say, but I'm not aware of that).
>
> Each HTML file could contain multiple tests, but we're following the
> approach where each file contains only one test, as discussed.
>
>
>
>>
>>
>> Please retain a recognizable name snippet from the current test files in
>> new “drm” copies to assist in updating coverage.
>>
>>
>>
>> *From:* Mark Watson [mailto:watsonm@netflix.com]
>> *Sent:* Thursday, July 21, 2016 12:53 PM
>> *To:* Jerry Smith (WPT) <jdsmith@microsoft.com>
>> *Cc:* Greg Rutz <G.Rutz@cablelabs.com>; David Dorwin <ddorwin@google.com>;
>> Matthew Wolenetz <wolenetz@google.com> (wolenetz@google.com) <
>> wolenetz@google.com>; Philippe Le Hegaret (plh@w3.org) <plh@w3.org>;
>> Francois Daoust <fd@w3.org>; public-hme-editors@w3.org; Iraj Sodagar <
>> irajs@microsoft.com>; John Simmons <johnsim@microsoft.com>; Paul Cotton <
>> Paul.Cotton@microsoft.com>; Sukhmal Kommidi <skommidi@netflix.com>
>>
>> *Subject:* Re: DRM Today-based test case for EME
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Jul 21, 2016 at 12:45 PM, Jerry Smith (WPT) <
>> jdsmith@microsoft.com> wrote:
>>
>> Mark:  At one point, you mentioned that it should be sufficient for “drm”
>> tests to just test the single supported CDM.  This may be okay for our V1
>> test suite, but I’m not sure it’s an assumption that will hold up over
>> time.  If a browser supports two CDMs, then we need a way for it to control
>> testing of each.  We specifically will want to avoid the test logic going
>> through a list of testable DRMs and testing the first one it finds as
>> supported.
>>
>>
>>
>> We may elect to use the single CDM approach for now, but it would be good
>> to give some thought to what it would mean to support multiple.  The
>> brute-force, but not very scalable, solution would be to clone the tests
>> per tested DRM.
>>
>>
>>
>> I think we have settled on the main test code being JS files, each of
>> which runs one test for a provided (keysystem, media) pair.
>>
>>
>>
>> Then we will hand construct, and eventually auto-generate, HTML files for
>> the combinations we want.
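>>
>> For example, each hand-written wrapper might be little more than this (a
>> rough sketch; the helper name and config fields are illustrative, not
>> something we have agreed on yet):
>>
>>   <!DOCTYPE html>
>>   <html>
>>   <head>
>>     <title>EME temporary session, drm, mp4</title>
>>     <script src="/resources/testharness.js"></script>
>>     <script src="/resources/testharnessreport.js"></script>
>>     <script src="generic-temporary-cenc.js"></script>
>>   </head>
>>   <body>
>>   <script>
>>     // Illustrative only: run the shared test once for this
>>     // (keysystem, media) pair.
>>     runTest({ keysystem: 'drm', mediaType: 'mp4' });
>>   </script>
>>   </body>
>>   </html>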
>>
>>
>>
>> There are various ways the auto-generated HTML files could run tests for
>> multiple DRMs on a browser that supported multiple, but I think we can work
>> that out later: the main test logic will not need to change.
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>> Jerry
>>
>>
>>
>> *From:* Greg Rutz [mailto:G.Rutz@cablelabs.com]
>> *Sent:* Thursday, July 21, 2016 12:36 PM
>> *To:* Mark Watson <watsonm@netflix.com>; David Dorwin <ddorwin@google.com
>> >
>> *Cc:* Jerry Smith (WPT) <jdsmith@microsoft.com>; Matthew Wolenetz <
>> wolenetz@google.com> (wolenetz@google.com) <wolenetz@google.com>;
>> Philippe Le Hegaret (plh@w3.org) <plh@w3.org>; Francois Daoust <fd@w3.org>;
>> public-hme-editors@w3.org; Iraj Sodagar <irajs@microsoft.com>; John
>> Simmons <johnsim@microsoft.com>; Paul Cotton <Paul.Cotton@microsoft.com>;
>> Sukhmal Kommidi <skommidi@netflix.com>
>>
>>
>> *Subject:* Re: DRM Today-based test case for EME
>>
>>
>>
>> This is excellent.  I’m glad we came up with a way to minimize code
>> duplication while still working within the W3C framework.
>>
>>
>>
>> On 7/21/16, 1:13 PM, "Mark Watson" <watsonm@netflix.com> wrote:
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Jul 21, 2016 at 11:49 AM, David Dorwin <ddorwin@google.com>
>> wrote:
>>
>> I assumed the mp4 and webm directories were just the content, which is
>> currently the case. Other than some targeted tests, such as testing
>> specific initDataTypes, "encrypted" event generation for various formats,
>> or testing playback of specific types, most tests should be
>> media-independent. See how
>> https://github.com/w3c/web-platform-tests/pull/3317 finds any supported
>> type and uses it. (The tests that use media files need some additional
>> work.)
>>
>>
>>
>> Thus, I think (drm|clearkey)-xxxx.html should be sufficient. It would be
>> nice if we didn't need to maintain wrappers, but this will work for now.
>> Writing the tests in .js files also makes it easier to add more tests later
>> if we or implementers wish. We should design the JS files with such
>> extensibility in mind. For example:
>>
>> function runTest(keySystem = null, mediaConfig = null) {
>>     if (!keySystem) keySystem = selectSupportedNonClearKeyKeySystem();
>>     if (!mediaConfig) mediaConfig = getSupportedConfigAndMediaFiles();
>>     // Do test with (keySystem, mediaConfig).
>> }
>>
>>
>>
>> While not required now, it would be nice if we could automatically
>> generate the .html files with a script. For example, for each file in the
>> test-scripts/ directory, generate an HTML file that calls it for each of
>> "drm" and "clearkey". Again, implementers and others could update this
>> script to test multiple commercial DRM systems and/or types (or even modify
>> it to run the tests in their own infrastructure without necessarily
>> generating the HTML files).
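>>
>> Just to sketch what such a generator might look like (the directory name,
>> file naming, and wrapper template below are assumptions for illustration,
>> written here as a small Node script):
>>
>>   // sketch: one HTML wrapper per (test script, key system) combination
>>   const fs = require('fs');
>>   const path = require('path');
>>
>>   const wrapper = (script, keysystem) =>
>>     '<!DOCTYPE html>\n<html>\n<head>\n' +
>>     '  <script src="/resources/testharness.js"></script>\n' +
>>     '  <script src="/resources/testharnessreport.js"></script>\n' +
>>     '  <script src="' + script + '"></script>\n' +
>>     '</head>\n<body>\n' +
>>     '<script>runTest({ keysystem: "' + keysystem + '" });</script>\n' +
>>     '</body>\n</html>\n';
>>
>>   for (const file of fs.readdirSync('test-scripts')) {
>>     if (!file.endsWith('.js')) continue;
>>     for (const keysystem of ['drm', 'clearkey']) {
>>       const name = keysystem + '-' + path.basename(file, '.js') + '.html';
>>       fs.writeFileSync(name, wrapper('test-scripts/' + file, keysystem));
>>     }
>>   }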
>>
>>
>>
>> Please review and merge the PR above before migrating the existing tests.
>>
>>
>>
>> Ok, done.
>>
>>
>>
>> Sukhmal is working on a configurable test. Likely it will accept a
>> "config" object and then it would indeed be a good idea for it to fill in
>> any missing fields with default values. The configurable things to begin
>> with will be the DRM type and the media files / types.
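>>
>> (Roughly along these lines, I imagine; the field names and default values
>> here are just placeholders until Sukhmal's version lands:)
>>
>>   // sketch: fill in any missing config fields with defaults
>>   function mergeConfig(config) {
>>     var defaults = {
>>       keysystem: 'org.w3.clearkey',
>>       audioType: 'audio/mp4; codecs="mp4a.40.2"',
>>       videoType: 'video/mp4; codecs="avc1.4d401e"'
>>     };
>>     return Object.assign({}, defaults, config || {});
>>   }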
>>
>>
>>
>> It should then be possible to auto-generate the HTML files, but perhaps
>> we'll create a few by hand to begin with and see how we go.
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thursday, July 21, 2016, Greg Rutz <G.Rutz@cablelabs.com> wrote:
>>
>> OK — Given the limitations of the test framework, Mark’s approach seems
>> acceptable to me.
>>
>>
>>
>> On 7/21/16, 8:08 AM, "Mark Watson" <watsonm@netflix.com> wrote:
>>
>>
>>
>> Hi Greg,
>>
>>
>>
>> You cannot pass arguments to the tests, or configure the test runner to
>> run multiple times with different arguments.
>>
>>
>>
>> You can run multiple tests from one HTML file (WebCrypto has files with
>> tens of thousands of tests), which is what I originally proposed on June
>> 21st. But there were comments saying we should have one test per HTML file.
>> Additionally, such large files tend to time out, so for our tests involving
>> playback you cannot put too many in one file. At this point we should pick
>> an approach. We only have a week left.
>>
>>
>>
>> I was not proposing duplicating all the test code in every HTML file. I
>> was proposing a JS file which could run any of four versions of the test
>> (drm|clearkey)x(webm|mp4) and then four HTML files which each basically set
>> the configuration and call the JS. So, the actual test code would be common
>> between DRM and ClearKey as you suggest.
>>
>>
>>
>> What is missing in my proposal is the possibility to test multiple DRMs
>> on one browser. But we have no browsers that support multiple DRMs, so I
>> suggest we leave that for another day.
>>
>>
>>
>> Could I get comments on the Pull Request asap, please. I'd like to devote
>> some time today to creating more tests following that pattern.
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Jul 21, 2016 at 4:00 AM, Greg Rutz <G.Rutz@cablelabs.com> wrote:
>>
>> (apologies for my late response — I’m in Europe this week)
>>
>>
>>
>> I am unfortunately not familiar with the W3C test harness.  Is it at all
>> possible to pass “arguments” when you select a test to run?  It seems that
>> by extending the JSON configuration that is currently used for the
>> multi-DRM (drmconfig.json), you could also pass the media mime types for
>> particular test configuration.  So, instead of having separate HTML test
>> files for each media type, it could simply be passed in as part of the test
>> configuration.
>>
>>
>>
>> Also, do we really need separate files for ClearKey?  I understand that
>> not all tests would be valid for a ClearKey configuration, but isn’t
>> ClearKey just another key system in the eyes of the EME spec?  Sure, the
>> spec provides some normative language to describe what key messages look
>> like, but other than that, you still create key sessions, retrieve a
>> license (in some fashion), and pass that license to update().
>>
>>
>>
>> I know we are trying to get this done soon and this might be introducing
>> too complex an architecture into the tests, but EME seems like a
>> pretty new paradigm within the W3C that has so many optional features that
>> it would make sense to minimize the amount of “cut-and-paste” test code
>> just to support additional key systems and media types.
>>
>>
>>
>> G
>>
>>
>>
>> On 7/20/16, 7:06 PM, "Mark Watson" <watsonm@netflix.com> wrote:
>>
>>
>>
>> All,
>>
>>
>>
>> I have some time tomorrow to work on this and would like us to start
>> making progress on the drm tests, so that we can have a substantial number
>> ready this week. Our deadline is, after all, basically the end of next week.
>>
>>
>>
>> Has anyone had a chance to review the Pull Request I sent this morning ?
>> Is that a good template ? I would prefer not to invest time migrating lots
>> of tests to that pattern only to have people ask for significant changes to
>> be applied to many files.
>>
>>
>>
>> Can we agree to the model of four HTML files for each test (clearkey-mp4,
>> clearkey-webm, drm-mp4, drm-webm) calling a common JS test file ?
>>
>>
>>
>> Finally, one possibility for also getting results for tests using
>> polyfills would be to create a script which can take all the tests and add
>> polyfill <script> elements to create new copies in a subdirectory. You
>> would then have a complete copy of all tests, with an easy way to
>> regenerate (the polyfilled versions may or may not be checked in).
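>>
>> A rough sketch of what that script could do (the polyfill path and the
>> subdirectory name are made up for illustration):
>>
>>   // sketch: copy each test into polyfilled/, adding a polyfill <script>
>>   const fs = require('fs');
>>
>>   if (!fs.existsSync('polyfilled')) fs.mkdirSync('polyfilled');
>>   for (const file of fs.readdirSync('.')) {
>>     if (!file.endsWith('.html')) continue;
>>     const src = fs.readFileSync(file, 'utf8');
>>     const out = src.replace('<head>',
>>         '<head>\n  <script src="/polyfills/eme-polyfill.js"></script>');
>>     fs.writeFileSync('polyfilled/' + file, out);
>>   }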
>>
>>
>>
>> ...Mark
>>
>>
>>
>> On Wed, Jul 20, 2016 at 4:58 PM, Mark Watson <watsonm@netflix.com> wrote:
>>
>>
>>
>>
>>
>> On Wed, Jul 20, 2016 at 4:42 PM, Jerry Smith (WPT) <jdsmith@microsoft.com>
>> wrote:
>>
>> Would these actually be specific DRMs?
>>
>>
>>
>> drm-mp4-temporary-cenc.html
>>
>> drm-webm-temporary-cenc.html
>>
>>
>>
>> i.e., separate files for each DRM supported in the test.  That would group
>> Widevine and PlayReady files together, so they would likely execute in
>> sequence (and as a group).
>>
>>
>>
>> Or does “drm” stand for “multi-drm”?
>>
>>
>>
>> It just means using a DRM rather than using ClearKey. Which DRM to use
>> would depend on the browser (I'm assuming each browser only supports one
>> and the test auto-detects which one to use).
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>> *From:* Mark Watson [mailto:watsonm@netflix.com <watsonm@netflix.com>]
>> *Sent:* Wednesday, July 20, 2016 4:18 PM
>> *To:* Jerry Smith (WPT) <jdsmith@microsoft.com>
>> *Cc:* David Dorwin <ddorwin@google.com>; Greg Rutz <G.Rutz@cablelabs.com>;
>> Matthew Wolenetz <wolenetz@google.com> (wolenetz@google.com) <
>> wolenetz@google.com>; Philippe Le Hegaret (plh@w3.org) <plh@w3.org>;
>> Francois Daoust <fd@w3.org>; public-hme-editors@w3.org; Iraj Sodagar <
>> irajs@microsoft.com>; John Simmons <johnsim@microsoft.com>; Paul Cotton <
>> Paul.Cotton@microsoft.com>; Sukhmal Kommidi <skommidi@netflix.com>
>>
>>
>> *Subject:* Re: DRM Today-based test case for EME
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jul 20, 2016 at 3:37 PM, Jerry Smith (WPT) <jdsmith@microsoft.com>
>> wrote:
>>
>> A RegExp can tell the runner to re-run each found test (under some path)
>> for a list of keySystems?  That sounds pretty good.
>>
>>
>>
>> No, it can just select a subset of the html files to run.
>>
>>
>>
>>
>>
>> Does this work better if scripts are in a sub-folder?  If so, then maybe
>> these folders under encrypted-media make sense:
>>
>>
>>
>> - clearkey
>> - multidrm
>> - mp4
>> - webm
>> - util
>>
>>
>>
>> Well, there are permutations and combinations:
>>
>> - any clearkey test that involves media could be run with either mp4 or
>> webm, but it is not clear that it is necessary to do so.
>>
>> - the drm tests on some browsers will only work with mp4/cenc
>>
>>
>>
>> Here's a suggestion for a naming convention:
>>
>>
>>
>> (drm|clearkey)-(mp4|webm)-xxxx.html
>>
>>
>>
>> We could then have a file, generic-xxxx.js, which would contain most of
>> the test code and would be called from the (at most) 4 HTML files named
>> as above.
>>
>>
>>
>> We could convert the proposed drmtoday-temporary-cenc.html into
>> generic-temporary-cenc.js and
>>
>>
>>
>> drm-mp4-temporary-cenc.html
>>
>> drm-webm-temporary-cenc.html
>>
>> clearkey-mp4-temporary-cenc.html
>>
>> clearkey-webm-temporary-cenc.html
>>
>>
>>
>> WDYAT ?
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>> Jerry
>>
>>
>>
>> *From:* Mark Watson [mailto:watsonm@netflix.com <watsonm@netflix.com>]
>> *Sent:* Wednesday, July 20, 2016 3:29 PM
>> *To:* David Dorwin <ddorwin@google.com>
>> *Cc:* Greg Rutz <G.Rutz@cablelabs.com>; Matthew Wolenetz <
>> wolenetz@google.com> (wolenetz@google.com) <wolenetz@google.com>; Jerry
>> Smith (WPT) <jdsmith@microsoft.com>; Philippe Le Hegaret (plh@w3.org) <
>> plh@w3.org>; Francois Daoust <fd@w3.org>; public-hme-editors@w3.org;
>> Iraj Sodagar <irajs@microsoft.com>; John Simmons <johnsim@microsoft.com>;
>> Paul Cotton <Paul.Cotton@microsoft.com>; Sukhmal Kommidi <
>> skommidi@netflix.com>
>> *Subject:* Re: DRM Today-based test case for EME
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jul 20, 2016 at 3:11 PM, David Dorwin <ddorwin@google.com> wrote:
>>
>> The abstraction Greg describes makes sense, at least to my rough
>> understanding. Greg, would we vary the test configurations or are all
>> configurations always present and just a way of isolating the logic for
>> each key system?
>>
>>
>>
>> In case there is any uncertainty, I want to emphasize that most of the
>> "Google clearkey tests" are really just EME API tests that happen to use
>> Clear Key. (The reason they use Clear Key (and WebM) is related to the
>> fact that they are Blink layout tests that need to run inside a subset of
>> the code, pass in Chromium, and not depend on external servers.) Most interact with
>> at least a portion of the Clear Key CDM implementation, meaning the
>> behavior and results depend in part on the Clear Key implementation. This
>> is similar to how most media tests are also testing a specific
>> pipeline/decoder. There are some tests that explicitly test Clear Key
>> behavior defined in https://w3c.github.io/encrypted-media/#clear-key,
>> and we should ensure these are labeled "clearkey" in the path. Everything
>> else should probably be converted to general tests.
>>
>>
>>
>> Ok, so IIUC, the process we should follow for each test currently in the
>> Google directory (and any others we want to add) is:
>>
>> (i) migrate this test to the framework / utilities we have just proposed,
>> including the drmtoday infrastructure, to create a test using a real DRM
>>
>> (ii) make a copy of that test that just uses the Clear Key options in
>> that same framework / utilities
>>
>>
>>
>> (It may not make sense to do both for every test)
>>
>>
>>
>> After we have migrated all the tests, we can remove the Google directory.
>>
>>
>>
>> We would then have mp4 versions of all the tests and we may want to
>> (re)create some WebM ones. I don't expect we need to do every test with
>> both WebM and mp4.
>>
>>
>>
>> The only way I can see to selectively run tests is to specify a path or
>> RegExp in the test runner, so we should agree on a naming convention
>> and/or folder hierarchy to organize the tests.
>>
>>
>>
>>
>>
>> Mark, my concern is that using Clear Key, which is almost certainly
>> simpler than any other system, could paper over API design, etc. issues for
>> other systems. In practice, I don't think this should be an issue since
>> Edge doesn't implement Clear Key. (Thus, I also think we should err on the
>> side of excluding Clear Key for now.)
>>
>>
>>
>> It's a valid concern, but so is the problem that we have a hard
>> deadline, so I think we should err on the side of gathering as much
>> evidence as we can and providing it with appropriate caveats.
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>> For full coverage, all supported combinations would be executed
>> (something I discussed earlier:
>> https://lists.w3.org/Archives/Public/public-hme-editors/2016Jun/0100.html and
>> https://lists.w3.org/Archives/Public/public-hme-editors/2016Jun/0104.html). It
>> would be nice if we could get results for the general tests run on each key
>> system (and type), but we'd need to create some infrastructure.
>>
>>
>>
>> David
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jul 20, 2016 at 1:17 PM, Mark Watson <watsonm@netflix.com> wrote:
>>
>> Greg - this makes sense and it would be easy to take the drmtoday test we
>> have written and make a new clearkey version of that by enhancing the utils
>> and the config as you describe.
>>
>>
>>
>> However, we already have a clearkey version of that test in the Google
>> directory (which uses its own utils). So, doing what you say would increase
>> the commonality / consistency between the tests, but it wouldn't get us
>> more tests.
>>
>>
>>
>> David - the clearkey results are useful information for the
>> implementation report. Again, as with tests based on polyfills, they
>> validate the API design, implementability and specification. These are
>> factors in the decision as well as the current state of commercially useful
>> features in commercial browsers. We are in the unusual situation of not
>> being able to just wait until implementations have matured, so this is
>> going to be an unusual decision.
>>
>>
>>
>> ...Mark
>>
>>
>>
>> On Wed, Jul 20, 2016 at 1:05 PM, Greg Rutz <G.Rutz@cablelabs.com> wrote:
>>
>> For (B), I wasn’t suggesting that there be two different tests in one
>> file; I was suggesting that we put operations like license requests into
>> utils files that would perform either DRMToday or ClearKey license
>> requests.  For DRMToday, the implementation in these utils files would make
>> the request to the actual DRMToday license server.  For ClearKey, the
>> implementation would likely return a response message that is placed into
>> the test configuration JSON (drmconfig.json in the example test created by
>> Sukhmal).  The JSON config file can help configure both the key system and
>> the desired license response message that we need in order to properly
>> execute the test.
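>>
>> Something like the following, say (I am making the field names up, but it
>> shows the config carrying the key system selection, the ClearKey license
>> response as a JWK set, and the media types):
>>
>>   {
>>     "keysystems": {
>>       "com.widevine.alpha": { "licenseServerUrl": "..." },
>>       "org.w3.clearkey": {
>>         "licenseResponse": { "keys": [ { "kty": "oct", "kid": "...", "k": "..." } ] }
>>       }
>>     },
>>     "media": [
>>       { "type": "video/mp4; codecs=\"avc1.4d401e\"", "path": "..." }
>>     ]
>>   }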
>>
>>
>>
>> G
>>
>>
>>
>> On 7/20/16, 1:30 PM, "Mark Watson" <watsonm@netflix.com> wrote:
>>
>>
>>
>> So, what we have right now is:
>>
>> (1) A large number of ClearKey-only tests in a "Google" folder, and
>>
>> (2) One of those tests (basic playback) migrated to DRM Today, in the
>> root folder
>>
>>
>>
>> There are two approaches:
>>
>> (A) Keep ClearKey and DRM tests separate: move the "Google" tests into
>> the root or a "clearkey" folder, continue making new DRMToday versions of
>> each of those ClearKey tests
>>
>> (B) Make the DRMToday test also support ClearKey, continue making new
>> ClearKey+DRMToday versions of each of the Google tests and, eventually,
>> drop the Google folder
>>
>>
>>
>> For (B), we need to run two tests in one file, which requires some care
>> with async tests, and there have been comments that we should not have multiple
>> tests in one file.
>>
>>
>>
>> Opinions ?
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jul 20, 2016 at 11:27 AM, Greg Rutz <G.Rutz@cablelabs.com> wrote:
>>
>> I think the test utilities should be designed to be as DRM-independent as
>> possible.  This would allow us to run any of the test cases that apply to
>> ClearKey simply by providing a DRMConfig and test content that indicates
>> use of ClearKey.  I apologize that I have not been following the EME spec
>> progression that much over the last 12-18 months, but I recall there not
>> being a ton of differences between ClearKey support and other DRMs as I
>> implemented it in dash.js.
>>
>>
>>
>> For test cases that are valid for ClearKey, the test case would simply
>> execute multiple times on the UA under test — once with ClearKey content
>> and one or more additional times for the “real” DRMs that are to be tested
>> on that UA.  No sense in maintaining separate test code if we don’t have to.
>>
>>
>>
>> G
>>
>>
>>
>> On 7/20/16, 10:34 AM, "Mark Watson" <watsonm@netflix.com> wrote:
>>
>>
>>
>> Question: should we expand this test case to cover ClearKey ? Or will we
>> rely on the tests in the Google folder for ClearKey ?
>>
>>
>>
>> If the latter, should we move those tests into the main directory (I see
>> they are now working) ? Or, if others would like to add ClearKey tests,
>> should they add them to the Google folder ?
>>
>>
>>
>> ...Mark
>>
>>
>>
>> On Tue, Jul 19, 2016 at 7:18 PM, Mark Watson <watsonm@netflix.com> wrote:
>>
>> All,
>>
>>
>>
>> Sukhmal has created a Pull Request for a temporary session test case
>> using DRM Today. We have tested this on Chrome with Widevine and it should
>> work on Edge with PlayReady as well:
>>
>>
>>
>> https://github.com/w3c/web-platform-tests/pull/3313
>>
>>
>>
>> Please review this and comment on whether it is a good template / model
>> for us to work from. We can quickly migrate more of the Google clearkey
>> tests to drmtoday, as well as implement tests for other session types
>> based on this model.
>>
>>
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>

Received on Friday, 22 July 2016 00:06:15 UTC