Re: Additional EME tests

On Thu, Jun 23, 2016 at 12:01 AM, David Dorwin <ddorwin@google.com> wrote:

> For Blink, we tried to follow our understanding of the WPT style, which
> was that each test case be a separate file. In some cases, especially the
> syntax tests, there are multiple categories of tests together. I think
> readability is also important, even if it means duplication. (Of course,
> API changes or refactorings can be monotonous when they have to be applied
> to many files, but that should be rarer now.) As to which approach we take
> for new tests, I defer to the WPT experts.
>
> I think we probably do want individual tests for various media types, etc.
> For example, downstream users (i.e. user agent vendors) should be able to
> say "I know I don't support foo, so all the "-foo.html" tests are expected
> to fail. For tests that aren't specifically about a type (or key system),
> the tests should select a supported one and execute the tests.
>

Certainly, there need to be individual tests, but a single file can
contain several tests. For each file, the test page reports how many of
the tests within it passed and how many failed. In WebCrypto we have a
file with 20,000 tests :-)
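
For instance, a single file might look roughly like this (a minimal
sketch using testharness.js; the particular assertions are illustrative
only, not tests from either branch):

    // Two independent test cases in one file; testharness.js reports a
    // separate pass/fail result for each.
    test(function() {
        assert_true('requestMediaKeySystemAccess' in navigator,
                    'EME entry point should exist');
    }, 'navigator.requestMediaKeySystemAccess is present');

    promise_test(function() {
        return navigator.requestMediaKeySystemAccess('org.w3.clearkey',
            [{ initDataTypes: ['keyids'] }]);
    }, 'Clear Key supports the keyids initData type');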

However, I do like that the Blink tests are small and easy to read. Another
reason to keep files small is that the WPT framework has a 60-second timeout
for any given file. Since it takes a few seconds to start and verify
playback, we can't have too many tests in one file unless we can adjust
this timeout.

Ideally, we would generalize along at least five axes, either by
generalizing the existing tests or by creating new files with different
versions of each test:
- test all the media types the browser claims to support
- test all the initData types the browser claims to support
- test all the session types the browser claims to support
- test all the key systems the browser claims to support
- for cenc, test both keysystem-specific and common-format initData

We do not need to test every possible combination of the above, and we do
not need to run every one of the existing Blink tests for each of these
combinations. But it is not straightforward to work out which combinations
we do need, and which tests need to run on multiple combinations.

We perhaps need a utility function which calculates which combinations of
the above a browser claims to support (as a subset of the combinations the
test framework supports). There would then be one test which looks at the
set of supported combinations and checks that it is non-empty :-)

The list of supported combinations would then be an input to at least some
of the other tests, which would exercise each combination individually.
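
As a rough sketch of what that helper might look like (everything here is
hypothetical: the function name, its parameters and the candidate lists
are not in the current branch), it could simply probe
requestMediaKeySystemAccess for each candidate combination:

    // Hypothetical helper: probe which (keysystem, initDataType, mediaType,
    // sessionType) combinations the browser claims to support.
    function getSupportedCombinations(keysystems, initDataTypes, mediaTypes,
                                      sessionTypes) {
        var candidates = [];
        keysystems.forEach(function(keysystem) {
            initDataTypes.forEach(function(initDataType) {
                mediaTypes.forEach(function(mediaType) {
                    sessionTypes.forEach(function(sessionType) {
                        candidates.push({ keysystem: keysystem,
                                          initDataType: initDataType,
                                          mediaType: mediaType,
                                          sessionType: sessionType });
                    });
                });
            });
        });

        return Promise.all(candidates.map(function(c) {
            var config = [{ initDataTypes: [ c.initDataType ],
                            videoCapabilities: [ { contentType: c.mediaType } ],
                            sessionTypes: [ c.sessionType ] }];
            // Resolve to the combination if supported, null otherwise.
            return navigator.requestMediaKeySystemAccess(c.keysystem, config)
                .then(function() { return c; }, function() { return null; });
        })).then(function(results) {
            return results.filter(function(c) { return c !== null; });
        });
    }

    // The "supported combinations is non-empty" test could then be:
    promise_test(function() {
        return getSupportedCombinations(
            [ 'org.w3.clearkey' ],
            [ 'cenc', 'webm', 'keyids' ],
            [ 'video/mp4; codecs="avc1.64001F"', 'video/webm; codecs="vp8"' ],
            [ 'temporary', 'persistent-usage-record' ])
        .then(function(combinations) {
            assert_greater_than(combinations.length, 0,
                                'at least one combination should be supported');
        });
    }, 'Browser supports at least one combination');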


>
> Ideally, it would be possible to force such tests to run all supported
> variants. For example, Chrome might want to run the tests with both MP4 and
> WebM. encrypted-media-syntax.html, for example, tries WebM and/or CENC
> types based on whether they are supported, requires all supported types to pass,
> and ensures that at least one was run. This has the advantage of testing
> both paths when supported, though it's not verifiable anywhere that both
> ran. I don't know whether it would be useful to be able to say run all the
> tests with WebM then repeat with CENC.
>
> Regarding the test content, it would be nice to use a common set of keys
> across all the tests and formats. This will simplify utility functions,
> license servers, debugging, etc. Also, we may want to keep the test files
> small.
>

For our part, we don't have a workflow to easily package content with a
specific key / key id. There is test MP4 content, cropped to ~10 seconds,
in the branch linked below. Do you have a way to create a WebM file with
the same key / key id? I guess we could then hard-code all the Clear Key
messages.
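
For what it's worth, a hard-coded Clear Key message is just a JSON Web Key
Set, so it could be as simple as the following (the kid / k values below
are placeholders, not the key used by the test content):

    // Hypothetical hard-coded Clear Key license; kid and k are base64url
    // (unpadded) encodings of the 16-byte key id and key.
    var clearKeyLicense = {
        keys: [ {
            kty: 'oct',
            kid: 'LwVHf8JLtPrv2GUXFW2v_A',
            k:   '0F_8bdaO-VYhqqhbqYopuw'
        } ]
    };

    // In the session's 'message' handler:
    //   session.update(new TextEncoder().encode(JSON.stringify(clearKeyLicense)));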

...Mark



>
> David
>
> On Tue, Jun 21, 2016 at 9:16 PM, Mark Watson <watsonm@netflix.com> wrote:
>
>> All,
>>
>> I have uploaded some additional EME test cases here:
>> https://github.com/mwatson2/web-platform-tests/tree/clearkey-success/encrypted-media
>>
>> I have not created a pull request, because there is overlap with the
>> Blink tests.
>>
>> I have taken a slightly different approach, which is to define one
>> function, eme_success, which can execute a variety of different test cases
>> based on a config object passed in. There are currently only four:
>> temporary / persistent-usage-record with different ordering of setMediaKeys
>> and setting video.src, but it is easy to add more with different initData
>> approaches, different media formats and different keysystems.
>>
>> What approach do we want to take? The Blink approach of a different file
>> for every individual case will bloat as we add different session types,
>> initData types, media formats and keysystems.
>>
>> On the other hand, each of the Blink test cases is very straightforward
>> to follow, whereas the combined one is less so.
>>
>> My branch also includes some MP4 test content, the key for which is in
>> the clearkeysuccess.html file.
>>
>> ...Mark
>>
>>
>>
>>
>>
>

Received on Thursday, 23 June 2016 16:35:56 UTC