Re: Additional EME tests

Hi David,

I've been wondering about the same things for the MSE test suite. Some 
comments inline.


On 23/06/2016 09:01, David Dorwin wrote:
> For Blink, we tried to follow our understanding of the WPT style, which
> was that each test case be a separate file. In some cases, especially
> the syntax tests, there are multiple categories of tests together. I
> think readability is also important, even if it means duplication. (Of
> course, API changes or refactorings can be monotonous when they have to
> be applied to many files, but that should be rarer now.) As to which
> approach we take for new tests, I defer to the WPT experts.

I don't qualify as a WPT expert, but my understanding is that it is 
somewhat up to the people who write and review the tests. In the MSE 
test suite, a given test file often checks a particular algorithm and 
contains multiple test cases to cover its different steps. I 
personally find that approach useful and readable as well.


> I think we probably do want individual tests for various media types,
> etc. For example, downstream users (i.e. user agent vendors) should be
> able to say "I know I don't support foo, so all the "-foo.html" tests
> are expected to fail." For tests that aren't specifically about a type
> (or key system), the tests should select a supported one and execute the
> tests.

I quickly glanced at the HTML test suite for media elements to see how 
tests were written there:
https://github.com/w3c/web-platform-tests/tree/master/html/semantics/embedded-content/media-elements

Most test files seem to pick a supported MIME type, using common 
functions defined in:
https://github.com/w3c/web-platform-tests/blob/master/common/media.js
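
If I read the helpers correctly, usage looks something like the 
following sketch (illustrative only: the media path and test name are 
placeholders, and I'm assuming getVideoURI falls back between .mp4 
and .ogv based on what canPlayType reports, as the file suggests):

  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <script src="/common/media.js"></script>
  <script>
  async_test(function(t) {
    var video = document.createElement("video");
    // getVideoURI appends .mp4 or .ogv to the base path, depending
    // on what canPlayType reports as supported.
    video.src = getVideoURI("/media/movie_5");
    video.addEventListener("loadedmetadata", t.step_func_done());
  }, "loads metadata for whichever container is supported");
  </script>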

There are exceptions to the rule, such as tests on the "canPlayType" 
method that contain test cases explicitly marked as "(optional)":
http://w3c-test.org/html/semantics/embedded-content/media-elements/mime-types/canPlayType.html

For MSE, most tests can be written without imposing a particular MIME 
type (with a few exceptions, e.g. to test the "generate timestamps 
flag"), and it seems a good idea to keep the number of MIME-type 
specific tests minimal to improve the readability of the 
implementation report. Whenever possible, we need a MIME-agnostic 
version of the tests to assess the "at least two PASS" condition in 
the report.
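
For instance, a MIME-agnostic MSE test could select its type at 
runtime along these lines (a sketch of my own, not code from the test 
suite; the candidate list and helper name are made up):

  // Return the first MIME type that MediaSource claims to support.
  function getSupportedMimeType() {
    var candidates = [
      'video/webm; codecs="vp8, vorbis"',
      'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'
    ];
    return candidates.filter(function(type) {
      return MediaSource.isTypeSupported(type);
    })[0];
  }

  test(function() {
    var type = getSupportedMimeType();
    assert_not_equals(type, undefined, "no supported MIME type found");
    // ... attach a MediaSource and call addSourceBuffer(type) once
    // the "sourceopen" event fires, etc.
  }, "runs against whichever MIME type the user agent supports");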


> Ideally, it would be possible to force such tests to run all supported
> variants. For example, Chrome might want to run the tests with both MP4
> and WebM. encrypted-media-syntax.html, for example, tries both WebM
> and/or CENC types based on whether they are supported, requires all
> supported to pass, and ensures that at least one was run. This has the
> advantage of testing both paths when supported, though it's not
> verifiable anywhere that both ran. I don't know whether it would be
> useful to be able to say run all the tests with WebM then repeat with CENC.

I've been wondering about that as well for the MSE tests. Passing a 
test for a given MIME type does not necessarily imply that the test 
also passes when another supported MIME type is used. Running every 
test against all supported variants would make the tests harder to 
write, though: more error-prone, harder to debug, and slightly harder 
for user agent vendors to tell what failed in practice. It's often 
easier to create one test case per variant.

In the end, what could perhaps work is a 
"createGenericAndVariantTests" method that takes a list of variants 
as input and replaces the usual calls to "test" or "async_test". It 
would generate a generic test case that picks the first supported 
variant, together with a set of variant test cases, marked as 
optional, that test the same thing for each and every variant.

The generic test case would give the result needed for the 
implementation report. The additional optional test cases could help 
user agent vendors detect issues with a particular variant, and they 
should be easy to filter out of the implementation report as needed, 
provided they are consistently flagged with "(optional)".
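
To make the idea more concrete, here is a rough sketch of what I have 
in mind (the signature and the use of MediaSource.isTypeSupported as 
the support check are tentative; EME tests would need a different 
check, e.g. based on requestMediaKeySystemAccess):

  // "variants" is a list of MIME types, "testBody" receives the
  // variant to run against, "title" names the test case.
  function createGenericAndVariantTests(variants, testBody, title) {
    var supported = variants.filter(function(v) {
      return MediaSource.isTypeSupported(v);
    });

    // Generic test case: runs against the first supported variant
    // and provides the PASS/FAIL result for the implementation
    // report.
    test(function() {
      assert_greater_than(supported.length, 0, "no supported variant");
      testBody(supported[0]);
    }, title);

    // One test case per variant, consistently flagged as
    // "(optional)" so that it can be filtered out of the report.
    variants.forEach(function(variant) {
      test(function() {
        testBody(variant);
      }, title + " with " + variant + " (optional)");
    });
  }

An async_test flavour would be needed as well, but the overall shape 
would stay the same.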

Francois.

>
> Regarding the test content, it would be nice to use a common set of keys
> across all the tests and formats. This will simplify utility functions,
> license servers, debugging, etc. Also, we may want to keep the test files
> small.
>
> David
>
> On Tue, Jun 21, 2016 at 9:16 PM, Mark Watson <watsonm@netflix.com> wrote:
>
>     All,
>
>     I have uploaded some additional EME test cases here:
>     https://github.com/mwatson2/web-platform-tests/tree/clearkey-success/encrypted-media
>
>     I have not created a pull request, because there is overlap with the
>     Blink tests.
>
>     I have taken a slightly different approach, which is to define one
>     function, eme_success, which can execute a variety of different test
>     cases based on a config object passed in. There are currently only
>     four: temporary / persistent-usage-record with different ordering of
>     setMediaKeys and setting video.src, but it is easy to add more with
>     different initData approaches, different media formats and different
>     key systems.
>
>     What approach do we want to take? The Blink approach of a different
>     file for every individual case will bloat as we add different
>     session types, initData types, media formats and key systems.
>
>     On the other hand, each of the Blink test cases is very
>     straightforward to follow, whereas the combined one is less so.
>
>     My branch also includes some mp4 test content, the key for which is
>     in the clearkeysuccess.html file.
>
>     ...Mark

Received on Thursday, 23 June 2016 08:49:26 UTC