- From: David Dorwin <ddorwin@google.com>
- Date: Thu, 23 Jun 2016 00:01:08 -0700
- To: Mark Watson <watsonm@netflix.com>
- Cc: public-hme-editors@w3.org, Francois Daoust <fd@w3.org>, Philippe Le Hégaret <plh@w3.org>, John Rummell <jrummell@google.com>
- Message-ID: <CAHD2rsiVkAdAb5VdNSsoDcbb3JDFM-sMtRbJLDHfU4Q6jGDEHg@mail.gmail.com>
For Blink, we tried to follow our understanding of the WPT style, which was that each test case be a separate file. In some cases, especially the syntax tests, there are multiple categories of tests together. I think readability is also important, even if it means duplication. (Of course, API changes or refactorings can be monotonous when they have to be applied to many files, but that should be rarer now.) As to which approach we take for new tests, I defer to the WPT experts.

I think we probably do want individual tests for the various media types, etc. For example, downstream users (i.e. user agent vendors) should be able to say "I know I don't support foo, so all the -foo.html tests are expected to fail."

For tests that aren't specifically about a type (or key system), the test should select a supported one and run against it. Ideally, it would be possible to force such tests to run all supported variants. For example, Chrome might want to run the tests with both MP4 and WebM. encrypted-media-syntax.html, for example, tries WebM and/or CENC types depending on which are supported, requires all supported ones to pass, and ensures that at least one was run. This has the advantage of testing both paths when supported, though it's not verifiable anywhere that both ran. I don't know whether it would be useful to be able to say "run all the tests with WebM, then repeat with CENC."

Regarding the test content, it would be nice to use a common set of keys across all the tests and formats. This will simplify utility functions, license servers, debugging, etc. Also, we may want to keep the test files small.

David

On Tue, Jun 21, 2016 at 9:16 PM, Mark Watson <watsonm@netflix.com> wrote:
> All,
>
> I have uploaded some additional EME test cases here:
> https://github.com/mwatson2/web-platform-tests/tree/clearkey-success/encrypted-media
>
> I have not created a pull request, because there is overlap with the Blink
> tests.
>
> I have taken a slightly different approach, which is to define one
> function, eme_success, which can execute a variety of different test cases
> based on a config object passed in. There are currently only four:
> temporary / persistent-usage-record with different ordering of setMediaKeys
> and setting video.src, but it is easy to add more with different initData
> approaches, different media formats and different keysystems.
>
> What approach do we want to take? The Blink approach of a different file
> for every individual case will bloat as we add different session types,
> initData types, media formats and keysystems.
>
> On the other hand, each of the Blink test cases is very straightforward to
> follow, whereas the combined one is less so.
>
> My branch also includes some mp4 test content, the key for which is in the
> clearkeysuccess.html file.
>
> ...Mark
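[Editor's illustration] A minimal sketch of the config-driven approach Mark describes above, assuming Clear Key and testharness.js. The body of eme_success here is guessed for illustration only, and playEncryptedContent is a hypothetical helper; neither is code from Mark's branch.

```javascript
// Minimal sketch, not Mark's actual implementation: one driver function,
// many small config objects. playEncryptedContent is a hypothetical helper.
function eme_success(config, description) {
  promise_test(function(test) {
    return navigator.requestMediaKeySystemAccess(config.keysystem, [{
      initDataTypes: [config.initDataType],
      videoCapabilities: [{contentType: config.contentType}],
      sessionTypes: [config.sessionType]
    }]).then(function(access) {
      return access.createMediaKeys();
    }).then(function(mediaKeys) {
      // The config could also control the ordering of setMediaKeys() and
      // setting video.src, the initData approach, and so on.
      return playEncryptedContent(test, mediaKeys, config);
    });
  }, description);
}

eme_success({keysystem: 'org.w3.clearkey',
             initDataType: 'cenc',
             contentType: 'video/mp4; codecs="avc1.42E01E"',
             sessionType: 'temporary'},
            'temporary session, setMediaKeys before video.src');
```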
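[Editor's illustration] Likewise, a minimal sketch of the "test every supported type, require at least one" pattern David describes for encrypted-media-syntax.html. This is not the actual test file; runSyntaxChecks is a hypothetical stand-in for the real checks.

```javascript
// Probe each candidate configuration, run the checks for every supported
// one, and fail only if none was supported at all.
var candidates = [
  {label: 'webm', initDataTypes: ['webm'],
   videoCapabilities: [{contentType: 'video/webm; codecs="vp8"'}]},
  {label: 'cenc', initDataTypes: ['cenc'],
   videoCapabilities: [{contentType: 'video/mp4; codecs="avc1.42E01E"'}]}
];

promise_test(function(test) {
  var ranAtLeastOne = false;
  return candidates.reduce(function(chain, candidate) {
    return chain.then(function() {
      return navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
        initDataTypes: candidate.initDataTypes,
        videoCapabilities: candidate.videoCapabilities
      }]).then(function(access) {
        ranAtLeastOne = true;
        return runSyntaxChecks(access, candidate);  // hypothetical test body
      }, function() {
        // This type is unsupported on this user agent; skip rather than fail.
      });
    });
  }, Promise.resolve()).then(function() {
    assert_true(ranAtLeastOne, 'at least one supported type was tested');
  });
}, 'Syntax checks run for every supported content type');
```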
Received on Thursday, 23 June 2016 07:01:58 UTC