- From: Benjamin Schaaf <ben.schaaf@gmail.com>
- Date: Thu, 16 Feb 2017 11:29:22 +1100
- To: Simon Pieters <simonp@opera.com>
- Cc: public-test-infra@w3.org
I'm not really sure how we can tackle the user-agent-defined margin in
the rendering tests. Perhaps an easy way to change the margin
automatically throughout the tests would help? How do other test suites
handle this?

As for categorization, I just tested the help link you suggested and it
doesn't seem to be reflected in the test output. So producing a report
from the help links would involve parsing all the tests themselves
(only a bit more work). But a categories.json may still be useful for
more high-level reports, e.g. "This browser passes 97% of the parsing
tests for cues, but 0% of the parsing tests for regions". I've put
rough sketches of the generated-test and categories.json ideas at the
end of this mail.

On Wed, Feb 15, 2017 at 11:23 PM, Simon Pieters <simonp@opera.com> wrote:
> On Tue, 14 Feb 2017 08:41:15 +0100, Benjamin Schaaf
> <ben.schaaf@gmail.com> wrote:
>
>> ---------- Forwarded message ----------
>> From: Benjamin Schaaf <ben.schaaf@gmail.com>
>> Date: Tue, Feb 14, 2017 at 12:48 PM
>> Subject: Test Changes Proposal
>> To: David Singer <singer@apple.com>, Simon Pieters <simonp@opera.com>,
>> Silvia Pfeiffer <silviapfeiffer1@gmail.com>
>>
>> Hello,
>>
>> I've put together a proposal for reorganising WebVTT's
>> web-platform-tests to make tests easier to write and read, and to add
>> categorization. I'd like to get input on this before I start working
>> on it.
>>
>> Thanks,
>> Benjamin Schaaf
>>
>>
>> # WebVTT web-platform-tests changes proposal
>>
>> A lot of the current test setup is limited in some way, e.g. the
>> `webvtt-file-parsing` tests can only check direct attributes of
>> WebVTT cues and the number of cues generated. Instead of building on
>> top of the current tests, I'd like to rewrite the way they are run so
>> that tests are easier to write.
>>
>> For `webvtt-file-parsing` I'd like to generate tests from a template
>> that includes a WebVTT file and some JS assertions given the video
>> object.
>
> This sounds OK.
>
>> For `webvtt-cue-text-parsing-rules` I'd like to change the format to
>> be more easily readable and to clean up buildtests.py (or rewrite it).
>
> The format in the .dat files is the same as the tree-builder tests in
> html5lib-tests, which are used for testing the HTML parser. But I'm
> certainly open to using something else here.
>
>> I think the api tests are generally fine as they are (in terms of the
>> way tests are run).
>>
>> I haven't dived deep enough into the rendering tests to come up with
>> anything concrete, but there should be a nice way to combine a WebVTT
>> file, the test file and a ref file into a more easily writeable and
>> readable single-file format that we can generate the rest from.
>
> Yes, there is room for improvement in the rendering tests, but also
> some challenges that I'm not sure how to address. In particular the
> specification allows for a UA-defined margin (to dodge overscan or
> just to make it look better), but the tests do not account for this.
> Also, Safari's default rendering is quite different from what the
> specification requires. Chromium also has some default padding on the
> cue background box, I think, which causes many tests to fail.
>
>> ## Directory Structure
>>
>> I'd also like to clean up the directory structure. I'm not sure if
>> any deeper directories are needed:
>>
>>     webvtt/
>>         api/
>>             VTTCue/
>>             VTTRegion/
>>         parsing/
>>             file-parsing/
>>             cue-text-parsing/
>>         rendering/
>>             TBD
>
> As James said, this will result in some work for vendors to update
> test expectations, but I think many tests need some updates anyway and
> there is missing coverage, so we can do this.
>
> There is some documentation about directory structure, though the
> documentation itself is in the process of being moved...
>
> Old docs:
> http://testthewebforward.org/docs/test-format-guidelines.html#test-locations
>
> New docs (will soon stop existing at this location per gsnedders):
> https://gsnedders.github.io/wpt-docs/writing-tests/general-guidelines.html#file-paths-and-names
>
>> ## Categorization
>>
>> I propose we categorize tests by keeping track of the category of a
>> test separately (e.g. in a `categories.json` file). We can then use
>> the JSON output of the test runner to work out which categories/parts
>> of the spec are well supported.
>>
>> The tests for `2dcontext` do something similar.
>
> `<link rel="help" href="(spec link with fragid)">` in each test is the
> way web-platform-tests maps a test to the section of the spec it is
> testing. (With a fallback to directory structure; I think the idea is
> that the first directory somehow maps to some spec and the last
> directory maps to an id within that spec.)
>
> Is a categories.json file still helpful if there are <link>s?
>
> Thank you!
>
> --
> Simon Pieters
> Opera Software
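PS: To make the `webvtt-file-parsing` idea quoted above a little more
concrete, here's a rough sketch of what a single generated test could
look like. It's only illustrative: support/simple.vtt and the exact
assertions are made up, and the real help link would point at the
relevant spec section (which would also cover your <link rel="help">
question, since the generator can emit one per test):

    <!DOCTYPE html>
    <title>WebVTT file parsing: a simple cue</title>
    <link rel="help" href="https://w3c.github.io/webvtt/#file-parsing">
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    // Generated from a hypothetical support/simple.vtt containing:
    //
    //   WEBVTT
    //
    //   00:00:00.000 --> 00:00:01.000
    //   Hello world
    async_test(function(t) {
      var video = document.createElement('video');
      var track = document.createElement('track');
      track.onload = t.step_func_done(function() {
        var cues = track.track.cues;
        assert_equals(cues.length, 1, 'number of cues');
        assert_equals(cues[0].startTime, 0, 'startTime');
        assert_equals(cues[0].endTime, 1, 'endTime');
        assert_equals(cues[0].text, 'Hello world', 'text');
      });
      track.onerror = t.unreached_func('track failed to load');
      track.src = 'support/simple.vtt';
      video.appendChild(track);
      // Setting the mode (rather than relying on default track
      // selection) kicks off the track fetch.
      track.track.mode = 'hidden';
    }, 'a simple cue is parsed into a single VTTCue');
    </script>

The template itself would then only need to contain the .vtt text and
the assertion body; everything else is boilerplate the generator emits.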
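And for categorization, the categories.json I have in mind is just a
small mapping from test paths to category labels (the paths and labels
below are made up):

    {
      "parsing/file-parsing/cue-timings.html": ["parsing", "cues"],
      "parsing/file-parsing/regions.html": ["parsing", "regions"],
      "rendering/cues/basic.html": ["rendering", "cues"]
    }

A report generator could then join this against the runner's JSON
results to produce the per-category pass rates I mentioned at the top.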
Received on Thursday, 16 February 2017 00:29:56 UTC