Testing (was: Re: Minutes, 20 October 2016 SVG WG telcon)

Thoughts below…

On Fri, Oct 21, 2016 at 12:35 AM, Nikos Andronikos
<Nikos.Andronikos@cisra.canon.com.au> wrote:
> Testing
>    [11]https://github.com/w3c/svgwg/wiki/Testing
>    <AmeliaBR> scribenick: AmeliaBR
>    Nikos: I created this wiki (link above) to describe the main
>    process for how to create tests.
>    ... We've mostly agreed to use web-platform-tests, but it is
>    very web-browser specific; we may need to make adjustments for
>    other user agents like Inkscape.
>    Tav: I think it's straightforward so long as the reference is
>    either SVG or PNG.
>    Nikos: PNG has issues, because then we'd probably need separate
>    reference images for each platform.
>    AmeliaBR: Is that really true? If you're using default system
>    fonts, then yes, but for most SVG you should get the same
>    rendering on all platforms.
>    Tav: And for fonts, we can use web fonts to get consistency.
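
For context, a web-platform-tests reftest declares its own reference
via a link element, so an SVG test can point at either an SVG or a PNG
reference. A minimal sketch (the file names are invented for
illustration):

```xml
<!-- circle-green.svg: the test -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:html="http://www.w3.org/1999/xhtml">
  <title>Circles are painted with the fill property</title>
  <html:link rel="match" href="circle-green-ref.svg"/>
  <circle cx="50" cy="50" r="40" fill="green"/>
</svg>
```

The harness screenshots both files and compares the results, so the
reference only has to produce the same rendering, not the same markup.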

There are definitely problems with using PNGs as references:
anti-aliasing varies between platforms, which affects anything that
doesn't fit nicely on the pixel grid (common enough in SVG tests), and
it especially affects font rendering, where on top of anti-aliasing
differences we also have font hinting differences (which typically
vary between OS releases, too).

All the browsers that currently use PNGs for some references want to
move away from them, and changes invalidating several thousand PNGs
are impossible to manually verify, so you just have to assume nothing
bad changed.
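
If PNG references were used with any tolerance at all, that tolerance
would have to be explicit in the comparison. A toy sketch of a fuzzy
pixel comparison (purely illustrative; not how wpt actually compares
screenshots):

```python
def images_match(a, b, max_channel_diff=0, max_differing_pixels=0):
    """Compare two same-sized images given as flat lists of (r, g, b)
    tuples. The default thresholds demand an exact match; raising
    them tolerates small anti-aliasing differences between
    platforms."""
    differing = sum(
        1 for pa, pb in zip(a, b)
        if any(abs(ca - cb) > max_channel_diff
               for ca, cb in zip(pa, pb))
    )
    return differing <= max_differing_pixels
```

Even then, picking thresholds that absorb hinting differences without
masking real regressions is the hard part.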

>    AmeliaBR: I'd be more worried about using SVG as a reference.
>    User agents may match the reference rendering on their own
>    system, but still not be cross-compatible because of
>    lower-level rendering issues.

Such broken rendering tends to be immediately apparent as soon as you
try to render a single page/image, as it almost invariably implies
basic functionality is broken.

>    Tav: Either way, the main concern is that the references aren't
>    HTML files, since we can't run them to compare.
>    Nikos: You could use some other rendering agent to convert
>    those to PNG references.
>    Tav: That adds a new level of complexity.
>    Nikos: The issue is that web-platform-tests doesn't give us
>    strict control over what gets added. Anyone who's had a good
>    reputation with the project gets write access to approve tests.

It might be practical to add a requirement that the reference of any
SVG test is also an SVG (though obviously there are limits to what's
practical/possible: a scripted SVG could of course just add HTML
elements to it dynamically).
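
Such a requirement could even be checked mechanically. A rough sketch
of what a lint rule might look like (hypothetical; the wpt lint tool
doesn't currently enforce this, and the regex assumes the rel
attribute precedes href):

```python
import re

# Matches <link rel="match" href="..."> reference declarations.
# Assumes rel comes before href, which is the common form.
MATCH_LINK = re.compile(r'rel="match"\s+href="([^"]+)"')

def non_svg_references(svg_source):
    """Return the hrefs of any reference links in an SVG test's
    source that do not point at .svg files."""
    return [href for href in MATCH_LINK.findall(svg_source)
            if not href.endswith(".svg")]
```

A rule like this would flag an HTML reference for an SVG test while
letting SVG references through, though it can't catch the
scripted-SVG escape hatch mentioned above.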

>    Tav: There are many things about it I like. I like being able
>    to click and see the reference image. I like having versioning
>    control.
>    ... One problem is that it is quite a huge repository, and it
>    will get even larger as CSS joins.
>    Nikos: So you don't want to download the whole repo?
>    Tav: The repo itself is not a big problem, but watching for all
>    the issues could be very overwhelming. Having it all in one
>    repo seems not very manageable.
>    ... I expect it's something that's been discussed internally,
>    but for now we'll have to deal with it as best we can.

Almost all PRs are automatically labelled by a bot and most issues are
manually labelled, so
<https://github.com/w3c/web-platform-tests/labels/svg>, for example,
lists all the issues for the SVG spec.

>    Nikos: We could make a fork of the repo, and use it to do all
>    the subject-matter discussions, then when we decide on those,
>    push them to the main repo for a final approval of the format.
>    AmeliaBR: I like that idea if we can make it work. Funnel
>    things through a 2-stage pull request.

Note that trying to land very large PRs has typically not worked out
well (either for review or for avoiding merge conflicts).

>    Tav: There's already SVG tests there. Mozilla uploaded a lot in
>    the past few weeks.
>    Nikos: Yes, one of the goals of web-platform-tests is to make
>    it easy for browsers to dump their internal testing suites, to
>    quickly add new tests.
>    Tav: I like the idea of making it easy to add tests. But that
>    doesn't always make it easy to use them.

Given almost all contributed tests are automated, what makes it hard
to make use of them?


Received on Saturday, 22 October 2016 20:54:17 UTC