
Re: Minutes, 20 April SVG telcon

From: Brian Birtles <bbirtles@mozilla.com>
Date: Fri, 20 Apr 2012 11:27:57 +0900
Message-ID: <4F90C9AD.7010800@mozilla.com>
To: public-svg-wg@w3.org
Hi all,

Thanks for discussing the test content guidelines subject I brought up.

I'll follow up inline with a few comments.

(2012/04/20 7:40), Chris Lilley wrote:
 > test content guidelines
 >
 >     heycam: brian was asking for guidelines so they can succeed or
 >     fail in the same way, like all-green rectangles
 >
 >     tav: like red and green but if they all look the same it is
 >     confusing
 >     ... doesn't say what is being tested
 >     ... don't want a rect that covers the whole thing
 >
 >     heycam: if the tests are automated you won't need to look at
 >     them
 >
 >     Tav: so there is no debugging help

For most tests, I've found it's not needed. Generally you should focus 
your tests so that each one tests a single thing.

For cases where there are a number of possible failure scenarios, you 
can have tests that flood the viewport with green on success, purple on 
failure scenario 1, orange on failure scenario 2, etc.

That still gives you the advantages of:

(1) Easier manual inspection of individual tests (the current test suite 
is particularly weak in this regard since the success condition differs 
from test to test and is often very complex)
(2) Easier generation of reference images (a no-op)
(3) No edges that give pixel differences due to anti-aliasing
(4) Quicker feedback when running automated suites (as soon as you see 
something other than green on the screen you know you've got issues)
(5) Fewer resources required (in terms of number of files, number of 
renders required etc.)
(6) Easier to write tests (less inventiveness required)
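
As a concrete sketch of that multi-colour approach (the checks here, 
checkFeatureA/checkFeatureB, are hypothetical placeholders for whatever 
conditions a real test would probe):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Flood rect covering the viewport; the script picks one colour
       per outcome. It starts red so a script failure is also visible. -->
  <rect id="flood" width="100%" height="100%" fill="red"/>
  <script><![CDATA[
    var flood = document.getElementById("flood");
    // Hypothetical stand-ins for the conditions under test:
    if (!checkFeatureA())
      flood.setAttribute("fill", "purple");  // failure scenario 1
    else if (!checkFeatureB())
      flood.setAttribute("fill", "orange");  // failure scenario 2
    else
      flood.setAttribute("fill", "lime");    // success
  ]]></script>
</svg>
```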

 >     Cyril: point is to automate error detection

It helps with this, but I think the advantages for manual inspection of 
test results are greater.

 >     heycam: for animation, brian suggests tests where the final
 >     state is at one second so there is a snapshot to compare to the
 >     ref
 >     ... script can set the time to that point

I think it would be good to standardise the snapshot time where 
possible. There will, of course, be cases where we deviate, but having a 
standard time makes understanding the tests and debugging simpler if you 
know that generally, for example, t=5s is the key moment.
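
For example, a self-contained test with the snapshot time baked in might 
look like this (the "reftest-wait" class is the Gecko convention for 
"don't snapshot until the script says so"; treat the details as a 
sketch, not a finished proposal):

```xml
<svg xmlns="http://www.w3.org/2000/svg" class="reftest-wait">
  <!-- Red before t=4s, lime after; at the t=5s snapshot it matches
       a plain green reference. -->
  <rect width="100%" height="100%" fill="red">
    <set attributeName="fill" to="lime" begin="4s"/>
  </rect>
  <script><![CDATA[
    var svg = document.documentElement;
    svg.pauseAnimations();        // stop the document timeline
    svg.setCurrentTime(5);        // seek to the standard snapshot time, t=5s
    svg.removeAttribute("class"); // reftest-wait removed: ready to compare
  ]]></script>
</svg>
```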

 >     ChrisL: for path animations you need multiple snapshots surely

For most animation reftests I've found one snapshot is sufficient. For 
cases where you actually want to test values over time I've found it 
more efficient to use a purely scripted test where you repeatedly seek 
the timeline and query the values you're interested in. Generally, 
there's no need to render the whole scene multiple times since it's just 
one or two values that you care about.
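
A purely scripted test of that kind might look roughly like this (the 
"target" element and the expected samples are illustrative):

```javascript
// Seek the document timeline and query animated values directly,
// instead of rendering a full snapshot for each sample time.
var svg  = document.documentElement;
var rect = document.getElementById("target"); // rect whose "x" is animated
svg.pauseAnimations();
var expected = { 0: 0, 2: 50, 4: 100 };       // illustrative sample values
for (var t in expected) {
  svg.setCurrentTime(Number(t));
  var x = rect.x.animVal.value;               // animated value at time t
  // report pass/fail by comparing x against expected[t]
}
```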

For non-scripted UAs, if you design the tests so each one tests one 
specific thing (rather than just a series of samples of an animation 
that are effectively testing the same things), then I think the number 
of cases where you actually want multiple snapshots of the same 
animation will be small. I think. :)

 >     tav: examples of animation reftests?

There are hundreds here:
http://mxr.mozilla.org/mozilla-central/source/layout/reftests/svg/smil/

They all use script to set the snapshot time. It would be good to just 
declare that in markup so non-scripted UAs can run the tests.

 >     heycam: ok so aiming for a single green or red rect is not
 >     good, but if it is a simple pass/fail result then go for the
 >     rect

I'm less concerned about the red flood fill. For transforms, for 
example, you could fill the canvas with red, then transform a green rect 
so that it should fill the canvas.
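
For example (a sketch; any error in the transform leaves red showing 
through):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Red flood underneath; the transformed green rect should hide it. -->
  <rect width="100%" height="100%" fill="red"/>
  <!-- A 200x150 rect scaled by 2 should exactly cover the 400x300 canvas. -->
  <rect width="200" height="150" fill="lime" transform="scale(2)"/>
</svg>
```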

There will *definitely* be some cases where you don't want the green 
flood fill, but, depending on the section of the spec, I think there 
will not be too many. Have a look at:

http://mxr.mozilla.org/mozilla-central/source/layout/reftests/svg/smil/reftest.list

On the right-hand side you see "anim-standard-ref.svg" and "lime.svg" 
over and over again. Go into the subfolders and it's the same: e.g. the 
'syncbase' folder uses "green-box-ref.svg/xhtml" as the reference image 
for all 85 tests.
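
For reference, the manifest format is just one test/reference pair per 
line; the test filenames below are illustrative:

```
# reftest.list: "== test reference" means the two must render identically
== anim-fillcolor-1.svg    lime.svg
== anim-x-discrete-1.svg   anim-standard-ref.svg
== anim-width-linear-1.svg lime.svg
```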

The fact that there are multiple standard reference images, 
"anim-standard-ref.svg", "lime.svg", "green-box-ref.svg" etc. is 
something I want to avoid in the SVG test suite and one of the reasons I 
brought this topic up. We should just have green.svg.

 >     ChrisL: sometimes you can lose track if all the tests look the
 >     same
 >
 >     (general agreement)

I don't understand this. Can someone explain?

I expect the SVG 2 test suite will contain thousands and thousands of 
test files. In that case, you keep track of them by name, not 
appearance. But perhaps I've missed the point here.


I don't want a hard rule about "you must have success = green flood 
fill" or anything like that. I'm just trying to avoid:

(1) The current SVG test suites where it's really hard to tell at a 
glance if a test has passed.
(2) The situation we got into with Gecko where we have a number of 
"standard" reference files.


Regards,

Brian Birtles
Received on Friday, 20 April 2012 02:28:29 GMT
