Re: which version is correct?

On 16-06-11 05:05, Frost, Jon A. wrote:
> My opinion is that at some point there should be specific guidance regarding the visual output.  At least for common sets of SVG code such as "basic" filter effects.  The test suites seem to do this to an extent with the PNG image comparisons, but we should eventually look into adding a more detailed and more automated visual QA process as part of the SVG unit tests.
> 
> I have only heard of a few such systems over the years.  Just as an example, this is a more recent one:
> http://www.sencha.com/forum/showthread.php?117974-Can-Visual-QA-integrate-with-my-CI-processes
> ...

This actually sounds a bit like Inkscape's rendering tests, and there
are a few problems with this approach. First of all, the current tests
contain far too much text to yield reproducible results across different
machines, so for Inkscape I wrote a script that strips out all text.
However, this obviously does not help with tests for text-related features.

Also, it is often not easy to judge whether output is correct, as
problems can be subtle and the specification does not define any
tolerances. The lack of tolerances is especially problematic when
rendering shapes. For things like fills and filters one could imagine
that a maximum error of 1 or 2 levels (assuming 8-bit colors) might be
acceptable, but if a shape is slightly off the difference can be much
larger, so would that still be acceptable? As far as I know the
specification also does not say where the center of a pixel lies (does
the upper-left pixel describe the area from (0,0) to (1,1), sampled at
(0.5,0.5), or is it the value at (0,0)?), nor does it say exactly how
certain rendering hints should be interpreted.
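Just to make the tolerance idea concrete, a per-channel check could look
roughly like the sketch below (the function name and the flat
list-of-(r, g, b)-tuples image representation are mine, purely for
illustration):

```python
def within_tolerance(img_a, img_b, max_diff=2):
    """True if two equally sized images, given as flat lists of
    (r, g, b) tuples with 8-bit channels, differ by no more than
    max_diff levels in every channel."""
    return all(
        abs(ca - cb) <= max_diff
        for pixel_a, pixel_b in zip(img_a, img_b)
        for ca, cb in zip(pixel_a, pixel_b)
    )
```

Such a check works reasonably for flat fills, but as noted above, a
shape edge that is off by even one pixel blows straight through any
per-channel threshold.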

If you want to compare files automatically, the above means that
requiring exact equality with a reference image is probably not
workable, so the question becomes how to define a suitable
"approximate" equality. For Inkscape's rendering tests I used a tool
specifically meant for gauging how visible differences are, but it is
definitely not perfect.
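One simple notion of "approximate" equality (an illustration only, not
the tool I actually used) is to tolerate small per-channel errors
everywhere while also allowing a tiny fraction of outlier pixels, e.g.
along antialiased edges; the thresholds below are made-up numbers:

```python
def approx_equal(img_a, img_b, max_diff=2, max_outlier_fraction=0.001):
    """Approximate equality for two equally sized images given as
    flat lists of (r, g, b) tuples. A pixel is an outlier if any
    channel differs by more than max_diff levels; the images count
    as equal if at most max_outlier_fraction of the pixels are
    outliers."""
    outliers = sum(
        1
        for pixel_a, pixel_b in zip(img_a, img_b)
        if any(abs(ca - cb) > max_diff for ca, cb in zip(pixel_a, pixel_b))
    )
    return outliers <= max_outlier_fraction * len(img_a)
```

Of course, a purely statistical criterion like this still knows nothing
about how visible a difference actually is.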

Furthermore, many of the present tests "test too much". For example,
most filter tests simply use the default of linearRGB, so an
implementation that does not support linearRGB immediately fails pretty
much all filter tests if you take a very strict approach. And as far as
I know there is currently no fully conforming implementation anyway (at
least not to the precision that would be required), so it is virtually
impossible to create true reference outputs. That is apart from the
fact that deciding whether you have a fully conforming implementation
is a bit of a chicken-and-egg problem.
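For what it's worth, tests that are not about color spaces could in
principle pin the color space explicitly via the
color-interpolation-filters property (sRGB instead of the linearRGB
default), so that only the feature under test is exercised, something
like this hypothetical fragment:

```xml
<!-- hypothetical test for feGaussianBlur only; forcing sRGB keeps a
     missing linearRGB implementation from failing the test -->
<filter id="blur" color-interpolation-filters="sRGB">
  <feGaussianBlur stdDeviation="2"/>
</filter>
```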

So far the best way I have found of dealing with most of the above is
to create so-called "patch" files where possible: simpler SVG files
that should look exactly the same as the original test. However, this
approach has a few problems of its own, especially if you were to apply
it to a general SVG test suite. In particular, with Inkscape's tests
the patch generally relies on (my) knowledge of what Inkscape does and
does not render correctly, and it might be hard to create "patch" files
for all tests.
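As a hypothetical illustration of the idea (not taken from any actual
suite): a test exercising feFlood could be paired with a patch file
containing nothing but a plain rectangle, and the two files should
render pixel-identically:

```xml
<!-- test file: a red square produced via a filter -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <filter id="f" x="0" y="0" width="1" height="1">
    <feFlood flood-color="red"/>
  </filter>
  <rect width="100" height="100" filter="url(#f)"/>
</svg>

<!-- "patch" file: should look exactly the same -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <rect width="100" height="100" fill="red"/>
</svg>
```

Note the explicit filter region (x/y/width/height) in the test file;
the default region extends 10% beyond the bounding box, which would
make the two renderings differ.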

However, I do believe the "patch file" approach to be the easiest route
to automated rendering tests. To make them useful for interoperability
tests it would be necessary to agree upon some subset of SVG that most
renderers will have no trouble with, yet is powerful enough to describe
the output of most tests. That subset could then be tested separately,
which is hopefully easier (either by creating reference files or by
deciding manually). For certain filters this is quite tricky, but I
guess one could try to rely on high-resolution PNG renderings (obtained
through some trusted method).

Received on Thursday, 16 June 2011 10:36:55 UTC