Handling slight differences in anti-aliasing for SVG 2 test suite

Hi all,

I’m looking to gather some wisdom and advice from people who have more experience with reftests than I do.

It’s a common problem that reference images may differ very slightly from the test renderings. This seems to be compounded by the reftest model for SVG.

To give an example (ignore 2b):

These are some marker tests from SVG 1.1, converted to Web Platform Tests format. They fail under WebKit.
Because of the various transforms that markers go through, it becomes very difficult to craft a test whose output is easy to reproduce in the reference. We end up with very slight differences in the user-space coordinate values, and this results in anti-aliasing differences.

My gut feeling is that this is going to be very hard to avoid in the SVG 2 test suite for anything other than very simple tests.
But browsers must have come up against this issue with SVG before, so what do you do?

I can see a few possible options:
1. A dumb tolerance: up to x% of pixels may differ (yuk)
2. A smarter image diff that allows differences only within 1px of object boundaries
3. When testing multiple levels of transform, apply a filter to each SVG test to reduce it to 1bpp, with any value below 1.0 going to zero
4. Try harder to craft tests that always match 100% (as I said above, I think this will be very difficult)
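To make option 2 concrete, here is a minimal sketch of what such a boundary-tolerant diff might look like. This is a hypothetical helper, not part of any existing harness; it assumes the rendered images are available as 2D greyscale NumPy arrays, and it treats any pixel adjacent to a value change in the reference as an "object boundary":

```python
# Sketch of a boundary-tolerant image diff (option 2 above).
# Assumes 2D greyscale numpy arrays; a hypothetical helper, not a real harness.
import numpy as np

def edge_mask(img, radius=1):
    """Boolean mask: True where a pixel lies within `radius` px of a
    value change (an object boundary) in `img`."""
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    # A pixel is on an edge if it differs from any 4-connected neighbour.
    edges[:-1, :] |= img[:-1, :] != img[1:, :]
    edges[1:, :]  |= img[1:, :] != img[:-1, :]
    edges[:, :-1] |= img[:, :-1] != img[:, 1:]
    edges[:, 1:]  |= img[:, 1:] != img[:, :-1]
    # Dilate the edge mask by `radius` using shifted ORs.
    out = edges.copy()
    for _ in range(radius):
        grown = out.copy()
        grown[:-1, :] |= out[1:, :]
        grown[1:, :]  |= out[:-1, :]
        grown[:, :-1] |= out[:, 1:]
        grown[:, 1:]  |= out[:, :-1]
        out = grown
    return out

def fuzzy_match(ref, test, radius=1):
    """Pass if every mismatching pixel sits within `radius` px of an
    object boundary in the reference image."""
    mismatch = ref != test
    return bool(np.all(~mismatch | edge_mask(ref, radius)))
```

The idea is that anti-aliasing artifacts only ever appear along shape edges, so a mismatch far from any edge in the reference would still be flagged as a genuine failure, while sub-pixel AA noise along boundaries passes.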

What do others think? I’d love some input on this.

Received on Monday, 28 November 2016 23:31:55 UTC