
Re: JPEG Quality test

From: Philip Taylor <pjt47@cam.ac.uk>
Date: Mon, 29 Nov 2010 20:45:54 +0000
Message-ID: <4CF41102.3050404@cam.ac.uk>
To: Jakob Nilsson-Ehle <jnehle@gmail.com>
CC: James Graham <jgraham@opera.com>, Kris Krueger <krisk@microsoft.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>

Jakob Nilsson-Ehle wrote:
> Right, I know it is not required by the specification, but these tests
> are still there to test JPEG encoding, and passing a browser that does
> not support JPEG is misleading.
> 
>  If the point of the test suite is to test only what is required, then
> having them at all is nonsensical. And if that's not the point, then
> having them pass on something they don't support (even though it's not
> required) is misleading.

The main point when I wrote the tests was to find bugs in browsers. If a 
browser doesn't implement JPEG compression then that's not a bug (since 
the spec allows it), so it's misleading for the test to return the same 
result as if there really was a bug. Since the only possible results are 
"pass" and "fail", that means it should pass.
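A minimal sketch of that decision (an assumed shape, not the actual test code): since the spec says toDataURL falls back to returning a PNG when the requested type is unsupported, a test can classify the returned data URL by its MIME prefix and treat the spec-permitted fallback as a pass.

```javascript
// Hedged sketch (not the actual test code): classify the string returned
// by canvas.toDataURL("image/jpeg") by its MIME prefix. Per the spec, an
// unsupported type falls back to PNG, which is not a bug, so it passes.
function classifyDataURL(dataURL) {
  if (dataURL.lastIndexOf("data:image/jpeg", 0) === 0) {
    return "jpeg";          // JPEG implemented: run the quality checks
  }
  if (dataURL.lastIndexOf("data:image/png", 0) === 0) {
    return "png-fallback";  // spec-permitted fallback: pass, not a bug
  }
  return "invalid";         // neither type: a genuine bug, so fail
}
```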

> Running them right
> now in, for example, Chrome, just gives the false impression that JPEG
> is supported. It wasn't until I debugged them that I realized it
> wasn't.

Indicating browser support isn't the point of the tests, and I think 
they're a poor way of achieving that - even with non-optional features 
it's hard to tell the difference between a large group of tests that 
fail because of a single obscure edge case, and a large group that fail 
because the browser is fundamentally broken. I think it would be much 
more useful for someone to manually maintain a compatibility table, 
describing features in the way that authors will think of them (rather 
than in the low-level way that the test cases categorise them), and then 
not worry about the tests giving a misleading impression.

> To me, personally, it would make more sense to move that first
> if-statement, present in all the tests, into a separate test, like
> toDataURL.jpeg.nonsupported, that would check that JPEG falls back
> correctly to PNG if not supported (much like the "unrecognized" test),
> and then maybe also to have a general case that just checks whether
> JPEG is supported. If those tests are present, then failing the JPEG
> tests when JPEG is not supported would still provide the same
> information, but would also properly explain that JPEG is not supported.

Sounds like the idea is perhaps to change the result set into "pass", 
"fail", and "fail but not a bug because this other test says the 
optional feature wasn't supported". That seems a lot more complex than 
the current approach, and I'm not currently convinced the complexity is 
worthwhile, particularly since this is about the only widely-implemented 
optional feature in the canvas API. (Some other features were originally 
optional but then got changed to mandatory; maybe the same could be done 
for JPEG support if everyone plans on implementing it anyway, and then 
we wouldn't have to worry about this at all.)

-- 
Philip Taylor
pjt47@cam.ac.uk
Received on Monday, 29 November 2010 20:46:25 UTC
