
Re: bugs filed on Firefox

From: Ray Kiddy <ray@ganymede.org>
Date: Mon, 2 Jul 2007 23:34:53 -0700
Message-Id: <42A04553-AE12-4D95-8230-60C8453EA94A@ganymede.org>
To: public-css-testsuite@w3.org

On Jul 2, 2007, at 8:03 PM, fantasai wrote:

> Ray Kiddy wrote:
>> Just FYI, I am starting to file bugs in http://bugzilla.mozilla.org
>> based on the tests in http://www.w3.org/Style/CSS/Test/CSS2.1/.
>> I was surprised that when I checked for bugs that have
>> "http://www.w3.org/Style/CSS/Test/CSS2.1" in the URL field, I found
>> only a few. I realized, though, that the URL field is not always used
>> in this way and that bugs may be filed against the test suite, or
>> because of the test suite, but the description of the bug could
>> mention only the functionality.
>> It makes sense to me, though, to track bugs found with this test
>> suite by listing the test case URL in the URL field.
>> I have only filed 3 so far, but there are more than a dozen I am  
>> going to triage. If anyone has thoughts, ideas or suggestions, I  
>> would be interested. Or if you want to comment on the bugs  
>> directly, please do. I am by no means a CSS expert, so any  
>> informed opinions recorded on the bugs may help. I am sure I am  
>> going to hear from the Mozilla guys on some of these, since I have  
>> already seen cases where they disagree on the meanings of some  
>> small things in some other tests.
> Ok, this is a bit off-topic for this group but..
> It's great that you're going through the test suite and noting which
> tests fail. This is useful information. However, the reports you're
> writing are not something a layout developer is going to tackle,
> because it's not clear what the actual problem is. The summary and
> description of the bug report should, ideally, explain what the bug
> is, not how it manifests itself in the outcome of a CSS2.1 test case.
> The test cases are in many cases somewhat convoluted to make it very
> easy to distinguish a passing test from a failing one, but it is not
> so clear from the test case what, exactly, is failing. Looking at
> your bug reports, I can't tell what the problem is, not even roughly.
> The reports you're filing are at the level of a user reporting "This
> page doesn't work because the navbar is too wide." It requires a lot
> more QA work to get to the point where we have a summary saying
> "percentage widths don't work on absolutely positioned elements with
> auto margins", and that's where a) we can tell whether the bug report
> is a duplicate or not and b) a developer doesn't need to do QA
> detective work to figure out what the impact of the problem is and
> therefore when and how to fix it.

I understand, I think, where you are coming from. I have been a
developer for a while myself, and I have had about a hundred
variations of this argument over the years.

A user has a responsibility to explain what they did to cause a  
fault. I accept this. But in this case, the thing that I did that  
caused a fault was that I ran a published test. Doesn't the fact that  
the test is part of the CSS2.1 test suite mean that the user should  
not have to figure out why it is supposed to do what it is supposed  
to do? Doesn't a test identify the fault if it fails? I look at  
something like "percentage widths don't work on absolutely positioned  
elements with auto margins", and I think that if a test of that  
functionality fails, the test should make it clear that this is the  
problem. Why does it make sense for everybody using the test to have  
to figure out what the test designer intended?
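
(To make that concrete: a reduced test case behind a summary like
"percentage widths don't work on absolutely positioned elements with
auto margins" might, hypothetically, be as small as the following.
The markup and sizes here are invented for illustration, not taken
from the actual test suite:)

```html
<!-- Hypothetical reduced test. Per CSS 2.1, the absolutely positioned
     child below should resolve width: 50% against its 400px containing
     block, i.e. render 200px wide; with left/right 0 and auto margins
     it should also be horizontally centered. A failing engine would
     show a wrong width or no centering. -->
<div style="position: relative; width: 400px; height: 100px;">
  <div style="position: absolute; left: 0; right: 0; margin: 0 auto;
              width: 50%; height: 100px; background: green;">
  </div>
</div>
```

A test that small states its own expectation; whether the suite's
actual tests can be that transparent is another question.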

> So, ultimately what I'm trying to say is, if your goal is to file
> useful bug reports for Gecko, you have to work a bit at understanding
> what exactly in the test is failing, and see if it's already
> reported. If it's already reported, add the test case URL to the URL
> field or in a comment; and if it's not, *then* file a bug--using your
> understanding of the underlying problem and not the test case's
> symptoms as the summary.

I am sorry, but I do not think it is necessarily the responsibility
of someone who is trying to file a bug to see if there are others
like it. Just because developers do not want to do bug triage, and it
is not fun for anyone, that does not mean users should have to do it.
After all, if a bug gets filed a bunch of times, should it not simply
get fixed, instead of users being asked to do extra work to track
down the original bug?

> If you're just going through the test suite and want to contribute the
> pass/fail information you're finding to the Mozilla project, don't
> start filing bugs right away. Start compiling a pass/fail list, and
> write a post to mozilla.dev.quality
> <http://www.mozilla.org/community/developer-forums.html#dev-quality>
> to say what you're doing and ask how it can become useful input to the
> QA process. Perhaps other people can analyze the failures into bug
> reports, or you could do that yourself later when you're prepared to
> dive a little deeper into the individual test cases.

I am doing this also. I found about 20 to 25 tests where there were
bugs. I filed the ones I thought were clearer or easier to isolate.

> (BTW, note that the test suite itself has many bugs, and some of  
> the failures
> may be due to that.)

I suspected that, and it certainly does make it harder to use the  
tests to test against a standard. But this cannot be helped.

>> By the way, I do not intend to write more e-mails like this. I  
>> just wanted to give people a place to look for these if they are  
>> interested. And now I am not just lurking.
> ~fantasai
Received on Tuesday, 3 July 2007 06:35:16 UTC
