- From: Jeanne Spellman <jeanne@w3.org>
- Date: Thu, 02 Aug 2012 14:54:40 -0400
- CC: public-test-infra@w3.org
I am concerned that this could be problematic when I want to compare results from multiple testers. Because my tests are all manual (and prone to human error), an important step in CR is resolving any discrepancies. If we decide to merge identical results, would I still be able to tell that the test had been completely run by separate testers? How do you envision these test results being displayed?

jeanne

On 8/2/2012 8:49 AM, bugzilla@jessica.w3.org wrote:
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=18468
>
>         Summary: Identical results should be merged
>         Product: Testing
>         Version: unspecified
>        Platform: PC
>      OS/Version: All
>          Status: NEW
>        Severity: normal
>        Priority: P3
>       Component: Test Framework
>      AssignedTo: mike@w3.org
>      ReportedBy: ishida@w3.org
>       QAContact: dave.null@w3.org
>              CC: public-test-infra@w3.org
>
>
> Currently, if someone submits test results for the same browser, version and OS
> with the same result multiple times (I have several cases in mind), the results
> summary tables increment the numbers displayed (i.e. 1/./. can become 5/./.)
> without any additional useful information actually being added.
>
> Please could we report only unique configurations in the results summaries?
>
> This could be done either by checking when results are requested, or by not
> capturing identical configuration tests in the database (perhaps showing only
> the latest one, since date can sometimes be of interest).

--
_______________________________
Jeanne Spellman
W3C Web Accessibility Initiative
jeanne@w3.org
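
For illustration only, a minimal sketch of the query-time deduplication the bug proposes: keep just the latest submission per (browser, version, OS) configuration before counting. The field names (browser, version, os, submitted_at) and the dedupe_latest helper are hypothetical, not the actual Test Framework schema.

from datetime import datetime

def dedupe_latest(results):
    """Keep only the most recent result per (browser, version, os) configuration."""
    latest = {}
    for r in results:
        key = (r["browser"], r["version"], r["os"])
        if key not in latest or r["submitted_at"] > latest[key]["submitted_at"]:
            latest[key] = r
    return list(latest.values())

# Example: five identical submissions for the same configuration collapse
# to one row, so the summary would show 1/./. rather than 5/./.
submissions = [
    {"browser": "Firefox", "version": "14", "os": "All",
     "result": "pass", "submitted_at": datetime(2012, 8, 1, 10, i)}
    for i in range(5)
]
print(len(dedupe_latest(submissions)))  # 1

Deduplicating at display time like this would still leave every submission in the database, which speaks to Jeanne's concern: the underlying records from separate testers remain available for resolving discrepancies even if the summary merges them.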
Received on Thursday, 2 August 2012 18:54:42 UTC