- From: SULLIVAN, BRYAN L <bs3131@att.com>
- Date: Wed, 23 Jan 2013 15:21:51 +0000
- To: Tobie Langel <tobie@fb.com>, Dominique Hazael-Massieux <dom@w3.org>
- CC: Arthur Barstow <art.barstow@nokia.com>, "public-coremob@w3.org" <public-coremob@w3.org>
I figured someone would have already done a similar analysis, but didn't know where to look. That's why I went ahead and put this on the wiki. I think we should develop automated updates to such an analysis (a rough sketch of one possible approach appears at the end of this message), but publish a periodic (monthly or quarterly) report in the meantime. I am willing to help drive that effort.

The "done-ness" of tests is a key question. As I did the analysis, I had to take various generalizing and summarizing steps to roll up the single numbers, and I know for sure that the value of those single numbers is low until we understand and validate what contributes to them. With so much variance in what's placed where, and how it's identified in the lifecycle of tests, I found it very difficult to get more than a thumbnail sketch of the numbers. At least the ones with "0" were easier, if an unwelcome discovery!

I think useful short-term actions are to:

1) take a closer look at each suite, talk to the champion/contributors, and document their practice

2) break out the test numbers in terms of lifecycle stage, e.g.
   a) submitted (note that it's unclear whether tests from different authors may be duplicates)
   b) approved (meaning hopefully something consistent, e.g. reviewed and validated through execution with at least one UA, but optimally all major UAs, at least to verify that no test design errors are causing test failure with any UA)
   c) active (meaning the tests have been incorporated into the test framework)

3) add numbers indicating the nature of the tester experience:
   a) manual tests
   b) automated tests

All of this is focused on aligning processes, identifying where resources are needed, and generally providing a more useful guide to what tests are available and in what form.

Thanks,
Bryan Sullivan

-----Original Message-----
From: Tobie Langel [mailto:tobie@fb.com]
Sent: Tuesday, January 22, 2013 8:29 AM
To: Dominique Hazael-Massieux
Cc: Arthur Barstow; SULLIVAN, BRYAN L; public-coremob@w3.org
Subject: Re: Next steps for Coremob-2012 and the group.

On 1/22/13 5:23 PM, "Dominique Hazael-Massieux" <dom@w3.org> wrote:

>On Tuesday, 22 January 2013 at 16:18 +0000, Tobie Langel wrote:
>> >On Tuesday, 22 January 2013 at 10:26 -0500, Arthur Barstow wrote:
>> >> Re the data in the "W3C Test Framework" column of the "CoreMob 2012"
>> >> Tab, the data is an absolute number of tests. In addition to that
>> >> number, I think it would also be useful to get a sense of the level of
>> >> the test suite's "done-ness". For example: the test suite is
>> >> sufficiently complete to test CR exit criteria, the test suite is
>> >> partially complete, contributions needed, etc.
>> >
>> >FWIW, I've tried to do that analysis as part of my "state of mobile web
>> >standards" document http://www.w3.org/2012/11/mobile-web-app-state/ ,
>> >marking the test suites' coverage on a mostly-guess basis. I would love
>> >to instead get that info from the groups themselves :)
>>
>> Or better yet, automate the whole process.
>
>That would be much better indeed, but requires a significantly bigger
>investment too :)

Agreed. Yet there's a tipping point where automation becomes cheaper.
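P.S. (the sketch referenced above): as a rough illustration of the kind of automated roll-up I have in mind, the script below walks a local checkout and counts test files per spec and lifecycle stage. It assumes a hypothetical layout where each spec's tests live under <spec>/submitted/ and <spec>/approved/ directories, and the file extensions are placeholders too; given the variance noted above, real repositories will need per-suite adjustments.

#!/usr/bin/env python3
# Toy sketch, not a finished tool: count test files per spec and
# lifecycle stage in a local checkout. The <spec>/submitted/ and
# <spec>/approved/ layout and the extension list are assumptions
# for illustration; real W3C test repositories vary.
import os
import sys
from collections import Counter

STAGES = ("submitted", "approved")
TEST_EXTENSIONS = (".html", ".htm", ".xhtml", ".js", ".svg")

def count_tests(repo_root):
    """Return a Counter mapping (spec, stage) to a test-file count."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        parts = os.path.relpath(dirpath, repo_root).split(os.sep)
        # Only look at paths shaped like <spec>/<stage>/... .
        if len(parts) < 2 or parts[1] not in STAGES:
            continue
        counts[(parts[0], parts[1])] += sum(
            1 for name in filenames
            if name.lower().endswith(TEST_EXTENSIONS))
    return counts

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for (spec, stage), n in sorted(count_tests(root).items()):
        print("%-30s %-10s %6d" % (spec, stage, n))

Run against a checkout (e.g. "python count_tests.py path/to/tests"), this would produce the per-stage breakdown from item 2 above; the "active" count and the manual/automated split from item 3 would still need per-framework metadata.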
Received on Wednesday, 23 January 2013 15:24:00 UTC