[blue sky] Re: Good (and Bad) practices of implementation reports

<note
class="lexicon buzzwords">

"blue sky" is my code for a no-holds-barred "if you ran the zoo" 
approach to the problem.
The situation analysis template asks for:

- what squeaks: perceived points of pain
- what works: closest approach to desired behavior in the incumbent 
mode of operation
- blue sky: in the best of all possible worlds, how should it work
- baby steps: what low-risk steps could be taken that would make things better
- [optional] proposed action plan -- it is sometimes important to
gather the above from all stakeholders before getting too serious
about an action plan.

</note>

At 12:12 PM +0900 20 02 2007, Karl Dubost wrote:
>Hi Lofton, Snorre, Lynne, Mark, Patrick,
>(and others)
>
>I would like  to have your opinions about Implementation Reports.
>
>
>Short reminder:
>
>    During the CR phase[2], WGs are usually asked to prove that
>their language has been implemented /at least/ twice. Rules can be
>made stricter by the WG itself. Often WGs produce an implementation
>report to have a global view of the implementation landscape.  The
>QA Matrix[3] lists W3C implementation reports[1] and shows their
>diversity in terms of layouts and information.
>
>
>Questions:
>    - What should an implementation report contain?
>    - What should an implementation report NOT contain?
>    - Do you think a common format is desirable?

No; interoperable schemas of the weakest form we can tool up.
So, driven by SPARQL queries over the GRDDL of test reports.

All test reports, and all implementation reports (in the data
analyses that roll up test reports), should GRDDL into an RDF graph
compatible with EARL.  I don't mean trivially compatible, but rather
that common senses are recognizable through the use of common or
thesaurus-linked terms.
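
For concreteness, a minimal sketch of what one test report might
GRDDL into, written in Turtle against the EARL vocabulary.  The
earl: namespace URI and the ex: resources are illustrative
assumptions here, not terms any existing report actually uses:

  @prefix earl: <http://www.w3.org/ns/earl#> .
  @prefix ex:   <http://example.org/report#> .

  # one assertion: implementation A passed test 007
  ex:assertion-1 a earl:Assertion ;
      earl:assertedBy ex:tester ;
      earl:subject    ex:ImplementationA ;
      earl:test       ex:test-007 ;
      earl:result     [ a earl:TestResult ;
                        earl:outcome earl:passed ] .

A rollup is then an ordinary SPARQL query over the merged graphs of
all such reports, e.g. which implementations pass which tests:

  PREFIX earl: <http://www.w3.org/ns/earl#>
  SELECT ?impl ?test
  WHERE {
    ?a a earl:Assertion ;
       earl:subject ?impl ;
       earl:test    ?test ;
       earl:result  ?r .
    ?r earl:outcome earl:passed .
  }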

The "why no standard document format" argument by now has a long
beard, but it bears repeating.

To build a spec-for-spec, you have to linearize the domain into a
tree.  That breaks the shape of the domain; there is no
one-tree-fits-all coordinate frame for the domain, because different
stakeholders' views care more or less about different facets/aspects
of the information.

So a consensus model of the domain is more general in nature than a
hierarchical system of buckets.

Let the stakeholders frame their function and performance concerns
as an E/R graph.  Negotiate metrology with them in terms of their
queries.
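
For instance, a concern such as "which implementations pass the
tests in a given category when run on a mobile platform" could be
framed as a query over the same graph.  The ex:category and
ex:platform properties are hypothetical; they are exactly the sort
of terms the stakeholders would have to negotiate into the logging
schema:

  PREFIX earl: <http://www.w3.org/ns/earl#>
  PREFIX ex:   <http://example.org/testmeta#>
  SELECT ?impl
  WHERE {
    ?a a earl:Assertion ;
       earl:subject ?impl ;
       earl:test    ?t ;
       earl:result  ?r .
    ?r earl:outcome earl:passed .
    ?t ex:category  ex:Accessibility ;
       ex:platform  ex:Mobile .
  }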

Then, before you get too far into running the tests, validate each
test plan against the schema it logs to: do the query-compile between
the stakeholders' composed concerns/queries and the test plan's
proposed logging schema.
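
One crude approximation of that query-compile, assuming the test
plan publishes its logging schema as RDFS and reusing the
hypothetical ex: terms above: before any test run, ASK whether every
property the stakeholder queries depend on is actually declared in
the schema.

  PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
  PREFIX ex:  <http://example.org/testmeta#>
  ASK {
    ex:category a rdf:Property .
    ex:platform a rdf:Property .
  }

If that comes back false, the plan's logging schema cannot answer the
stakeholders' questions and needs renegotiating before the tests run.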

That's my blue-sky prescription.  Make an honest woman of the
Semantic Web by demonstrating that it copes with the unsolved problem
of enforcement.

Better, even common, practice at the sourcing end will ultimately be
beneficial, but the classic 'standards' consensus process won't get
us there.  Out of experience with the above data-mining pipeline will
come the evidence that can isolate and sell value-added discipline in
test metadata.

Al

>    - Do you have success stories or bad experiences from creating
>an implementation report?
>
>
>Reference:
>[1] http://esw.w3.org/topic/ImplementationReport
>[2] http://www.w3.org/2004/02/Process-20040205/tr.html#cfi
>[3] http://www.w3.org/QA/TheMatrix
>
>
>Thanks
>
>
>--
>Karl Dubost - http://www.w3.org/People/karl/
>W3C Conformance Manager, QA Activity Lead
>   QA Weblog - http://www.w3.org/QA/
>      *** Be Strict To Be Cool ***

Received on Tuesday, 20 February 2007 18:54:59 UTC