Re: Revised Guidelines and Checkpoints for TestGL

I agree with your "informal" summary of this checkpoint (I'll try to 
work this into the rationale). The problem with your proposed solution 
(using the term test harness) is that there may not be a test harness. 
We use the term "test execution process", and this process may or may 
not be automated. Only if it is automated will there be a "harness". 
Moreover, even if there is one, it will not be able to report status 
consistently unless the tests themselves do so.

Maybe we need to split this into two checkpoints: tests must report 
their status consistently, and the test execution process, whether or 
not automated, must capture, summarize, and pass on those statuses. 
This split might be appropriate, since the tests and the execution 
process/harness are typically defined and created by different people.
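
To make that split concrete, here is a rough sketch of the distinction 
(the names and the Python are mine, purely illustrative, not from any 
TestGL draft): the first checkpoint fixes the vocabulary each test 
reports in, the second makes the execution process responsible for 
capturing and summarizing those statuses.

    from enum import Enum

    # Hypothetical status vocabulary; the checkpoint only asks that
    # whatever vocabulary is chosen be used consistently by every test.
    class Status(Enum):
        PASS = "pass"
        FAIL = "fail"
        INCONCLUSIVE = "inconclusive"

    def run_suite(tests):
        # The execution process (automated here, but the same duties
        # apply to a manual process): capture each test's status,
        # summarize them, and pass both on.
        results = {name: test() for name, test in tests.items()}
        summary = {s: sum(1 for r in results.values() if r is s)
                   for s in Status}
        return results, summary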

david_marston@us.ibm.com wrote:

>>>Checkpoint 6.1 Tests should report their status in a consistent
>>>manner [Priority 1]
>>>Conformance requirements: Tests must report their execution status in
>>>a consistent manner. At a minimum, tests should report whether they
>>>passed, failed, or whether the results were inconclusive.
>
>DHM>What about speaking about the test suite (test harness?) rather than
>DHM>tests? (idem for 6.2). 'Tests' seems too fuzzy IMO. Something like:
>DHM>"The test harness must report the execution status of the tests in
>DHM>a consistent manner".
>
>If we step back and think about what is wanted, I think the informal
>statement is:
>Regardless of what it is that runs the tests, and regardless of how
>many cases in the suite are run, the outcome of each individual test
>must be reported in a consistent manner.
>
>From that, we can derive that the test harness (if it both runs tests
>and checks results against the "correct" reference results) should
>support consistency of reporting. For something like SVG, the harness
>may just run all the test cases and present results to a human judge
>in a systematic way, and it should also provide a consistent way for
>the human to characterize the outcome. This might be a set of buttons
>labeled "Pass", "Fail", and "CannotTell", if all the other outcomes
>were dealt with by automation before that point. That's as much as
>the harness can do to "support consistency of reporting" when a human
>makes the call.
>.................David Marston
>
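
For the human-judged case David describes at the end, the same 
principle holds: all the harness can do is constrain how the verdict 
is recorded. A throwaway sketch of what that might look like (the 
prompt and names are my own invention, not any actual SVG harness):

    # Illustrative only: the harness has already rendered the case;
    # the human judge's verdict is recorded in the same vocabulary
    # used for automated outcomes.
    VERDICTS = ("Pass", "Fail", "CannotTell")

    def judge(case_name):
        print("Rendered output for %s is on screen." % case_name)
        while True:
            answer = input("Verdict [Pass/Fail/CannotTell]: ").strip()
            if answer in VERDICTS:
                return answer   # stored exactly like automated results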

Received on Saturday, 23 August 2003 13:18:37 UTC