Re: Editor's draft comments

On Thu, 15 Mar 2007 01:06:43 +1100, Shadi Abou-Zahra <shadi@w3.org> wrote:

> SEMIAUTO: tool uses human(s) to make a decision. For example, a tool
> asks a human to verify that a given table is a layout vs data table, and
> based upon this input (and other indicators) decides the outcome of a
> test criterion.

In particular, what is different in this scenario is that the tool has presented the question in some particular way, breaking it down into parts and asking a question that is not the original one in the spec or test case. This is equivalent to describing the actual test, as the tool presents it to the user, as a "subtyped version" of some other known test - so the ways that Hera asks about some aspects of WCAG 1.1 would be listed as Hera tests, done manually, and sub-typed as tests of WCAG 1.1.
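As a rough sketch of what that subtyping could look like in EARL's Turtle serialization (the Hera test URI and the part-of property are illustrative assumptions, not anything Hera actually publishes):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# Hypothetical Hera sub-test, declared as part of WCAG 1.1 (URIs illustrative)
<http://example.org/hera/tests/layout-vs-data-table>
    dct:isPartOf <http://www.w3.org/TR/WCAG10/#tech-text-equivalent> .

# The assertion records the Hera sub-test as done manually
<#assertion-1> a earl:Assertion ;
    earl:test <http://example.org/hera/tests/layout-vs-data-table> ;
    earl:mode earl:manual ;
    earl:result [ a earl:TestResult ; earl:outcome earl:failed ] .
```

The point is that the mode on the sub-test can stay "manual", while the relationship to the WCAG checkpoint is carried by the subtyping link rather than by a special "semiauto" mode.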

This seems to require a fairly high degree of complexity, however, and I am not sure that it is justified. Semiauto is a simple shorthand.

On the other hand, where the tool simply presents a test as written, the result is not semiauto. The tool didn't guide the user to a solution; it just asked the user to decide (ergo manually) about the actual test.

cheers

Chaals

> Carlos Iglesias wrote:
>>> Because the _decision_ is done by a human? Then that's the
>>> criterion for manual vs. semiauto.
>>
>> IMO Yes.
>>
>>
>>> Now, what's a scenario for semiauto?
>>
>> I have never believed in the semiauto mode; my recollection is that the arguments in favour were something like: there are some assistant tools where the human has a secondary role.
>>
>> IMO once the tool needs a human to decide (even if it's just a "Yes to all" thing) it's a manual result; otherwise it's automatic (or mixed/unknown)


-- 
  Charles McCathieNevile, Opera Software: Standards Group
  hablo español  -  je parle français  -  jeg lærer norsk
chaals@opera.com          Try Opera 9.1     http://opera.com

Received on Wednesday, 14 March 2007 15:59:06 UTC