- From: Shadi Abou-Zahra <shadi@w3.org>
- Date: Thu, 14 Apr 2005 16:20:37 +0200
- To: <public-wai-ert@w3.org>
Hi,

> A question in this case, is if the tool should have the
> possibility to report a fourth value in this case (I.e.
> "#DontKnow")

The current spec defines <earl:cannotTell> as the fourth value. Is this what you mean?

> or if this should be indicated with the tool returning
> "#ManualInspectionNeeded"

This is subjective. One tool may require manual inspection while another tool may be able to evaluate the checkpoint automatically. When should this value be used?

> or nothing, indicating implicitly by returning nothing for
> this test case? This would typically have to be done if the
> confidence value is not used. Not very clean solution in my
> opinion.

No answer is not always an answer :) I'm also optimistic we can find a solution...

> Modelled from this one could maybe have something like:
>
> <earl:accurancy unit='percent' confidence='0.9'/>

The problem is not really what the properties look like. Quite a few models have been suggested on the list just recently, and they all seem to have pros and cons. To me, the real problem is how to derive values such as "high", "low", "30%", ".5", or "0.9". In my opinion, if there isn't a clear way to unambiguously calculate a value, then whatever property we come up with will not be interoperable between tools and therefore will not be used (as the experience with the current spec shows).

Do we have any ideas about what types of criteria we could use to calculate confidence values? Here are a couple to think about:

* "type/category" of test being executed
* automatic vs. manual evaluation of the test
* results (+confidence?) from other related tests
* "precision" of the assertor for that specific test

Best,
  Shadi
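[Editor's illustration] For concreteness, the four criteria in the message above could be combined into a numeric confidence value with a simple weighted heuristic. This is purely a hypothetical sketch: the function name, parameters, and weights below are invented for illustration and do not come from the EARL spec or any tool discussed in the thread.

```python
# Hypothetical sketch: combining the four criteria from the list above
# into one confidence value in [0, 1]. All names and weights are assumed.

def confidence(test_category_weight, is_automatic, related_confidences,
               assertor_precision):
    """Combine heuristic signals into a confidence value in [0, 1].

    test_category_weight: how amenable this test type is to checking (0-1)
    is_automatic: True if the test was evaluated automatically
    related_confidences: confidence values from related tests (0-1 each)
    assertor_precision: historical precision of the assertor on this test (0-1)
    """
    # Assume manual inspection is less repeatable than an automatic check.
    base = 1.0 if is_automatic else 0.5
    # Average the confidence of related tests; neutral (1.0) if there are none.
    related = (sum(related_confidences) / len(related_confidences)
               if related_confidences else 1.0)
    return round(base * test_category_weight * related * assertor_precision, 2)

print(confidence(0.9, True, [0.8, 1.0], 0.95))  # 0.77
```

The point of such a formula would not be the particular weights, but that every tool computes the value the same way, which is exactly the interoperability concern raised in the message.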
Received on Thursday, 14 April 2005 14:20:38 UTC