Re: Test Results Organisation (sic)

On Wed, 8 Oct 2003, Jim Hendler wrote:

>
> At 12:48 AM -0400 10/8/03, Sandro Hawke wrote:
> >In response to frequent requests, I have changed the Tests-of-Interest
> >section to just list the tests grouped by the number of systems which
> >pass.
> >
> >      -- sandro
>
>
> This is helpful for one of the things we need to do with this
> (determine the overall status of our tests and which are being
> passed), but it is less helpful from another perspective -- it would
> help me in writing the PR request to be able to say which, if any,
> system(s) "Passed every Lite test," "Passed every DL test," and
> (wouldn't it be wonderful) "Passed every test." More realistically,
> I'd love to be able to say "System1 passed 92% of all Lite tests,"
> "System 2 passed 86% of all DL tests," etc. (and getting 80% of some
> of these is a CR exit criterion) -- so what would really help me (and
> I think a number of other people have indicated wanting it as well)
> is if we had sections sorting the tests by OWL subset (Lite, DL but
> not Lite, Full but not DL or Lite) and how the various systems did on
> those.  I don't know how hard that would be to do -- but if not too
> hard, it would sure help me as chair (as well as being useful for
> informing the world how various systems do overall).

Jim

This is exactly what I was thinking of. Sandro was, I think, worried about
us ending up with lots of tiny tables (something like
Consistency-Approved-DL-NotLite, Entailment-Proposed-Full-NotDL-NotLite,
Inconsistent-ExtraCredit-DoneOnATuesday, etc.). Perhaps the ability to
display a "parameterised view" of the results would avoid this - like the
way that the mailing list archive works (show by thread, show by
author, etc.).
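
To make that concrete, here is a rough sketch (plain Python, with entirely
made-up record fields, test names and system names - not taken from the
actual results pages) of the sort of filtered view and per-system
percentage summary we are talking about:

  from collections import defaultdict

  # Hypothetical records: each test carries its OWL species, approval
  # status, and the set of systems reported as passing it.
  results = [
      {"test": "I5.8/t001", "level": "Lite", "status": "Approved",
       "passes": {"System1", "System2"}},
      {"test": "description-logic/t905", "level": "DL", "status": "Proposed",
       "passes": {"System1"}},
      {"test": "I4.6/t005", "level": "Full", "status": "Approved",
       "passes": set()},
  ]

  def view(results, level=None, status=None):
      # Parameterised view: keep only tests matching the requested
      # level/status, much like "show by thread" / "show by author".
      return [r for r in results
              if (level is None or r["level"] == level)
              and (status is None or r["status"] == status)]

  def pass_rates(results):
      # Percentage of the given tests that each system passes.
      counts = defaultdict(int)
      for r in results:
          for system in r["passes"]:
              counts[system] += 1
      total = len(results)
      return {system: 100.0 * n / total for system, n in counts.items()}

  # "System1 passed X% of all Lite tests" style summary:
  print(pass_rates(view(results, level="Lite")))

The same pass_rates call over view(results, level="DL") and so on would
give the other numbers Jim wants, without us having to pre-generate every
tiny table.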

	Sean

-- 
Sean Bechhofer
seanb@cs.man.ac.uk
http://www.cs.man.ac.uk/~seanb

Received on Wednesday, 8 October 2003 07:52:57 UTC