minutes from 23 July 2001

present: chris, sean, harvey, wendy, william, katie

summary then detailed minutes.

summary:
chris has been working on a tool that will generate EARL based on the user's 
judgement of whether an evaluation tool appropriately evaluated a test 
file.  he's been working with sean to make it produce results in EARL.  he 
expects it will be available to the working group tomorrow.

sean suggested publishing his and chris's exchanges about EARL to help 
others implement EARL.

chris and josh have been working on the test files and have over 220. they 
should be available to the group next week.

wendy discussed the conversation in the wcag wg last week about reviewing 
the test files as a group to see whether there is consensus on what 
conforms or not.  this is part of the discussion the WCAG group needs to 
have to determine minimum requirements. the group will also look at real 
sites, but a review by wcag would be good. chris agreed that it would be 
good to have others look at the files.

we wondered about where to store the results from the tool and what to do 
with them.  wendy feels the results should be used to create a comparison 
chart of tools and that therefore the results should be stored publicly 
somewhere.  using xslt we can convert the EARL results into an xhtml 
comparison chart/table.  Sean is willing to help with the xslt.  wendy will 
look into w3c hosting the test files, cvs for the tool, and host the 
results of the tests/tool.
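The summary above mentions using XSLT over the EARL output to build the comparison chart. As a rough illustration of the same idea in plain Python rather than XSLT, here is a short sketch; the tool names, test-file names, and the shape of the results data are all made up for illustration:

```python
# Hypothetical (tool, test file) -> "pass"/"fail" judgements, standing in
# for the EARL results the evaluation tool would record.
results = {
    ("bobby", "aert-5.1-a.html"): "pass",
    ("bobby", "aert-6.2-b.html"): "fail",
    ("a-prompt", "aert-5.1-a.html"): "pass",
    ("a-prompt", "aert-6.2-b.html"): "pass",
}

def comparison_table(results):
    """Render pass/fail judgements as an XHTML comparison table:
    one column per tool, one row per test file."""
    tools = sorted({tool for tool, _ in results})
    tests = sorted({test for _, test in results})
    rows = ["<table>",
            "<tr><th>test file</th>"
            + "".join(f"<th>{t}</th>" for t in tools) + "</tr>"]
    for test in tests:
        cells = "".join(
            f"<td>{results.get((tool, test), '?')}</td>" for tool in tools)
        rows.append(f"<tr><td>{test}</td>{cells}</tr>")
    rows.append("</table>")
    return "\n".join(rows)

print(comparison_table(results))
```

The same transformation would be done declaratively with an XSLT stylesheet over the EARL RDF, as discussed in the minutes; this sketch only shows the shape of the resulting chart.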


detailed minutes:

chris: i have a buggy version working. got msg from sean today. should have 
something to send to the list tomorrow.

chris: josh and I are up to 220 test files. we'll carry on this week and 
will have them fixed up by the end of the week.

wc: wcag test files

hb: the main purpose is coverage in testing tools. if you both find all the 
same errors, that would be a major step.  have you opened it up to others 
to contribute test files?

cr: we want to, but want to get a basic start.

hb: areas you have not covered?

cr: site content should be appropriate for the reader. that's hard to make 
up test files for.  there are open issues in AERT that we couldn't write 
test files for.

hb: include identification of the checkpoints that you think are covered 
and those you need help with?

cr: all test files are named per the AERT techniques.  therefore, if there 
is no file name with 5.1, then 5.1 is not covered.

primer:
wc: sbp will combine the ag, wl, and sbp drafts.

cr: confused by n3 notation. made more sense to use rdf.

wl: do you read the bible in hebrew?

/* laughter */

sbp: my experience: if you send the example in n3, most will read it, but 
others will want xml. if you can read rdf, that's great.

cr: perhaps since my program outputs rdf that's what i'm interested in.

wl: but, you don't read that. it gets transformed.

sbp: i have difficulties reading xml-rdf.

cr: it makes sense to me.  otherwise, i got most of the info i needed; 
i'll go through other things with sbp.
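To make the N3-versus-RDF/XML point above concrete, here is one and the same statement written in both syntaxes. The earl: namespace URI and the earl:passes property name are illustrative placeholders, not necessarily the working draft's actual vocabulary:

```
# the single statement "a-prompt passes test file aert-5.1-a", first in N3:

@prefix earl: <http://example.org/earl#> .   # placeholder namespace
<http://example.org/tools/a-prompt> earl:passes
    <http://example.org/tests/aert-5.1-a.html> .

# and the same triple serialized as RDF/XML:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:earl="http://example.org/earl#">
  <rdf:Description rdf:about="http://example.org/tools/a-prompt">
    <earl:passes rdf:resource="http://example.org/tests/aert-5.1-a.html"/>
  </rdf:Description>
</rdf:RDF>
```

Both forms carry identical triples; the disagreement in the minutes is purely about which serialization is easier for people to read and write.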

wl: what does the interface do?

cr: it's a dialog box that shows a list of test files; you mark each as 
either pass or fail with regard to the accessibility tool.  it's simple: if 
you're testing bobby, you put in "bobby", run the test file through the 
tool, then mark it as passed or failed. it then writes that to a file.

sbp: version box? could use that instead of date.

cr: i think that the date the person evaluates is not important.  "cr says 
on 20 july that bobby passed test." if i read it on the 30th and say it passed...

sbp: date important, not date run but the date of the tool.

hb: asserting tool independent of test cases.

sbp: then a different set of test cases.  new URIs on test files.

cr: perhaps dump date and keep version.
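The exchange above (drop the run date, keep the tool's version) could be sketched as the record chris's tool would write per judgement; all field names here are hypothetical, chosen only to mirror the discussion:

```python
# Hypothetical shape of one assertion the evaluation tool writes out.
# Per the discussion: the run date is dropped, the tool's version
# identifies what was actually tested, and a new test-file URI means
# a different test case.

def make_assertion(asserted_by, tool, version, test_uri, passed):
    """Build one pass/fail assertion about a specific tool version."""
    return {
        "assertedBy": asserted_by,   # the human judge, e.g. "cr"
        "tool": tool,                # e.g. "bobby"
        "toolVersion": version,      # kept instead of the run date
        "testFile": test_uri,        # new URI => different test case
        "result": "pass" if passed else "fail",
    }

a = make_assertion("cr", "bobby", "3.2",
                   "http://example.org/tests/aert-5.1-a.html", True)
print(a["result"])
```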

wl: when speaking of the primer, i wonder if the more important one is the 
one that goes with this program.

cr: description of rdf it creates?

wl: the reasons for EARL go into the "how do you use the new tool."

cr: description why outputting earl?

wl: yes and how you use it.

cr: i've got a one page html that tells how to use and what it does.

wl: can incorporate the why.

sbp: perhaps i should publish our messages.

cr: sure

sbp: open source?

cr: think so. if you want to, you can take a look.  we're looking at how to 
make more of our stuff open source, looking at a cvs repository. (we=ATRC).

wl: relate to annotea in any way?

cr: no.

wl: it's a possibility to have this stuff stored on a server by putting a 
link to output to annotea.

cr: how use output of EARL?

wc: use it to make a comparison chart of which tools support what and how. 
but that brings us back to wl's question: if it's public, then everyone can 
run tests and we can pull it all back together.

cr: biased towards a-prompt. it would be more credible if the test files 
and tool were hosted by w3c.

action wc: discuss w/wai team about hosting test files, tool, and tool results.
--
wendy a chisholm
world wide web consortium
web accessibility initiative
seattle, wa usa
/--

Received on Monday, 23 July 2001 10:35:21 UTC