- From: Wendy A Chisholm <wendy@w3.org>
- Date: Wed, 07 Mar 2001 14:25:48 -0500
- To: w3c-wai-gl@w3.org, w3c-wai-er-ig@w3.org, w3c-wai-au@w3.org
- Cc: dhipke@microsoft.com
http://www.w3.org/WAI/ER/2001/03/01-f2f-minutes.html
Thanks to Jan who helped minute. Donovan also minuted, and I'll hopefully
be incorporating his notes soon.
Minutes from Friday's WCAG meeting will be sent separately.
--w
01 March 2001 F2F meeting minutes (AU/ERT/WCAG)
The minutes for both the morning and afternoon sessions are contained in
this document. In the a.m. the AU WG and ERT WG met along with several
visitors. The p.m. session was a joint meeting between AU WG, ERT WG, WCAG
WG and visitors.
Summary of action items, resolutions, and open issues
· Resolved: WCAG will consider EARL as a basis for conformance. Places
this on the open issues list.
· Resolved: WCAG undertakes, in defining techniques, to ensure that each
requirement can be referred to with sufficient specificity to enable test
results to be associated with it.
· Resolved: add to AU issue list: add requirement that tools import
EARL. Note that this can't reasonably be decided until EARL has firmed up.
· Action: CMN to take the issue of UA checking of EARL statements about a
page to ERT and UA.
· Issue: implications for EARL of how to express statements for UA.
· Resolved: WCAG will adopt into the HTML techniques those aspects of
AERT that are relevant.
· Resolved: do this type of meeting again at a plenary like this.
Participants (not all present for entire day)
· Len Kasday - chair, Temple university
· Charles McCathieNevile - W3C, AU staff contact
· Lynne Rosenthal - NIST
· Mark Skall - NIST, software testing
· Lofton Henderson - OASIS, testing and conformance
· Cynthia Shelly - OpenDesign, authoring
· Kim Keene - Dept of Commerce, 508 coordinator
· Andi Snow-Weaver - IBM
· B.K. deLong - ZOT, web development community, tools for accessible
web development
· Karl Dubost - W3C conformance manager
· Daniel Dardailler - W3C, starting QA with Karl
· Wendy Chisholm - W3C, staff contact for ERT WG and WCAG WG
· Marja-Riitta Koivunen - W3C, WAI and Annotea
· Phill Jenkins - IBM, authoring tools apply to other environments
· Matt May - Webvan, WCAG member
· Jutta Treviranus - Chair of AU, ATRC U of T
· Art Barstow - W3C, semantic web, RDF
· Brian McBride - chair, RDF Core WG
· Jan Richards - co-editor ATAG
· Eric Prud'hommeaux - W3C, semantic web
· Dimitris Dimitriadis - DOM test suites
· Michael Cooper - CAST, Bobby project manager
· Libby Miller - Uni of Bristol, RDF
· Rob Neff - WebSpots, implementation
· Brian Matheny - Bobby tech support
· Ralph Swick - co-editor RDF model syntax, semantic web, annotations
· Josh Krieger - CAST
· Susan Lesch - W3C, review for WCAG
EARL
LK A language used to describe whether web content is accessible, and more
generally for other types of testing. Comes up with a vocabulary that may
be as wide as possible. The canned examples deal with accessibility of web
pages. Scenarios for uses of EARL /* link */ We want to get specific about
vocabulary.
LK In web accessibility, there are many places where the judgement has to
be made by a person, e.g. an image with alt-text. If you have alt-text
describing a cow and the image is a horse, you need a person to recommend
what the alt-text should be.
LK 1st scenario - a person gets a report. Need a way to combine comments
from various people or tools. Compare results to one or more standards - in
the States, such as 508 vs. WCAG. Ideally, a language to translate between
standards. A corporation may have its own standards.
CS Discussion in WCAG of a conformance schema, what's that about?
CMN Instead of conformance to guidelines at level X, have finer-grained or
partial conformance. E.g., I use a mouthstick; there are 9 checkpoints in
UAAG that are critical to me, but I don't care about visual checkpoints.
Can I use this system to do that? One goal of using RDF is to make sure you
can do that.
DD The 3 WAI guidelines are for web content, user agent, authoring tool -
what it means to be accessible for each. We have been good at specifying
conformance to each guideline. Each has checkpoints with priorities. There
is a conformance claim you make, Level A etc. (DD explains). In terms of
the accessibility report, we have a way to point into our spec that we can
directly reference. The problem is made complex by the requirement of human
judgement. We cannot automate everything. If you consider the SVG test
suite, it is harder to point at a testable assertion; the test is to look
at something - is the circle red? It is more objective at looking at the
assertion but harder to point at what the purpose of the test is. This
language will have to take into account both kinds of evaluation.
LK We have a problem of specifying conformance in SVG. Are there other
comments on that before we move on?
RN From the implementation side, if you break it into qualitative versus
quantitative: when we write tests, we have to document our requirements. I
would like to see requirement-based guidelines, where I can easily see
whether something is subjective or not. If quantitative, I can write a test
for it; otherwise, we want a space for comments or for someone else to read.
LK For qualitative judgements, in addition to a statement would you want a
scale?
RN Feasible, but still human judgement. A 1/2/3 scale can express the
qualitative rating. If something is P3 and qualitative, I may not have the
resources to do it. How do you get buy-in from the retail side? You have to
make it as easy as possible - get a quick hit.
CMN Two threads coming together: how do you test something against a test
suite? When we looked at this, RDF seemed a good pick since it lets you
deal in things with URIs. Your test suite may be 10K cases, each a page,
each requiring a human. As long as what needs to be done is described with
a URI, then using RDF you can say, "this conforms." ATAG gives tests for
how to determine conformance: 1. Can you insert an image? 2. Can you give
an equivalent? Each has a URI.
LH Unlike accessibility standards, we don't have a clear set of test
assertions. In practise, I synthesize those where implied. You extract a
doc from the spec which is the test assertions, then you apply those same
processes to the intermediate document.
DD You create a test of checkpoints?
LH You have to deduce them.
LK You can imagine a language where each output statement refers to a
particular test, or imagine that it points to something more atomic, then
derive whether particular checkpoints are met. You can say this image meets
checkpoint 1.1, or alt-text exists. Then you have another set of rules that
says 1.1 is ... and in 508 it's ... I wonder if that would be a way of
formalizing that.
CMN The toy that danbri and I made took advantage of that. We had atomic
tests with rules: if you meet all the P1 atomic tests then you meet WCAG
Level A.
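CMN's idea - atomic tests plus rules that combine them into a conformance claim - can be sketched roughly as follows. This is a hypothetical illustration only; the test IDs, priorities, and the level-derivation rule are invented stand-ins, not the actual WCAG mapping.

```python
# Sketch: atomic test results combined by rules into a conformance level.
# Each result is (test_id, priority, passed); all values are made up.
results = [
    ("wcag#1.1", 1, True),   # images have text equivalents
    ("wcag#2.1", 1, True),   # information not conveyed by color alone
    ("wcag#3.5", 2, False),  # a Priority 2 check that fails
]

def conformance_level(results):
    """Derive a WCAG-style level: 'A' if every Priority 1 test passes,
    'Double-A' if all Priority 1 and 2 tests pass, else 'none'."""
    p1_ok = all(ok for _, pri, ok in results if pri == 1)
    p2_ok = all(ok for _, pri, ok in results if pri == 2)
    if p1_ok and p2_ok:
        return "Double-A"
    if p1_ok:
        return "A"
    return "none"

print(conformance_level(results))  # both P1 tests pass, one P2 fails
```

The point of the sketch is that the conformance rating lives in the combining rules, not in the atomic tests themselves.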
CS Would it be useful to create markup to define those standards? A way to
describe what you want to test and rules for what to test against.
CMN The user interface is the key bit.
CS Some people will do it directly in XML with a text editor, some people
will want a GUI.
DD I would like to hear from Lynne or Lofton; we're not inventing something
new here. People have been working on results to tests. They were specific
to a testing technology or a specific technology. OASIS has come up with
XSLT evaluation. The etc people have an ISO framework for presenting test
results. Keep in mind there are specifications of requirements.
LK As people bring up requirements, I want to list them here. Is there a
meta-requirement?
CMN Derive a result from other results.
DD These look like requirements for harness rather than language.
RN The language has to generate a report. Also: add, modify, delete, and
change my report. I don't care what it does; what I want is a report that I
can hand to management. With 508, the conformance is up to each agency to
decide how they will handle it. It applies to corporations and companies.
No one wants a document; it can be subpoenaed. You have to have a report.
Who are the players and the end result?
DDimitriadis In specifying DOM tests, the language is independent of adding
mechanisms to the test suite. You extend your test suite rather than the
language.
DD Have to be clear not to confuse the language (keeping track of test
results) with pretty output. Another language is the language to express
the test. Will find lots of the same data in the result.
Lofton An ad hoc design for SVG work. Similar to the test expression
language that NIST has done. It's a simple XML grammar that encapsulates
the test purpose, the checkpoints (operator script), descriptive prose on
what you should see, what should happen, and how to issue pass/fail.
LR If you are doing conformance, when you test for it, it has to trace
itself back to a requirement in the spec. You are bound in scope to what
the spec says. Our test suites are based on the same philosophy. We have
test assertions (your checklist), then test purposes and cases. We
represent this in a DTD. Easy to add new tags. All tags have an
identification, purpose, owner, etc. - lots of identification info. You can
generate almost any kind of report you want. On the harness, you can always
view the source code, find the test requirement, and link to that clause in
the spec. It's not like a formal language.
PJ The title EARL is what I thought we were working on, but when I hear
"test language" and "reporting language" I am concerned about scope.
LK Is testing an expansion of evaluation?
PJ The test language is the source for running the test. EARL says there is
a URI for the test that maps to a requirement. When we specify the language
of the test, perhaps that's too low level. Want to make sure that things
scale - and scale at different points. If I have a checkpoint that says
"provide alt for content," there are different ways to do it. Can say do or
don't vs. having test ids for what I can do to validate.
LH I think the weakest point in our work is the expression of results. I've
seen everything from our earliest suites (a pad of paper) to work with DOM
where you get a visual highlighted result back (indicating pass/fail).
That's what interested me in this work. Not sure I see the distinction
between evaluation and report. A report is the result of running an
evaluation.
LK An evaluation is something that could be read by an authoring tool as
well as a particular report.
JT One way I envision this being used: if we have a number of different
evaluation tools and repair tools, and a language to express what the
authoring tool evaluates and repairs, then we can match up a report with a
repair tool that repairs those pieces. The report could contain the EARL.
PJ Making conformance claims is different than results of tests.
CMN Testing is good for feedback on the spec.
DD There are scenarios where the person does not know how to fix; all you
may know is that the browser has a problem. In some scenarios the repair
info is valuable; in others it is irrelevant. Therefore it is separated in
the language.
DDimitriadis The description of a test could be in x language, y
technologies. We want one eval language, so yes, separate. While specifying
DOM, if you write atomic tests, e.g. I want to test 1.2.3, it is easy to do
a conformance rating. You generate a result then put an indication on the
result. We would like to have scenario-driven tests: "a user wants to add 5
lines of text and change the screen, etc." Cannot express those as atomic
points; therefore the output language probably cannot be purely atomic. Put
this on the agenda. Capture things based on prose descriptions.
CMN The repair info that you might carry - ultimately its value is that you
can use it to fix things. The repair comes out of the tests, not the
language. It makes sense to make enough use of the language that you can
say, "here is the test; here's what it says about how to fix." This is
outside the scope of the language itself. I used RDF because the test
becomes a URI. I'd like to see us talk about what the language might look
like and what RDF might do for us.
MRK I don't see a problem with the language containing more info. The UI
can sort it out. Put in info about what needs to be tested, what has been
tested, advice for how to repair - show each in a different place.
KD Is it possible to make a scenario based on atomic tests? Can we
establish a relationship between test and scenario?
CMN We used RDF to describe and needed inference engine.
DDimitriadis Writing tests or results?
CMN Enumerate tests. Then enumerate rules for combining tests. If you pass
1, 2, 3 you get stamp A.
PJ Very familiar with XML, don't know RDF. How does it relate to XML?
MRK What is the scope of the tests? Can it express that part of the
document needs a test, then the whole doc, and then the site?
CMN Anything that has a URI - a single element, a whole site.
MRK Put them all together.
RN In my informal survey, only 1 in 50 state in contracts: use HTML 4. They
don't want to bind themselves. Need to carry an explanation of why
something fails. Need good examples, easy to read. One centralized tool: do
HTML, do x. One access point. Re: test conditions: an implementation issue
is that there are over 40 browsers where you can't turn javascript or CSS
on/off. If you make the test condition "now turn off css" or "invert
colors," then you don't rely on how a browser works; it will tell you how.
LK We've been hearing about meta-requirements, in terms of pinning down how
and whether to use RDF. Which of these considerations are the key things we
should focus on?
AB Why would you consider using RDF? XML?
RS This a.m.'s discussion is about what you want to describe and how to
point people to more info. In the course of describing these things, you'll
figure out which concepts to encode. RDF and a variety of notations will do
it in a variety of ways. Must first figure out what we want to express.
LK Ground rule: as we talk about general requirements, it should be
accompanied by a concrete example.
CS Would like to see output importable by bug tracking systems - what line
of code, which test, etc. - if you end up with 1,000 reports.
LK Want something that could attach any sort of machine readable info?
CS Want to see output; may be a tool issue. Bobby does this when you get a
report.
CMN Why RDF instead of XML? We wanted to refer to anything in the universe
as a test. The RDF model: X has relationship Y to Z. X, Y, Z are URIs.
LK How does that grab you?
AB Right. Seems that people needed clarification.
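The triple model CMN describes can be sketched minimally. A hedged illustration: the URIs and the failsRequirement relation below are invented examples for this sketch, not real EARL vocabulary.

```python
# Sketch of the RDF model: every statement is a triple (X, Y, Z)
# where X, Y, and Z are all identified by URIs.
triples = [
    ("http://example.org/page#img",
     "http://example.org/earl#failsRequirement",
     "http://www.w3.org/some-wcag#p1"),
]

def objects(store, subject, predicate):
    """Everything the store says `subject` relates to via `predicate`."""
    return [o for s, p, o in store if s == subject and p == predicate]

# Which requirements does the image fail, according to the store?
failed = objects(triples,
                 "http://example.org/page#img",
                 "http://example.org/earl#failsRequirement")
print(failed)
```

Because all three positions are URIs, the same store can hold statements about a single element, a page, or a whole site without changing the model.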
RS When LK and I talked about this, one question that we thought we might
want to answer was to describe the e-mail traffic over the last few weeks.
Ultimately, what we do with this will be expressible in XML (i.e. no angle
brackets in my description) rather than the mechanism. I drew this example
to tie to the N3 description. The notion is that we have stuff. Each oval
is a thing. One is "pat"; two don't have names. There is a relationship
between pat and the 1st unknown thing called "child." Imagine that the
objects we're talking about are some bit of markup that exists on the web.
The thing on the left is XHTML markup. It has an img element that fails one
of your tests: #img failsRequirement http://www.w3.org/some-wcag#p1.
Document and conformance claim that is being failed. What is the
conformance test? That's for you to define. Prose is good for human
readers, not for machines. What else can we say? We add relationships
between the conformance tests and what repair tools can do. You get to
describe the relationships, their names, their semantics. We can name
anything. failsRequirement is the name of a concept; it will have a URI
that you can follow to find out what it means. We can find out info about
any thing in this relationship. To make it easier to express what's going
on in this image, we have RDF and N3 that we can express it in.
/* go to pat example with 2 children example in N3 primer LINK */
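RS's failsRequirement statement can be written out in an N3/Turtle-style notation. A rough sketch only: the serialization below is simplified, and the earl namespace URI is invented for illustration.

```python
# Sketch: render one (subject, predicate, object) triple as an
# N3/Turtle-style statement, URIs in angle brackets, terminated by ".".
def to_n3(subject, predicate, obj):
    return f"<{subject}> <{predicate}> <{obj}> ."

stmt = to_n3("http://example.org/page#img",
             "http://example.org/earl#failsRequirement",
             "http://www.w3.org/some-wcag#p1")
print(stmt)
```

The notation matters less than the concepts; as RS says below, agree on the relationships first and worry about syntax later.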
RS What are the attributes of tests that we want people and machines to
discover? What does this relationships mean? How we write them in e-mail,
we need to agree on notation but focus on concepts. Don't spend a lot of
time on syntax. Further info is who made the claim, what did they base it
on, what tool decided it was the relationship that exists, etc. Let's
define the concepts then go to the RDF spec.
CMN What makes this useful in the real world? The use case that PJ and NIST
have: do testing, get results, etc. Another is Jutta's. A small shop may
have 8 tools; a larger one may have 8 departments. What kinds of tools
exist?
RN Lots of tools exist. Test conditions will depend on the tools and pages.
We need to prioritize what we do first.
JT There seems to be an open issue about what EARL will actually be used
for.
DD Here is a vocabulary of things to express.
testinfo - framework (wcag, svgspec, htmlsyntax), test id, test purpose,
code?, manual y/n
result info - individual result (url pass/fail, confidence level, syntax
error line)
repair info - (optional)
run info - constraints: platform, operator instructions (merge EARL
language with harness language)
Are there additional items we should store and process? /* get rationale
and requirements from DD */ Perhaps syntax error line is based on something
more generic.
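DD's vocabulary could be modeled as plain records. A sketch under stated assumptions: field names follow the categories above, but every value, the class layout, and the example report are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TestInfo:
    framework: str   # e.g. "wcag", "svgspec", "htmlsyntax"
    test_id: str
    purpose: str
    manual: bool     # manual y/n

@dataclass
class ResultInfo:
    subject_url: str
    passed: bool
    confidence: float  # confidence level for heuristic checks
    detail: str = ""   # e.g. syntax error line

@dataclass
class EarlReport:
    test: TestInfo
    result: ResultInfo
    repair_info: str = ""                          # optional
    run_info: dict = field(default_factory=dict)   # platform constraints etc.

# Invented example, loosely following the e.g. list later in the minutes.
report = EarlReport(
    test=TestInfo("wcag", "cp1.1", "images have text equivalents", manual=False),
    result=ResultInfo("http://example.org/page", False, 0.9,
                      "syntax error, line 12, missing alt"),
    run_info={"platform": "linux"},
)
print(report.result.passed)
```

Keeping test info, result info, repair info, and run info in separate records mirrors DD's point that repair info is optional and run info may merge with a harness language.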
LR Under result info, I would like to see person info as well as the date
tests are run. It is important to note that these reports can be used for
different purposes and in different ways. Depending on whether I'm going to
repair, or use it as a report to claim conformance, the type of info I get
out of the report, or how I use the report, will be different. Think about
how it's used to make sure it's all covered.
LK After the break let's flesh out additional info that we want here.
/* break - most of the RDF folks leave */
/* DD changes "schema" displayed on screen on the fly to incorporate
comments */
RS notation and syntax are not user interface. The purpose of the syntax is
to exchange stuff between tools.
RN In two weeks in New Orleans there is a conference by Carnegie Mellon.
Seems to be synergy. SEI - Software Engineering Institute. A model for how
you do software engineering. Those are foundations for building software.
RS The way you specify the date of the result and the page - what
information do we want to record?
LK The question is: do we want to record an individual date stamp for each
element? Some images change; for example, banner ads will change.
RS Test date applies to all.
WC Not necessarily. A test could be on an element or even attribute level.
A tool could care about only one or the other.
RN code on the page, versus what's on the page.
CS Separate the test from the subject of the test. The subject might be a
page or an image, running the same test.
WC Harness?
DD A framework of interaction that leads you to a result. There are
languages to express how you move from page to page. The test info is part
of the harness.
LK The question on the floor: what does the test mean and what is the scope
of these things?
CMN The scope of the test is whatever you are making the assertion about.
It should be a requirement at an atomic level: an individual object, or a
page, or whatever atom you can think of.
KB We need to say that something applies to more than one URL. E.g.,
alt-text applies not only to the image file but to the image tag. An
assertion about alt-text is about 2 things.
LK good point. let's get to that point after this one.
CS It seems that that is not a test. The test is what you do. If you are
testing a site, you might test index.html (the subject); the test is "does
it conform." Don't want to mix those in a single term. Suggest a wording
change: test case - do these steps - and test object.
CMN Think we're in agreement except on terminology.
JT 2 pieces of info that relate to a test: what the outcome is and how
we're testing it.
JR Instead of something fails, it's that "it fails on the basis of the test."
JT What is the test testing for?
RN Trying to describe test conditions.
CS language should express both automatic and manual tests.
CMN In one sense there is a test that you run. It says "open the browser,
turn on the feature, make it happen." That is test case info. Result info
says, "this is the version, browser, etc." Or a test case: "go to this
store, to this shelf, and buy the 3rd box...etc." Whatever it is, you need
the instructions.
CS I have seen this factored effectively: test case - what you have to do
again later, result - what happened.
CMN Are we testing rendered stuff or code? 1. http.w3.org/image - does the
alt-text work? Human result. 2. A validator says some element fails some
requirement of the HTML spec. Completely machine run, code level. I don't
think we are writing the test language; we're discussing what we expect to
be in it - rather, the result language.
KB Generalize platform to context.
RS Don't get rid of details like platform - will help you keep track of
details.
PJ Depends on the test you are doing.
RN I see 2 issues. We're all users. Perhaps we're getting caught up on test
methodology. We're trying to show our wish list. Can we brainstorm what we
want, then talk to a test engineer for the best approach? The best way can
change - test case/test suite, etc. What do we want to accomplish?
RS Don't worry too much about packaging. I suspect that one objective is to
capture as much info as we can so we can precisely reproduce the test.
Focus on what data needs to be captured and for whom? don't fall into the
trap of generalizing from one detail. Capture as much detail as comes out
of the brainstorm.
JT We need to address what the info will be attached to. Describe the
results. Scenario: I head company X; we have a document that shows how to
conform to 508 - must do these 10 things. I want to find a test tool that
does these things (perhaps more than one tool). Then I want to repair them,
so I find tools to help me repair these things. Need a description of the
evaluation tool and repair tool and what they test or repair.
CS In a bug report I want to see: the environment (the context that the
tester believes is important - might be platform, browser, etc.),
reproduction steps, date the test was run, version of code against which it
was run, steps of the test, how to make it happen, the expected behavior,
observed behavior, and optional things - general comments, line number,
messages created by the system, copy of code, screen shots. This is for
interactive testing, but could be applied to automated testing. E.g.
reproduction steps: run page x against schema y. Expected results: I get
this file with these results.
LK sequence of steps looks new.
CS If a test is not reproducible, that is an error on the tester's part.
Expected results are still not captured.
LH Should be in test case info.
CS The actual code is not there. Image 3 could now be image 5.
CMN Only if you've changed the page.
DD Could also have copy of subject of test.
CS Likely that the test is run on Monday and the person who fixes it won't
get to it until Friday, and it will have changed by then. It won't always
be available, e.g. coming from a cgi.
MC Line of code may not be the right approach. Could have 10 images on one
line.
CS Enough geeky info so that the developer can find it easily.
RN A specific test condition: if doing a test for WCAG, you could pick up
info in the browser; what about screen readers that don't provide that
info? Would we have a box for other devices?
CS Suggest discussing context at tomorrow's meeting.
LK Let's capture everything.
RN Other device.
CMN I want to talk about tools in general. I want to know what tools exist
already.
KB We need to be able to include a cc/pp profile.
LK Can you make that a specific case of something more general?
KB All the things we're talking about as devices and platforms - one way to
do it is CC/PP, but that's not the only way.
RS Partly a question of how much you snapshot.
PJ concerned about things changing.
RS Recommend a practise that when we refer to a test case, if we change the
test case we give the changed version a different identifier. How much
history do you want to record? Propose in each circumstance how to record
change info.
RN Do we have the export to another tool?
DD The requirement is listed. Just write a parser.
CS That does impose some requirements.
DD Might account for steps in the form; it may not have the data.
CS For these systems, it would be invalid and not be imported.
CMN A value of including stuff by reference is you can ignore it by
reference as well.
MM If there isn't some way to extend this beyond how we're envisioning it,
it won't be useful to all.
LK Doesn't this get into what RS was talking about? The benefit of
implementing in RDF is that you can extend it.
RS Right. Having a flexible structure, rather than a fixed DTD - most DTD
tools break. I suspect we want a structure where developers can include
structure of their own without interfering with other tools.
LK Is this captured here?
DD Part of the framework. It is a requirement.
CS Agree it has to be flexible. I think the verbs should be user-definable.
You may only care about pass/fail, or perhaps something richer. Might want
to look at modularization - perhaps an automated module and a manual
module.
DD Captured in subclasses - page/atag/uaag/suite/bugtrack.
CMN A good thing about an RDF module is you can say how much you are
saying, then qualify. Different scope of statement - one line, one word,
etc.
DD's notes from discussion (from front of the room visual display)
rationales
· keep track of test/evaluation runs (store/transfer)
· processing in tools (pretty printed report, generate compatibility
chart, feed other tools, compare with previous runs)
requirements
· applicable to testing content/ua/service
· generic vocabulary/extensible subclass for
page/atag/uaag/suite/bugtrack...
· uri based
EARL schema
· testcase info:
· test suite id (svgspec, wcag, htmlsyntax...)
· assertion id (detail in framework: checkpoint, intermediate test
assertion)
· test purpose (prose)
· test url (for online test suite)
· auto/manual y/n
· operator instructions, reproduction steps, expected result
triplet:
· subject of the test (url, version/date, snapshot)
· result statement (date, what, who, context: ccpp, platform,
devices.. comments, repairinfo)
· testcase info (suite, url, purpose, op instruction/steps/expect,
manual)
e.g.
· http://example.org/page#img[3] danield says it fails with lynx on
linux, xmas 99.
· http://w3.org/tr/wcag#cp1.1, using bobby, manual check
· http://foo.com/svgplayer1.23, passes
· http://w3.org/tr/svg/ts/assertion1, part of
http://w3.org/svg/testsuite1.0
· http://example.org/page on 2001/3/1, syntax error, line12, missing alt
· http://validator.w3.org, auto mode
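The subject/result/testcase triplet above can be sketched as records merged per subject, echoing the earlier scenario of combining reports from several evaluators. The URLs and evaluator names are taken from the examples in the minutes; the record layout itself is hypothetical.

```python
# Sketch: two assertions about the same subject - one from a human
# evaluator, one from an automated tool - merged into one report.
assertions = [
    {"subject": "http://example.org/page#img[3]",
     "testcase": "http://w3.org/tr/wcag#cp1.1",
     "result": {"who": "danield", "what": "fail",
                "context": "lynx on linux", "date": "1999-12-25"}},
    {"subject": "http://example.org/page#img[3]",
     "testcase": "http://w3.org/tr/wcag#cp1.1",
     "result": {"who": "http://validator.w3.org", "what": "fail",
                "context": "auto mode", "date": "2001-03-01"}},
]

def merge_by_subject(assertions):
    """Group assertions by the subject of the test, so one report can
    show everything said about each URL."""
    merged = {}
    for a in assertions:
        merged.setdefault(a["subject"], []).append(a)
    return merged

report = merge_by_subject(assertions)
print(len(report["http://example.org/page#img[3]"]))
```

Because subjects are URIs, merging is just grouping by URI; the evaluators never need to know about each other.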
Next steps
JT This afternoon: AU/ERT comment on WCAG 2.0, then go through AERT open
issues with WCAG.
· Agree on process for dealing with HTML-specific AERT open issues.
· ERT present EARL to WCAG - discuss possible conformance claims A,
A+, AA, AA+ etc.
· WCAG make machine processable assertions? ERT write them?
· questions about testability in 2.0 checkpoints.
· ER and AU comments on WCAG 2.0
Afternoon session (AU/ERT/WCAG)
Agenda
· Open AERT issues for WCAG
· EARL and WCAG
· EARL and AU
· WCAG 2.0 and its implications for AU and ER
· Timing of 2.0 guidelines
Participants
CMN, Raman, Helle, Loretta, Jason, Andi, Cynthia, Marti, Jan, Wendy,
Donovan, Marja-Riita, Michael, Brian, Josh, Matt, Harvey, Len, Jutta
Open AERT issues for WCAG
WC HTML specific
LK Solve in relation to 2.0, unless it causes a need for errata for 1.0.
JW Right, that's what we agreed upon in WCAG a few weeks ago.
CMN The issues seem mostly techniquey, but it's valuable to go through
them. Spend 2 minutes on each of them, or 2 minutes and agree to postpone.
Useful from the AU perspective for how WCAG will approach them.
JT A valuable piece of going through them today: they might relate to the
revision of WCAG.
WC Move that we limit our time; we have 3 groups here and more interesting
issues to discuss.
JT Limited discussion at beginning? 1. Limited at end? 5.
EARL and WCAG
LK Are there people here who have not heard about EARL? Briefly, it is a
machine-readable representation of an evaluation of a web page. It could
contain suggestions for repair. Applications: a raw form that would go into
a report tool. Several different EARLs, or conditions, could be merged into
one (human-readable) report. Also fed into an authoring tool as a
convenient mechanism to give the tool a list of what to repair. In terms of
WCAG, there are 2 possible ways it could impact WCAG. If we assume it is
only used with WCAG, then each statement could point into techniques (in
terms of 2.0) and then give "passes/fails" or another rating. Another
approach is for EARL to produce lower-level statements that don't have
checkpoints in them, e.g. "alt text is missing," with rule sets that take
that as input and say that in the context of WCAG it violates 1.0 or 1.4,
or a 508 ruleset. With the 1st approach, WCAG would have a machine-readable
reference; possible to point into WCAG. Instead of plain text, in terms of
HTML it would say "this is missing an attribute." On the other hand, if
EARL reflects more fundamental facts from which you derive checkpoints,
then you need a rule base to convert to WCAG statements. Then, who writes
the rule base and is it informative or normative?
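LK's second approach - low-level statements plus per-standard rule sets - might look roughly like this. The rule tables are illustrative stand-ins, not normative mappings (the Section 508 clause shown is only an assumed example).

```python
# Sketch: a low-level finding is mapped to checkpoints in different
# standards by separate rule sets; the tables below are invented.
WCAG_RULES = {"alt-text-missing": "WCAG 1.0 checkpoint 1.1"}
S508_RULES = {"alt-text-missing": "508 1194.22(a)"}  # assumed mapping

def violations(finding, ruleset):
    """Checkpoints the finding violates under the given rule set."""
    cp = ruleset.get(finding)
    return [cp] if cp else []

finding = "alt-text-missing"
print(violations(finding, WCAG_RULES))
print(violations(finding, S508_RULES))
```

The open question in the minutes is exactly who maintains such rule tables and whether they are informative or normative.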
CMN Interpreting that into checkpoints - working out how to do a test of
your spec.
RN Timeline for implementation of EARL?
LK Nothing has been committed to?
RN We have people interested in EARL. Talking to Karl Dubost, he said there
was a QA conformance meeting in late march.
CMN I will be there to represent EARL.
LK We are interested in those people joining the effort.
RN One tool on the interest group does a WCAG conformance guideline test.
Will there be correlation?
LK This is a format for the output of tools, not a tool.
CS One bullet point of the EARL proposal is to compare results to one or
more standards, e.g. internal intranet sites. Sounds like something like
specML. If you feed a spec in, it has to be machine-readable. What would
that look like? It would be a wonderful thing to have.
CMN Comparing results isn't new. Use an RDF thing. It does not provide a
testing language; that's a separate project.
CS How do comparison?
CMN When do the tests, ability to record test and say what test was.
CS How does the comparison work?
CMN If you have 15 things - does the img have a useful alternative? etc. -
then say for 508 conformance you need these 3 tests, for WCAG you have 2,
3, 7, 9 out of a million. It is designed to allow other tests to be added.
CS Is a machine-readable version of WCAG out of scope or not? That's what
I'm trying to determine.
JT It's not a testing tool, it's a method for tools to talk to each other.
LK As far as our charter is concerned we have a general clause, "helping
development"... My personal opinion is that, without formalizing it, we
should use an existing method with RDF. Don't want to invent something new.
JW ER will have to resolve how they want to support multiple specifications
in their system. If they think it's useful that the test output should be
mapped to the requirements of multiple specs, then they must figure out how
to implement the system. Then it is a joint work item between WCAG and ERT.
What are the issues that involve both groups that need to be decided, e.g.
the issue of conformance claims? Daniel put together a schema with which to
make conformance assertions. If the new language is appropriate for that
role, and provides appropriate granularity, then WCAG may adopt it as the
base for conformance claims.
LK At this point, judging by the discussion, people are still formulating.
I don't think we could come to a conclusion this afternoon, it is more of a
heads up. The simplest issue is to at least point into text readable
portions of it.
JW Any XHTML version, including techniques, would provide anchors. 2.0
techniques are written as checkpoints in the 1.0 version. Possible to refer
to any of them from an external source.
HB EARL has importance for potential readers, for them to say: you conform
at a level that my AT can handle. Whether there should be a standard link
from the doc to the place where the review may be is something you may want
to consider. Versioning issues: if successful with EARL, go back to the
creators of docs. A particular review can age. We should be able to notify
the reviewer any time the doc has been updated.
MC EARL as a language speaks to saying this thing conforms with that, and
that is a URI. Machine-readability goes into defining what "that" is. For
some guidelines this will be easy, for others not. The abstraction of
guidelines from techniques helps: a technique can be put in the context of
a specific language. Some things cannot be tied down to one thing, e.g.
navigation bars: how do you define that in such a way that I will always
find it and never fail to find it? What that means for WCAG is that it's an
issue when thinking about the guidelines: while not tying the guidelines
down to being too specific, we need a way to define what they apply to.
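The shape MC describes here, an assertion that "this thing conforms with
that", where "that" is a URI, can be sketched in a few lines. This is a
speculative illustration only: the EARL vocabulary was still being drafted
at the time, so the property names (earl:assertedBy, earl:test, earl:mode,
etc.) and the Turtle-like output below are assumptions, not the real schema.

```python
# Minimal sketch of an EARL-style assertion, using a hypothetical
# vocabulary (the EARL schema had not been finalized when these
# minutes were taken; the property names are illustrative only).

def earl_assertion(assertor, subject, test_criterion, result, mode):
    """Render one conformance assertion as Turtle-like RDF text.

    test_criterion is a URI pointing at a specific, addressable
    requirement -- the "sufficient specificity" the resolution asks
    WCAG to provide for each technique.
    """
    return "\n".join([
        "[] a earl:Assertion ;",
        f"   earl:assertedBy <{assertor}> ;",
        f"   earl:subject <{subject}> ;",
        f"   earl:test <{test_criterion}> ;",
        f"   earl:mode earl:{mode} ;",  # human judgement vs. automated check
        f"   earl:result [ earl:outcome earl:{result} ] .",
    ])

# A human reviewer ("manual" mode) recording a pass against one
# hypothetical technique anchor:
print(earl_assertion(
    assertor="http://example.org/people#reviewer",
    subject="http://example.org/page.html",
    test_criterion="http://example.org/wcag20-techs#checkpoint-1-1",
    result="passed",
    mode="manual",
))
```

Because the test is just a URI, the same record shape works whether the
outcome came from a tool or from human judgement, which is the point CMN
makes below about EARL being agnostic on how the test was performed.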
MM Making a machine-readable spec lends itself to automated tools, but
techniques will be non-deterministic, e.g. wherever we use the word
"minimize". We cannot look at a doc and determine whether we have minimized
the use of images.
CMN Versioning is a requirement for EARL, and out of scope for this
discussion; it's already listed as a requirement. Notifying reviewers is not
yet in EARL. EARL lets you point to a test, and is agnostic about whether
the test is human or automated. Mapping how you test whether you've met a
requirement of WCAG is something we have to do; we're obliged to show how
you conform to the spec. Whether we do that by writing up each technique
with a test case, or afterwards, and how, is something we have to decide.
If we make those tests available, people can reuse known tests. That is not
a WCAG issue.
JW Can we bring this discussion to a few specific points?
· WCAG should decide that it will consider EARL as a possible basis
for conformance claims
· WCAG should affirm that, in designing 2.0 techniques, it will ensure
that each technique and approach to testing is identified with sufficient
specificity that EARL can refer to it.
We need to confine ourselves to how we will carry this forward.
/* vote */
JW Only members of WCAG are allowed to vote on this.
WCAG will consider EARL as a basis for conformance, and places this on the
open issues list. In favor: 11; against: none. Resolved.
WCAG undertakes, in defining techniques, to ensure that each requirement
can be referred to with sufficient specificity to enable test results to
be associated with it.
CS Clarification: we won't use words like "minimize", or we'll make them atomic enough?
CMN The test has a URI.
KHS We'd have to work hard to not do it.
/* vote on JW's point 2 */
Resolved: WCAG undertakes, in defining techniques, to ensure that each
requirement can be referred to with sufficient specificity to enable test
results to be associated with it.
LK The question of machine-readable rules: I don't think we can decide it now.
MM Without resolving normative vs. informative, we end up with orphan
checkpoints that EARL and related testing tools won't be able to test for.
CMN Resolving normative vs. informative is the work of the group in
explaining what it means to conform.
LK I think there is a misconception. Inability to automate does not put
something outside the scope of EARL. It can record the results of human
judgement, e.g. "clear and simple language".
MM: A QA dept. won't be able to determine "minimize". There will be orphan
requirements that beg the answer "well... ok... sure". That is not fully
formed compliance with subjective tests.
WC: There will always be subjective tests. As long as the manual steps are
clear, there's no problem. Claims will always be subjective. If an assertion
is machine-readable, it's easier to communicate your case.
CS: "Minimize" is defined at the beginning by designers, not at the end by
testers. The assertion is made following design.
JW: Proposes to move to the next agenda item.
CMN: Still valuable to thrash this out.
PJ: Wants to talk about necessary and sufficient conditions for techniques.
CW: We should continue the discussion.
/* vote - defeated, discussion will continue */
KB: Agrees with JW.
CMN: EARL doesn't care about the wording of tests. It does allow different
results for the same test. Its primary use is not legal; it's so stuff will
work. It builds up a body of previous test judgements.
Josh: EARL expresses too much. WCAG techniques need to be clearer about
what repairs are.
MM: There are design principles that use "avoid" or "minimize".
LK: Whether checkpoints or techniques are subjective is irrelevant to EARL.
MC: Lots of challenges for EARL to handle conformance for WCAG.
Katie: If terms need to be defined consistently, it's something for the
glossary. An implementation model for setting up an accessible site.
WC: Passed to EO.
WC: Would like higher conformance granularity.
CMN: Implications for AU and ER not EARL. Let's move on.
JW: Yes, let's move on and leave aside WCAG issues.
JT Four issues here: conformance statements, operationalizing requirements,
evaluation, and the EARL language. This is not just about WCAG; it relates to all.
RN To plant a seed: the spec needs to read more like a requirement;
requirements rather than words.
KHS important in 508 stuff to appreciate good faith efforts.
EARL and AU
JT The issues are the same.
CMN Yes, the discussion would be the same, except substitute ATAG for WCAG.
LK Beyond applying EARL to specify whether an authoring tool passes ATAG,
one purpose of EARL is as an info feed into the authoring tool. Do you need
to take that into account when writing the spec, or can it be bolted on later?
CMN Implementing EARL is a useful technique for a bunch of AU stuff. Not an
issue, but an action on the AU WG.
HB Also an action on EO: usability and accessibility are foreign to almost
all of the books that address web authoring.
LK If I create an EARL report, there is an issue of whether an AU tool would
import the EARL report. A specific scenario: Dreamweaver. Should it be
seriously considered? If so, keep it in the back of your mind with the
other ATAG requirements.
JT How is EARL used to express ATAG conformance, and how is it used to
repair a doc?
RN If it is stored where it can be referenced locally, great. I agree with
Harvey; we need to use proper terminology.
CMN EARL is considered. It does not give rise to other requirements or
issues; it's an implementation issue.
JR Not sure how EARL is stored: as a file, or as info within a doc?
LK That's an implementation detail. The info is there, linked with the doc.
Like CSS: in the head or linked.
CMN Ditto. In the ATAG techniques, we have to figure it out.
JR One reason this comes up: originally it was so you wouldn't have to do
manual checks over and over again.
CS If it becomes a requirement that extra XML be included in the doc, it will likely not be used.
JR It could be stripped out before publishing.
CMN Whether to maintain the stuff or toss it depends on the tool and the stuff.
MRK There are mechanisms like annotations; there are RDF servers.
LK Propose that an issue be added to the AU issue list: add a requirement
that tools import EARL. Note that you can't reasonably decide until you
hear that EARL has firmed up.
Resolved: add to the AU issue list: add a requirement that tools import EARL.
Note that you can't reasonably decide until you hear that EARL has firmed up.
HB The server should deliver EARL evaluations to the requesting client so
that it can deliver content appropriate to the user's capabilities.
CMN The UA checks EARL statements about a page.
Action CMN take issue about UA checking EARL statements about a page to ERT
and UA.
Issue: implications on EARL as to how to express for UA.
/* break */
/* donovan minuting */
/* wendy's personal notes */
Priorities are inherited; government policies are saying Level A of WCAG,
and not necessarily of WCAG 1.0. Requirements go from general to
language-specific; people need to test for each language. We end up with
1.1.1-1.1.15 for HTML, 1.1.16-1.1.26 for SMIL, 1.1.27-1.1.37 for SVG, etc.
Broken up by technologies, but we also have our core techniques, which are
cross-technology.
The top level is also for policy makers and managers.
"Fractal worlds": navigation paths through different types of users ("how to
use this document"). Techniques will still exist. HTML checkpoints vs.
techniques (examples, screen shots), formatting: we still have to collect
all of the data.
PJ What things are necessary, what things are minimally needed, etc.
open issues: CMN guideline, checkpoint, requirement, technique OR
guideline, requirement, checkpoint, technique
WC themes, guidelines, checkpoints, techniques instead of grouping them??
4 themes, 22 guidelines, 89 HTML checkpoints, 82 SMIL checkpoints, HTML
techniques, SMIL techniques, etc.
Techniques database: Marti. AERT open issues will be incorporated into the
WCAG 2.0 techniques. Agreed-upon structure: framework.
/* WC returns to minuting */
Resolved: WCAG will adopt into the HTML techniques those aspects of AERT
that are relevant.
Other cross group issues
JT In courseware packages there is content that allows interactive
authoring via the content. Also, when is the best timing for ATAG 2.0 to
draw in WCAG 2.0?
WC CR.
JB Get input at the lower tiers so we don't have to go back to the beginning
of the game.
CMN The goal is that we would shadow you up through the process. Our best
guess is going to last call and then CR.
WC Then shadow our working drafts.
CMN Give us warning for last call.
JW I've seen W3C specs that say "the editors and working group believe the
next draft will be last call." That is a good idea.
Next F2F meetings
JW For all three?
CMN Do we want to do F2F meetings in conjunction again?
JB An encouraging suggestion, in terms of the W3C planning process: I'm
interested to hear whether you will have more joint meetings. What are your
plans over the next year? In generating ideas, we need to rotate the
meetings geographically. We need to get to Europe and Asia.
CMN A bunch of AU people aren't here. This has been really useful,
particularly where we have a joint meeting and then the individual groups
have meetings but can run next door to get clarification.
WC In October, WCAG proposed going to PR in November of this year and doing
that in Australia.
JB Poll yesterday: "Want to do this again?" "6 months?" 38 yeses. "12
months?" 38 yeses. So another opportunity between 6 months and 1 year.
CS How many people going to Hong Kong?
/* 7 */
/* unanimous - do this type of meeting again at a plenary like this. */
RN I like the technology meeting yesterday to find out what is going on.
JB how soon to do next one?
6 months - 3
12 months - majority
9 months - 2/3
Location: Australia, Europe, Japan, Greece, Haiti(?)
What about a WAI half-week (3 or 4 days)?
Lots of joint, plenary, and single-group stuff.
JB Many organizations say they cannot get approval for meetings in Hawaii,
since it's seen as a junket. Do 4 days count as a week?
CS Flying somewhere with an 18-hour flight, 8 hours of meetings, and an
18-hour flight back is impractical.
JW Europe easier?
CS For me Europe and Australia are somewhat similar (in terms of travel).
Helle More serious talk about going to Europe: we have very few people from
Europe here, particularly where we have smaller companies who are not part
of the W3C work but are interested in accessibility or required to consider
it. It will help outreach. We must do more in Europe.
KB I agree with the Europe idea. We were bought by a European company;
Brussels would be good.
JB The WAI has an explicit obligation this year and next to hold a certain
number of meetings. We've complied with it since the meetings in Bristol;
however, the intent is outreach and recruitment. There are fascinating
things happening in Europe, due to the commitment at the European Union
level: an initiative within each country, and everyone is doing it
differently. A WAI most-of-a-week in Europe would be great. Greece is one
of the worst.
WC What countries do you recommend?
JB Can't answer.
WC Finland?
RN Couple this with an outdoor activity, like an adventure through a museum
or some other way to enjoy the area. The bonding.
$Date: 2001/03/07 17:50:11 $ Wendy Chisholm, Jan Richards
--
wendy a chisholm
world wide web consortium
web accessibility initiative
madison, wi usa
tel: +1 608 663 6346
/--
Received on Wednesday, 7 March 2001 14:15:22 UTC