minutes from 20 june 2001 GL f2f

apologies for not getting these out in a more timely manner (read: in june),
but i was unable to send them from the f2f meeting, and have been wrestling
with my laptop ever since, attempting to extract them from it...  i finally
liberated them from my laptop, so i am posting all of the materials i have
from the f2f in a very raw form -- without compiled lists of action items,
resolutions, proposals, and open issues...   if anyone else has any
materials, please send them directly to wendy (wendy@w3.org)

again, i apologize for the unavoidable delay,
gregory.

-- BEGIN MINUTES --
Web Content Accessibility Guidelines WG Face2Face
  Date: Wednesday, 20 June 2001 (Day 1)
  Venue: CWI (Amsterdam)

ATTENDEES
 In Person
   Wendy Chisholm, WC (chair)
   Daniel Dardailler, DD
   Katie Harritos-Shea, KHS
   Matt May, MM
   Charles McCathieNevile, CMN
   Antti Raike, AR1
   Jan Richards, JR
   Chris Ridpath, CR
   Adam Reed, AR2
   Gregory J. Rosmaita, GJR (scribe)
   Lisa Seeman, LS
 By Telephone
   William Loughborough, WL (by phone)
   Gregg Vanderheiden, GV (WG co-chair/by phone)
   Jason White, JW (WG co-chair/by phone)
 Translators
   Maya DeWitt, ASL translator
   Caroline O'Leary, ASL translator

1. INTRODUCTIONS

WC: work for W3C, from the USA; I'm here to chair the
meeting in the physical absence of our 2 chairs; hoping we
can get a lot done at this meeting and that people will
leave the meeting enthused and eager to move WCAG2 forward

AR1: European Union for the Deaf; interested in the work of
the GL WG, web very important to the deaf, as well as to all
people, disabled or not; working on distance learning
software/solutions for deaf students

CR: from the Adaptive Technology Resource Centre (ATRC) at
the University of Toronto; editor of Accessibility
Evaluation & Repair Techniques (AERT); lead developer of A-
Prompt tool; from Canada

JR: from the ATRC; co-editor of Authoring Tool Accessibility
Guidelines ATAG; from Canada;

AR2: teach information systems at California State
University-Los Angeles; most of my graduate students work
fulltime as IT people in corporations; have a great deal of
first and second hand consulting experience meeting various
needs;

DS: University of Dundee (Scotland); digital media access
group--usability and accessibility consulting group;
department of applied technology--provide solutions for
persons with disabilities; Alan Newell is chair of my
department--internationally recognized expert in the field;
here to talk about usability testing of WCAG

MM: from the USA; background in commercial web site
development and testing; working on HTML/XHTML techniques
for WCAG

PB: at Utah State University; working on a project called
WebAIM--mainly an educational initiative to help
universities make their web content accessible

LS: made web tools for people with learning disabilities--a
project that died a sad death due to bureaucracy; carrying
on doing web accessibility work--more generalized, using
global formats

CMN: with W3C; from Oz; staff contact for Authoring Tools;
editor of ATAG; work extensively as member of the PF
[Protocols & Formats] working group as well

GJR: from New Jersey; WebMaster and Minister of Propaganda
for the Visually Impaired Computer Users' Group of New York
City (VICUG NYC); been active in WAI since 1997, when I was
invited to work with the WG formerly known as HC, and now
known as PF; am the Interest Group representative to the WAI
Co-Ordination [read: Chairs'] Group

KHS: from the USA; work for US government agency as 508
coordinator for accessibility

DD: from France; technical manager of the WAI;

GV: from the USA; director of the TRACE Center at the
University of Wisconsin at Madison; co-chair of GL working
group; it's 2 in the morning where I am right now

// DD tries to call Jason White (JW) //
// the minutes go "live" via the projector //
// JW joins //
// the minutes are live!!! //

AR2: if this is up to date, the content modes are not in 4.1
-- I thought that they had been put in

WC: this is from March; no resolution on content modes

// administrivial discussion amongst the W3C staff people //

GJR [typed to screen]: is everyone who can see 'em happy
with the projection of the minutes?

// YES //

WC: let's go ahead--David, are you ready to give your
presentation

DS: have a floppy with a PowerPoint presentation on it--
could project it while I speak

// WL joins //

// GJR suspends active minuting so that his laptop can be
used to project DS' slide show; minuting continued by CMN by
logging the active IRC session //

GV: slide 11 -- you said that you get feedback from
participants -- assessment by experts of the content produced

// DD tweaks the phone //

GV: people who are deaf using interpreters have difficulty
raising their hands because of lag in interpretation

GV: slide 11 -- 2 usability items--feedback from
participants; assessment by experts of accessibility of
materials produced -- how are you going to parlay that back
into usability of the GL rather than the expertise of the
users

DS: that stage is there to ensure that WCAG serves its
purpose; asking experts to validate info that participants
create allows us to judge conformance to specific GLs or
priorities -- which are in fact met; that's why that was in
there--as a quality control over content produced by
participants, to see if WCAG is doing what it is intended to
do

GV: temptation to draw causative conclusions -- they didn't
do this, therefore the GL might not be useful in that area --
participant may have made a decision not to use a checkpoint
because they didn't understand or agree; if experts are on
hand, can look at it and see what is missing, which could be
turned into a question to ask participants -- why didn't you
do X -- did you not see it, etc. -- user might say "I saw it
but didn't know how to do it", etc. -- learn why the
guideline wasn't followed, rather than assume it wasn't
followed due to usability

DS: agree completely--that's an extra stage--a structured
interview following the expert assessment, or following
observation of a GL not being implemented

GV: do as close in time to the event as possible

LS: few questions: 1) you said you give the participants the
guidelines, and I think that was the first thing that
happened to the participants in this exercise -- no control,
no mechanism to know what they would have put in without
WCAG -- give them the task first, see what they do
(exercising common sense, for example), then introduce them
to WCAG; 2) in selecting users -- do they represent a
cross-section from an internationalization point of view; a
cross-section of people with technological know-how and
those who came to computers later in life -- not one
community from which to draw participants; 3) another issue
-- give them existing web pages and ask them to spot
accessibility violations; one often comes across sites where
people have made an effort to be accessible but have
failed--sites tend to be created using a tool, which should
be involved in assessing / identifying situations where
there has been a genuine attempt to make a site accessible,
but when analyzed by an expert, it isn't; think of when
FrontPage makes a TABLE layout [draws picture] -- 2 columns,
2 rows -- top row has the logo, etc., the other has a side
menu -- that's how one would assume they put in links, in
the embedded side menu; a site I came across had made a
noticeable effort to make the site accessible, but the tool
used to construct the page, instead of using an embedded
table, has a rowspan of 2 with the word "Welcome" -- a new
row that includes both columns (right-hand and left-hand), a
new products link in the left-hand column followed
sequentially by "Who We Are" or "Contact Us" embedded in
separate paragraphs as you read cell by cell; might be
really good to compile a list of such violations as we come
across them and give that info to the usability group
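
A minimal sketch of the kind of layout-table markup LS
describes (hypothetical, invented for illustration -- not
markup from the actual site):

  <table>
    <tr>
      <td rowspan="2">Welcome</td>
      <td><img src="logo.gif" alt="Acme Corp"></td>
    </tr>
    <tr>
      <td>
        <p><a href="products.html">New Products</a></p>
        <p><a href="about.html">Who We Are</a></p>
        <p><a href="contact.html">Contact Us</a></p>
      </td>
    </tr>
  </table>
  <!-- read cell by cell, the order is "Welcome", the logo,
       then the menu links -- an order the visual layout does
       not suggest, so navigation and content can interleave
       unexpectedly -->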

AR2: rather than have a list of violations -- since new
violations are being invented every day, a list isn't
realistic -- need a testing procedure; a first-order testing
procedure for GL1 is "see if everything shows up in Lynx" --
if it does, then GL1 is about 99% met; what we are going to
need for GL2 is something where, instead of just getting a
list of possible violations, these people work in the kind
of tool environment where they have an interactive view of
accessibility and potential problems while working on the
document; what people would expect to work with -- the base
computing environment -- isn't a set of requirements on
paper -- they expect an interactive tool that incorporates
the GLs and lets them put them in a web page

MM: similar issues to LS; different perspective; give them
something to do from scratch -- present them with something
to design from the beginning -- different use cases for the
guidelines -- creation versus evaluation and repair; we need
to succeed on both of those; wanted to make sure that there
is a relatively good definition of who is a designer and who
is a developer -- more than just defining an amateur as
someone using a tool -- many paths to writing a web page,
whether the skill is server-side development, Flash,
javascript -- issues are the same for presenting the
content; designers see the language as an obstacle to
design, so they hack it until the design is what they want;
giving them a document that has certain things in color, but
solid structure, ensure that they don't lose the structure
of the document as they change the presentation

JR: follow up on LS on role of tools -- would be interesting
to see which tools are being used and whether there are
patterns of violation and which allow WCAG2 to be complied
with; what are the liabilities of the tool, what are the
strengths;

WC: few comments -- accessibility issues we want to catch;
usability issues should be caught as people use WCAG;
interesting to look at those using tools -- a lot of people
use more than one tool; one thing we're doing with WCAG2 is
moving away from being HTML specific -- tests with SVG and
SMIL? One big concern is server-side generation -- make sure
that someone using a database to drive a site does it
accessibly -- are there tools that allow that?

MM: want to see if we can get subjects to self-identify
skill levels with certain technologies -- do they know what
should be done, but were prevented by a tool; new
technologies -- that's what I meant by designing from
scratch -- no need to go back to go forward (to SVG); skill
levels are built up

LS: think it important: if they are being asked to design a
page with accessibility in mind, if I were them I'd do it in
a plain, unaesthetic way, not using tools, coming out with
something accessible, but that would not reflect a real-life
situation -- have to insist on an amount of tool use and
somehow mimic the pressure for look and feel and bells and
whistles and gizmos; not sure how we can do that -- perhaps
can tell them that they are being tested on their ability to
build a corporate-type web site -- ability to show off
design skills

JW: very good proposal; important to recognize purpose of
testing is to determine how effectively people can read and
apply a set of GLs and associated documentation; think most
of the GLs should be built into software so not necessary to
refer to document directly, but work through authoring tool;
this type of test is an invaluable resource for improving
guidelines; WCAG has a number of constraints on it -- what
can be learned by testing discrete parts of WCAG2 --
modularized WCAG; interesting to see if could cover a range
of WCAG2 modules among those being used as subjects of the
study; will help evolution of the guidelines

GV: one other way to test is by looking at the tools that
build evaluation into them; a number of places where people
take WCAG and put them in the tool without phrasing from ER
and either massacre or slightly bend the rules -- which
rules don't people understand when they build that type of
device into an authoring tool

LS: another spin on the usability questions -- if have
scenario where given a task to do without having read WCAG
and then a site with the guidelines, what about asking
disabled users to test the usability/accessibility of the
test pages -- what did they prefer?  What worked?  What
didn't?  what info did they miss?  -- fulfilling the
guidelines to the letter doesn't always make the end user
happy -- test not only how easy it is to apply WCAG, but
how effective is the end result from an accessibility
viewpoint

WC: could look at usability testing of WCAG and of
techniques; database of techniques with some sort of
usability rating along with each technique -- test them
before recommending that people use them -- how formal the
process?  Need to separate from what DS and Helen Petrie are
doing; concerned about how we separate -- WCAG
understandable and usable by users of tools; testing of
tools themselves; authors and their needs -- main focus
should be that WCAG itself is usable -- would hope then that
an author or tool developer could use them -- is that a good
assumption?  Concerned that when add in all the variables of
tools, how we would deal with that

DS: think it is a very important piece of information -- in
proposal said wanted to test people in their own environment
(office, home) using the tools that they regularly use and
are comfortable using;

WC: makes me think that we aren't just defining GLs, but a
process -- some have commented that we need to do that
anyway -- are the GLs just a part of the process?  Can we
give people scope over use?

PB: more than one version of the guidelines to test -- if
test only one version, have good feedback on that version,
but with the current model of W3C documents they are long,
one page documents; my first encounter with WAI documents
was intimidating -- stinks of technicality; useful to have a
document that is more geared towards someone who doesn't
read technical documents, but just wants to know how to make
a site accessible; not suggesting that we disregard the W3C
format, but the idea of investigating another format is a
useful one

AR2: I'm concerned about the ecological validity of the
study -- beginners don't normally use paper documents; those
who are not professionals would never use a GL document;
unless rig an interactive interface to WCAG, won't have
ecological validity for users not familiar with underlying
technology; way to achieve is to limit study to
professionals used to reading specs and applying them --
those are the ones who will be using the actual guidelines,
in creating tools for others;

JW: comment subsumed by AR2's last comment; assume a
certain level of knowledge of underlying technologies is
necessary to provide detailed technical feedback;
appropriate to get comments from those without that type of
background -- whether those using tools without detailed
knowledge of the underlying tech can use the higher-level
documents that form WCAG2 to create accessible content and
evaluate content for accessibility; need to be clear about
the audiences of each level of WCAG2; if tools are developed
during CR or PR, one could assess how usable the tools are
with populations that aren't likely to read or apply WCAG
directly; sharpen parameters

GJR1: minuted by WC

AR1: what PB said; had a project where our developers tried
to follow WCAG -- he said "we are doing this for people, not
for the W3C's sake" -- spec usability is important -- REALLY
important -- dealing with artists and designers -- they get
nervous when you warn them about not doing this and doing
that in a specific manner; they shouldn't be led to hate
accessibility/usability -- need ways of communicating with
them; when you take WCAG, you can put more into your
creative process -- that's how we have to sell it to actual
designers

WC: ideas?

AR1: a lot of testing at our university -- I can exchange
contact info with DS; working with deaf students -- their
language is sign language -- that is their mother tongue

// ACTION AR1: discuss extending studies to Finnish academic
and professional community //

MM: what PB and AR1 are saying underscores need for us to go
after those who are not experts -- need to get this message
out to as wide an audience as possible; can't just ID
professionals in technologies -- will lead to WCAG being
oriented to a small group of people; address the broadest
audience possible; plain English version of WCAG to simplify
the lower levels; for time being, we are cornering ourselves
by focusing on people we believe are creating these sites

WC: we need to be working very closely with AU WG so that as
move forward, can get these things in the tools, not just on
paper/screen/ears; get to point in CR where to exit, have to
show implementation experience -- part may be that our
techniques are being included in tools, both authoring and
evaluation and repair tools; in usability testing, would be
useful to have a tool like A-Prompt that helped them use
WCAG

CR: would be really helpful for the tool makers if the GLs
can be easily interpreted -- they need to say what you
intend them to say -- make sure we have a good set of
technical GLs

GJR2: minuted by WC

CMN: one approach is to look at AU tool developers directly
as one of the target populations for usability testing of
the guidelines -- those are the people who interpret them,
produce the tools that are the way that most content will be
generated

WC: don't think that we're changing the levels -- have to
specify in the levels the information appropriate to that
level; main thing is ensuring that tools help people -- have
to ensure that tools are implementing WCAG in the best
manner possible

GJR3: minuted by WC

DS: written down a lot and will analyze the feedback; very
keen that we involve the sort of people who aren't technical
experts, but have the responsibility to look after the web
content (managers) -- are told must be accessible, but may
not know anything about either accessibility or underlying
technology

// WL leaves //
// 15 minute break commencing at 11:03 local time //

WC: summarize discussions from break

AR2: need an end user requirement document that says that
people with such and such a disability need to be able to do
this, then this is what content they need, this is what UAs
need to expose, this is what AU tools need to support; the
documents of the 3 WGs would be subordinate to the end user
requirement document and would be organized on that basis

CR: ER tools need a document to guide them, so need
coordination between AU and ER to develop them in concert
with WCAG

AR1: discussion with PB about coordinating usability studies
as mentioned earlier

LS: had work related conversations, but not necessarily
about this topic -- page map, HTML techniques, AR2 had a
great idea about moving forward to address problems with
Semitic languages; still wondering how deaf dyslexics could
benefit from our work

MM: talked with CR about syncing up and the DTD that we are
developing for Techniques in WCAG2 -- could help
coordination among diff groups

DS: a lot of small talk and discussion of usability testing
and of extending the studies and sharing resources,
knowledge and expertise

WC: 43 minutes until lunch -- like to come to some
conclusions -- next steps; how we can give HP and DS
feedback on how to proceed; have a few issues that need to
be taken back to WG on list; 2 issues: 1) overlap with tools
in defining a process for how someone might use a document
in concert with other tools or on their own; 2) audiences:
experienced users, naive users, etc. -- based on DS'
proposal, -- who thinks we should proceed with proposal as
is?  Just proceed or are there changes we need to make

AR2: specifically the proposal we are voting on is?

WC: what DS presented today

WC: 3 words or less how you would modify DS' proposal

LS: suggestion: proposal as a stage map -- line by line of
the proposal which stages are involved, then people can
write in what is necessary to add

WC: whiteboard it?  Brainstorming session rather than
discussion

DS: trying to think how I would make such a map

[PB in charge of whiteboard]

WC: information gathering on how people -- discover current
state of people's knowledge of WCAG;

DS: people responsible for putting up web content or telling
others to do so and web development professionals -- testing
their knowledge of accessibility issues -- how PWDs access
the web, importance of accessibility to an organization or
company; legal requirements; and their perceived effort in
making things accessible -- what resources do they think are
required to create accessible resources -- don't know if
this type of info has ever been gathered before; attitudes
and knowledge about accessibility

GV: like to see some expansion of coverage of the issue of
how expert evaluation fits in -- easy to get off track in
that area; other part is that it wasn't clear from the
discussion exactly how we're going to separate -- good thing
to get straight up front -- measure twice and cut once; most
people are using tools today -- unless you know exactly why
they didn't do something -- maybe they didn't think to look
under the advanced or annotations button to add an
annotation to a graphic using the tool familiar to them --
what's missing: 1) have to have a better method for handling
and finding out why they didn't do something -- that
feedback should be obtained instantaneously to capture why
what was done was done

WC: GV's comment is on the second part of the methodology --
something to highlight; not making firm decisions today --
just a sample population of the WG -- need more discussion
from the WG after a chance to read proposal--next steps: put
on agenda for next week's telecon and have more discussion
on the list; today I'd like to quickly gather the thoughts
people have right now, capture them, and then discuss them
on list and at next week's telecon; comment quickly on both
parts of process

PB: helpful to see all stages and points in proposal so as
to organize my own thoughts -- at least the main points

WC: need to spend more time with proposal

DS: sent to list this morning -- will resend corrected
version to list shortly

WC: just get initial reactions today

JW: like to say that I've reviewed the summary of the
proposal--agree with GV's comments; if we can clarify issues
around tool usage and make sure about the populations of web
content developers, it will be ok to proceed along the lines
already set forth; should also work out what the resource
constraints are and where we can fit that discussion into
our process -- need to wait until the full report is
available to assess usability; run the study when WCAG2 is
ready to go to CR

// OPEN ISSUES ARISING FROM DS' PRESENTATION:

  1.   tools -- how are we going to do usability testing on
     the guidelines with or without tools -- how to separate?

  2.   timelines -- where in the process does usability
     testing occur and how does that fit in with going to CR? //

WC: any other issues we need to take to WG?

AR1: is it possible to test students in concert with
developers

DS: students of web design or students who use web content
to communicate their ideas; students who didn't come from
computing background but who happened to write web pages
currently

WC: general issue -- identify audience -- groups being
tested an open issue?

AR1: yes, many types of students -- do they really get a
grasp of what accessibility is -- what are they doing a year
from the first exposure to WCAG

// GJR's machine goes wonky/CMN picks up the slack //
// GJR resumes minuting //

WC: process -- timelines;

DS: six-month project -- that's flexible; haven't approached
any potential participants; plan is to submit a research
proposal for funding -- may take a bit of time; no rush as
of yet -- happy to wait for a thorough discussion with the
GL WG; the six-month time frame could be adjusted

WC: six months from when funding comes through?

DS: yes

WC: info gathered going to be usable across WAI domain;
talked about iterative process where we would give you an
initial working draft, get feedback from you, then make
changes

DS: proposal could be done with a small pilot group with
some of the limitations pointed out this morning (all from
UK, all with similar experiences, similar institutions) --
smaller group running through each iteration to see if
viable and contains what they are looking for, and then feed
that back to the WG; could be a pilot study with initial
feedback --that's possible

WC: if we could have separate usability studies going on in
Asian countries, as well as Israel, the US, elsewhere -- in
general do you have feedback about other groups with whom we
could coordinate testing?

LS: could hook you up with Hebrew testers

// AR2 has already suggested contacts //

WC: will be translating documents, but not during the
development of them

LS: can limit pilot to English speaking developers
developing sites in non-English natural languages

WC: LS might be able to do informal studies; perhaps we can
outline a streamlined process to give to people around the
world

DS: a spin-off of our deliverables -- developing a
methodology

AR1: have some really useful contact info [will give to DS]

PB: idea of breaking it down into 3 audiences of web
developers, AU tool developers, and ER tool developers would
be very valuable

// OPEN ISSUE: audiences and subjects //

LS: can take whatever feedback we can get -- what we do with
feedback is up to us; if have feedback, can only help us;
are we showing to people who aren't intended to read that
level anyway -- feedback can only help; need to document the
assumptions or framework that each "testee" comes with --
this is not someone who normally reads technical material,
not someone who directly does development (of content or
tools)

GV: usually what happens is that you discover things and
share them at any time; have to have all diff users use it
to discover if it is a good idea; usability feedback is
valuable even if testing performed on small groups

KHS: ongoing process -- not something that's ever "done"

WC: need testing with each public WD -- every 3 months -- to
see if changes made since the last WD work; if a usability
study takes 6 months and we are talking about putting out a
WD every 3 months, having more tests would be good

KHS: Steven Pemberton's people -- good to have testing done
in house as well

WC: series of open issues to take back to WG; proposal to
read and review; what happens at this point is DS and HP
move forward on funding front, while WG works on providing
feedback; earliest discussion at next week's telecon --
probably won't hear from us for a few weeks

DS: in terms of funding, could there be a contribution from
W3C or have to fund entirely by ourselves

WC: have to discuss with Judy Brewer

DS: not asking for a huge amount

KHS: new usability group under S Pemberton may have funding
resources

WC: Steve Pemberton -- one of our hosts and chair of HTML WG
-- has proposed that W3C start usability group to look at
usability issues of the web -- as we are looking at
accessibility issues on the web

KHS: his email mentioned funding

WC: talk with him at dinner tonight

AR2: accessibility is usually the prow -- leading edge -- of
usability and productivity issues

WC: that's a whole `nother kettle'o'fish which I don't want
to talk about now

MM: will the usability IG be about usability of
specifications or web usability for users -- which is more
important to that WG?

WC: no--not usability of W3C content --that's Quality
Assurance (QA) -- DD is co-lead of activity with Karl Dubost

MM: how to make authors use specifications and the language,
versus how to help users use sites constructed using specs
and markup languages

WC: activity similar to what we are doing with accessibility

MM: still a line -- who the end product is for

WC: QA looking at usability, accessibility, DI, and
internationalization of W3C documents

DD: W3C Publication Rules being extended to include more
quality work, such as readability, usability of spec,
inclusion of tutorial, test suites, etc. -- usability of a
spec, closest thing we have is UA work in WAI -- don't have
a usability WG, which is what SP proposed

WC: is an IG, but yes

// the WG thanks DS //

// BREAK FOR LUNCH at 12:24 local time //

// CR takes a picture of the WG members in attendance -- a
highly naturalistic pose, not at all staged //

// GJR shuts up and starts minuting //
// that last comment was for your benefit, Jan! //

// GV and JW rejoin by phone //

WC: next on the agenda: hour and a half to discuss
testability of checkpoints -- in requirements document

  WCAG2 Requirements:
  http://www.w3.org/WAI/GL/wcag20-requirements

  URI of draft being discussed: 28 March 2001
  http://www.w3.org/WAI/GL/WCAG20/WD-WCAG20-20010328.html

WC: Checkpoint 2.4 --

JW: useful to have a format -- a separate question from
separating the minimal requirement from something more than
a minimal requirement; a format to be followed in the
document, and distinguishing between minimums and more
advanced possibilities; part of the overall document?
Useful to do first -- make the distinction between minimal
and more detailed or advanced implementations; will also
want to maintain some sort of format so that information is
structured

KHS: good idea because separating it into chunks is good not
only for back end but makes it easier for people to use

WC: chunk but sub-chunk of a bigger chunk

JW: if did in XML, could have presented with or without
labels

WC: easier extraction -- if you just want rationale, you
could get just the rationale

JW: think it is time we do it

WC: hard finding minimum requirements

JW: for 1.1 we could require the existence of a text
equivalent which is related, in markup or in the data model,
to that of which it is an equivalent -- giving the same
functionality or communicating the same information as the
visual or auditory content -- could specify minimum
conformance criteria for each checkpoint

LS: way to avoid this becoming obtuse is to put minimum
requirements in technology specific techniques doc, so that
has a practical application -- that's what minimum
requirements are

GV: minimum requirements sound like P1 to me; saying that
you have to have something in ALT is the essential part;
meaningful and useful is the second part; worried that we
end up choosing what is easily testable over what is
important

JW: not a solution but something that might be done

WC: where would minimum requirements go -- checkpoint or
checkpoint solution level; minimum requirements are minimum
requirements to satisfy a checkpoint regardless of priority
-- minimum thing you need to do to satisfy a checkpoint; how
do we state that something is easily testable

GV: could creep in if we aren't careful in formulation

WC: how can we state checkpoints in a way that makes them
easy to test -- been restating them to make them easier to
understand, but does that also make it easier or harder to
test

LS: first, the process is technique specific/technology
specific test cases -- have a QA person who sets up test
cases -- can volunteer that person to establish test
scenarios -- should be done with black box testing --
regular paraphernalia of usability testing; should be
designed for a QA department -- follow same paradigms that
QA department does -- they do site testing, so let's give
them accessibility criteria; most big sites have site
testing

WC: big sites

LS: but if they had the paradigm on a plate, they could use it

WC: integrate with WCAG?

LS: could have basic requirements which we could hand over
to the QA people and link to

WC: ideally, as part of WCAG we need to ensure that people
can use it to see if they have passed it -- having something
separate doesn't make WCAG better

// ACTION LS: propose a QA activity based on discussion //

CMN: to make it easy to see if a piece of content meets a
checkpoint, provide code examples that say "this is how you
should code" or provide a functional practice/illustration;
in technology-specific work, that's what you want to do --
"in HTML use the ALT attribute -- it is possible to describe
the graphic in the text, but that is not a good practice
because..." -- wooly phrasing, but we need a "if I use this
document, did I create accessible content"; there will be
examples at the technology level where we say "use this type
of code construct"; at the tech-independent level, quite
often you won't -- provide a functional description of what
needs to be achieved
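
A small illustration of the technology-specific contrast CMN
describes -- carrying the text equivalent in the ALT
attribute versus describing the graphic only in surrounding
text (hypothetical markup, invented for illustration):

  <!-- preferred in HTML: the text equivalent travels with
       the image via the ALT attribute -->
  <img src="chart.gif" alt="Bar chart: 2001 sales by quarter">

  <!-- possible, but poorer practice: the description lives
       only in the surrounding prose, not on the image -->
  <img src="chart.gif">
  <p>The chart above shows 2001 sales by quarter.</p>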

WC: both levels?

CMN: yes

CR: boils down to a level of detail -- more detail than you
want to get into in a document like this, but to test, you
have to be very precise

AR2: need to have, at the highest level of conformance,
something that is ecologically valid -- actual testing with
a human subject who has the disability for which we are
attempting to find solutions; if you tie to technology,
you're stuck -- that's what happened with WCAG1 and
client-side image maps -- better technology became
available, but we still have a requirement to use
client-side image maps even though something better
superseded it; with the technology at the time of
publication of these guidelines, this is the recommendation
for this technology, but the real standard is: will a PWD be
able to use the page
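
For reference, the WCAG1-era client-side image map technique
AR2 mentions looks roughly like this (hypothetical markup,
invented for illustration):

  <img src="navbar.gif" alt="Site navigation" usemap="#navmap">
  <map name="navmap">
    <area shape="rect" coords="0,0,99,39"
          href="home.html" alt="Home">
    <area shape="rect" coords="100,0,199,39"
          href="products.html" alt="Products">
  </map>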

CMN: in making this thing usable generally, that always has
to be our guide -- are we making this stuff accessible to
PWDs  -- have to be careful that what we are recommending is
in fact beneficial; not all testers will have access to the
range of people with disabilities who will have access to
those people's content; what would happen if you were
disabled -- using a tool like the WAVE -- so that authors
can perceive how PWDs perceive and interact with their
products; take knowledge gained from testing so that the
next person down the line who can't do as much testing, is
still assured of the quality of WCAG -- distilling for them
the user testing

LS: the National Business and Disability Council might
allow us to go in and perform testing there -- they have
almost every device available; if we work out a QA paradigm
that isn't technology-specific and another that is
non-technical, we might be able to do proper testing there
for the paradigms that have been defined -- if WAI/W3C can
join in, that would be good

AR2: human subject testing should be available as an
alternative way to qualify a site or a document -- if
someone finds a way to make a site accessible with
technology that wasn't available at the time, should be able
to claim conformance on basis of human testing

MM: only concern is that people who are seeking compliance
with respect to legal requirements especially, would be
creating things they believe satisfy WCAG, but which are
inaccessible for other reasons -- legacy browser support,
older computers, etc. -- having technologies that are proven
to be accessible is good, but WAI needs to be on top of
that, not delegating it to the general public

AR2: good idea -- if have alternative way to qualify a site,
application, may need to have a filter at the W3C level so
that it cannot be abused; could have some kind of volunteer
organization in open source community style that would be
available as a board of referees or jury

WC: not certifying anyone -- giving guidance to people who
are developing sites, responsible for overseeing a site, --
how can we give the best guidance to people

AR2: what is necessary in the doc is to say head-first this
is how you will test it with available technologies but
ultimate goal is to make accessible to PWDs -- if you can
show to us

GJR3: minuted by WC

CMN: difficulty I have with end user testing as
justification is that it is extremely fallible -- example of
a blind guy who demonstrated a site, but didn't show 2 or 3
major features, because he had no idea they were there;
small group with limited testing service/skills/capacity is
going to make mistakes; board of testers is a helpful
service, but not rigorous enough to provide the information;
we need a lot of end user testing to ensure that we've
encountered all of the situations

MM: ultimately, the idea of an ombudsman wouldn't be for an
org to buy indulgences, but to contribute to the form of the
techniques documents, which are living documents -- need to
work on ways of getting people to go public when they claim
they have come up with a better mousetrap

AR1: problems with testing with deaf people is that there's
no compact minority of deaf people having the same aims --
we are all individuals; should have end user testing, but
has to be properly planned;

WC: not saying usability testing with end user is bad idea
at all and should encourage more in document, but have to
give guidance to people so that they can determine if they
have met WCAG; developers of sites, developers of site
construction tools, and evaluators

JW: question you originally raised was do we want to
distinguish between minimal and advanced requirements for
each checkpoint and that question hasn't yet been addressed;
that's the issue you raised, and that's what we should
suggest; LS at technology specific level, CMN at both levels
-- have to decide if this is an appropriate distinction to
make and clarify surrounding issues

LS: belongs in techniques level, but could also live in top
level as well; end user testing can be a practical
requirement -- though I'm not a QA person myself, from what
I understand or have absorbed via osmosis is to create a
paradigm that ensures that each route can be traversed by
people with all types of disabilities; black box paradigm
needs to be followed so that situation CMN described about
blind demonstrator isn't repeated; wouldn't certify them,

JW: not talking about testing now, but whether or not we
want minimum and advanced requirements for each checkpoint

LS: take consistent navigation -- if have testing paradigm
that says ensure that someone from the following groups can
traverse all of your site, that to me is a minimal
requirement

JW: that presupposes that it is going to be tested with
limited groups of users;

AR2: do see this as a matter of level -- at the highest
level, should say that any technology-specific tests are
just proxy tests -- if it can be shown that a site is truly
accessible, then using technology-specific techniques could
be bypassed

WC: last checkpoint solution is "here's what I did, here's
-- would put at the front -- first of every technique; one
example of human testing doesn't meet any of the
psychological design criteria that are used in testing;
technology based testing leads to inferior products, where
ecological testing would have been the better more
efficacious route

WC: we understand your point

JR: wondering if the best place to test minimal requirements
is at whatever level you decide to put priority levels,
whenever you get around to that

MM: have a structure of priorities in existence -- most
prudent way to go about this, when apply priority levels,
apply them to checkpoint solutions, then derive priorities
for checkpoint level based on an extrapolation of that

WC: priorities are what you focus on

MM: priorities are "less than impossible", "this makes it
better", "this isn't necessary" -- those mean that P1 is a
minimum requirement

WC: can have minimum requirement for each checkpoint despite
priority level -- P1 is not the same as the minimum
requirement we are discussing right now

CMN: requirement as used in ATAG and UAAG is "the minimum
you have to do to satisfy this checkpoint" -- there
precisely for testing purposes; need a baseline that people
can use to say "I've reached the goal" -- prefer to have
functional requirement for each checkpoint -- for technology
specific things, code is better; where I'd like to see the
strong emphasis on user testing is the "How to Use This
Document" section -- test, test, and test again; do tests
with users as minimum requirement is kind of silly -- if it
works with a specific technology it might not work for
others using similar, but different technology; testing is
fairly complicated -- most people who will be using this
document won't be experts or even competent testers

// WL rejoins //

GJR4: minuted by WC

AR1: question of priorities is still open -- testers in
Finland need a minimal requirement; need technology specific
resources

JW: issues: 1. value in specifying very clearly what is
needed to satisfy a checkpoint -- not convinced that the
idea of a minimal conformance requirement and a less minimal
requirement is going to work -- take 1.1.1 -- the
conformance requirement would have 3 parts: 1. a textual
equivalent for non-text content, 2. related to the non-text
content for which it is the equivalent, 3. the text
equivalent contains the same information or performs the
same function as the non-textual original -- tripartite
criteria -- if you meet those 3, you've met the requirement;
take 2.2.4 -- suggestion that there be a minimum of a 10
second delay before any reaction is required from the user,
as an absolute threshold; varies from checkpoint to
checkpoint; go back to 1.1.1 -- in the case of complex
audio/graphical content, the minimal is give it a label, the
less minimal is provide a description or caption that
conveys the same info (LONGDESC); can't actually provide
captions that convey the same meaning as a Beethoven
symphony -- not totally sure we can apply minima across all
checkpoints -- is this a feasible approach?  If so, need to
assign some action items
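
A minimal sketch of the two levels JW distinguishes for
1.1.1, in HTML (hypothetical markup; the file names are
invented):

  <!-- minimal: a short label for complex non-text content -->
  <img src="sales.gif" alt="Chart of 2001 sales by quarter">

  <!-- less minimal: a full description linked via LONGDESC -->
  <img src="sales.gif" alt="Chart of 2001 sales by quarter"
       longdesc="sales-description.html">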

WC: there are a couple of layers: guidelines, checkpoints
and CP solutions -- difficult to define minimums for
checkpoints, because they are expressions of minimal
requirements, but might want to add a functional
requirement [WC explains functional requirements by walking
through some checkpoints]; at the checkpoint solution level,
those might be the minimal requirements for the checkpoint
level

JW: had an issue to write rationale for everything -- this
is one way of doing it -- what and why in a couple of
sentences; not satisfied that all of the conformance
requirements or criteria that determine if CP has been
satisfied belong at the checkpoint solution or technique
level; could specify some of them at the general level as I
did previously with my tripartite requirement statement for
1.1.1

PB: true that some checkpoints don't have minimums;
introduces another level of complexity -- we have GLs,
rationale/functional requirements, priority levels, a
cascading level of granularity -- all have benefits for good
reason, but add to complexity -- as we are looking through
ATAG I can understand why they are there, but it had to be
explained to me, and if that is the case, that is
problematic -- potential confusion between minimum
requirements and priority levels is an easily confusing set
of conflicts

LS: have 2 potential proposals to get around these problems
-- my problem is that when people see minimal requirements,
they see "this is all I have to do and I don't need to
bother with the rest" -- no matter how many times we say
this is a worst-case scenario; the example JW gave, where
the text equivalent is merely a label, undermines capturing
the functionality in the text equivalent -- may have lost it
where it wasn't necessary; 2 ways around: "worst case
implementation" instead of "minimum", and a "typical", which
is what we consider ideal -- there is no best case -- that's
the one we haven't thought of yet; the other suggestion is
to provide this as a separate document so that we've removed
the complexity -- that would remove PB's problem, keeping
the mandate to state it clearly, but it is a troubleshooting
document -- no matter what you do you can't get single-A, so
here are some minimal solutions for you

CMN: sensitive to the comment about increasing complexity;
ATAG is a challenge -- to use it, you need to know it and
WCAG; in ATAG, we have a summed-up version of all the pieces
-- look at a checkpoint, get an idea of what it asks for,
why, in a sentence, and the least acceptable work in a
sentence (agree that there is a risk that people will say
"oh, I only have to do the minimum"); having minima helps
because sometimes people push for a low minimum, but
sometimes they push for a high one, so it is good to put
them on the table to get consensus; 2 types of techniques --
implementation techniques and a new document containing
"Evaluation Techniques" -- how can I evaluate an authoring
tool to see if it meets the requirements; the minimum
requirements stuff is the BARE minimum piece of work you can
do -- answers the question "how do I know I've satisfied the
checkpoint"; we will never ever write GLs that people
understand first-off -- have to give them a few bites of the
cherry;

GJR5: minuted by WC

GV: when we look at WCAG as a whole and try to establish
minima, in some items we may say in this situation you need
to do this, and the minimum you need to do is this, which
will make it more usable; the minimum you must do is that
which you must do to satisfy the guidelines as a whole --
there shouldn't be any part of a P1 guideline that isn't
essential; if that is what we are supposed to be doing, why
are we having this discussion -- people must think that
there is something that we are missing -- think that what
you are looking at is sufficiency, which is different from a
minimum -- do the checkpoint -- there are many ways to meet
it, and the following are sufficient, and here are some
other things you can do -- if you did the following things,
you would satisfy this checkpoint -- can be tricky; need to
word the original guidelines so that they are clear --
should be automatically testable

JW: GV already said part of what I wanted to say --
sufficiency is the concept, not minima -- if the tripartite
criteria are satisfied for 1.1.1, that is sufficient, but
more could be done; we should go through the guidelines to
see if a sufficiency requirement would work -- one solution
would be to divide up the sufficiency criteria -- maybe we
need to assign an action item to go through them

WL: you just volunteered for the action item

JW: well, not really, but if you want to volunteer, William

WL: priority level is the equivalent of what you just said,
so I think the work has already been done

// 30 MINUTE BREAK //

WC: only have an hour -- want to discuss some topics while
JR and CR are still here; outstanding issues from previous
topics will be dealt with by me

// ACTION WC: organize open issues and fruits of discussion
into cogent form and post to list //

// WC discusses whether WL and GV will stay on the phone //

// WC comes clean -- we want to be outside //
// GV gives the plan his blessing //

*** BREAKOUT GROUPS ***
  HTML -- leader MM
    WC, CR, KHS, PB

  Multimedia -- leader CMN
    JR, AR2, AR1, GJR, LS

// refer to CMN's minutes of multimedia breakout session,
along with GJR's partial/supplemental notes from the
breakout session //
