Minutes 23 August 1999

Here are the notes from our last telecon.  Many thanks to Gregory!


WAI ER-IG/WG Teleconference
23 August 1999

  chair: Len Kasday
  scribe: Gregory J. Rosmaita
  line: MIT Bridge: +1 617 258 7910
  time: 10 AM (Boston time)

  HB: Harvey Bingham, Yuri Rubinsky Insight Foundation
  MC: Michael Cooper, CAST
  DD: Daniel Dardailler, W3C
  LK: Len Kasday, Temple University
  WL: William Loughborough, Smith-Kettlewell
  BM: Brian Matheny, CAST
  CR: Chris Ridpath, UToronto
  GJR: Gregory J. Rosmaita, VICUG NYC

Action Items
DD: to check charter modification process
New Work Item:
GJR and LK: Specification for filtering tools.

1. Charter Review
  A. Scope of ER-IG
  B. Deliverables
2. Discussion of the latest version of the ER Techniques Document (ERT)
  A. ERT WD: <http://www.w3.org/WAI/ER/IG/ert.html>
  B. ER mail archive:
3. Abbreviations Used:
  ERT: Evaluation & Repair Techniques Document
  AT: assistive technology

1. Discussion of future deliverables and timetable
2. Continued discussion of ER Techniques document.
3. Scheduling and Timing of Future Meetings.

1. Preliminary Discussion

GJR/HB: brief discussion of testing of UA guidelines using
AT in conjunction with "mainstream" browsers (i.e. MSIE,
Opera, Netscape, Lynx) and using specialized browsers
(PWWebSpeak, HPR, etc.)

LK: results will be useful for ER work -- not to mention
anyone writing a web page

2. Start of Meeting

LK: the first thing I'd like to do is review what we are
currently doing in general and how it compares with the
charter; there are some things in the charter which we are
not currently doing -- we either need to plan to do them or
eliminate them from the charter; don't know how many on the
call have had the chance to read through the charter;
received comments from HB

DD: since the majority of comments were about syntax and
grammar, I've updated the charter document in light of HB's
comments

LK: good; ok, here are the major things that are in the
charter, but which we are not currently doing

1. methodology: currently, we're working in the same manner
as other WAI working groups -- getting together via
telephone to make decisions, using the list for discussion
and proposals, and getting informal input from specific
people as well as from the list; the original charter had an
item calling for more formal experimentation, where we would
have users judge 2 or 3 different versions of a page; the
intent or spirit of that item was to set up controlled
testing of tools and to identify more clearly what poses
problems -- CR is doing that with the color test, but as a
group, we haven't been doing any of that with web filtering
tools, for example. what are people's feelings?

WL: do you want to formalize the methodology ER uses?

LK: yes -- if we have a filtering tool, for example, that
does something to a page like decolumnizes it, we would have
2 versions of what the tool might do--evaluate which is best
means of accomplishing task;

GJR: do you mean that we'd have competing versions of a
single tool or filter from what perspective -- 2 different
interfaces, or 2 different solutions/implementations?

LK: would depend, but, in general, the former -- 2 different
interfaces which accomplish the same thing -- the eval tool
would have comparison screens, we'd then get formal feedback
to see which people prefer

WL: how many responses so far to CR's report form?

CR: 250 to 300 responses

DD: I'm trying to understand how this will impact the work
that we are doing on the Techniques for Eval document?

LK: well, we've had some discussion over the format of the
output; we have general statements of what we want -- if we
were to go the way WAI working groups normally go, we would
have a general document, CR would create a version of the
tool to implement the document, and then we would get
feedback on it -- and on Bobby, as well; the alternative is
to have specific ideas about what the output should look
like; we could then mark up a couple of pages, showing what
the output would look like, and then ask people "which do
you think is better?"; could do it "brute force" by hand on
pages, but to have a true test, we'd need a working version
on a test site or an existing site -- CR, with the color
experiment, is the data being collected into a database or
pulled out by hand?

CR: every time someone finishes the test, the results are
stored in a file at the site as an Excel spreadsheet;
results are emailed to Tamara White, the researcher who is
analyzing the data

// BM joins

LK: responses automatically converted to Excel spreadsheet
form and added to spreadsheet?

CR: correct

LK: what kind of software are you using to do the
conversion? is it a general purpose thing?

CR: actually, just a cut and paste

LK: you mean a human takes data and puts into spreadsheet?

CR: right; we got a lot more responses than we thought we
might, so we had an intern do cut-and-paste
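The manual cut-and-paste step CR describes could, in principle, be automated; a minimal sketch, assuming each submission arrives as simple "field: value" lines (the field names in the example are invented):

```python
# Hypothetical sketch: convert emailed test responses (assumed to be
# "field: value" lines) into one CSV file a spreadsheet program can open.
import csv

def responses_to_csv(responses, out_path):
    """Each item in `responses` is the text of one submission."""
    rows = []
    for text in responses:
        row = {}
        for line in text.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                row[key.strip()] = value.strip()
        if row:
            rows.append(row)
    # Collect every field name seen, preserving first-seen order,
    # so submissions with extra fields still fit one header row.
    fields = []
    for row in rows:
        for key in row:
            if key not in fields:
                fields.append(key)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, restval="")
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

# Example: two submissions with slightly different fields.
count = responses_to_csv(
    ["name: A\nscore: 3", "name: B\nscore: 5\ncomment: hard to read"],
    "responses.csv",
)
```

Excel opens the resulting CSV directly, so the intern's step would reduce to saving the mail bodies to files.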

LK: at any rate, what do people think about using a similar
scenario to evaluate the output of an eval tool?

CR: a lot of work to set up an experiment, run it, and
analyze it; simpler to do user testing; if we can get 6 or
12 users and let them use the software, that can often give
better results a lot faster

LK: so, instead of simulating in advance, come up with
software, get input, and redesign as necessary?

CR: right -- concentrate on testing, rather than
experimentation

MC: experimentation is a valuable thing; ultimately, you
want actual user data, and where the issues are complicated,
experimentation gives a developer specific directions in
which to work, so that a beta product can be created that
incorporates such suggestions; so, I guess what I'm saying
is that I can see the need both for experimentation before
going to beta, and for going straight to beta and then
fine-tuning or overhauling, if need be -- so, I think we
might want to put in some sort of rider that lets us abandon
the outlined methodology for some issues

WL: if we have 2 versions, side-by-side, the experiment
would have to have all or part of them presented and
compared -- doesn't seem feasible

MC: experimenting out-of-context of tool is an empirical
approach we should try to use when we can

LK: are there any other examples?  we have one: the color
test -- any other case where experimentation might be useful

DD: still trying to understand general value of that
methodology; providing solution and asking for comments is
usual W3C methodology --  how does this all relate to the
charter discussion?

LK: charter explicitly states that there will be experiments
in Section 2.1.3, ordered list items 2 and 3

DD: ok, let me see -- in the case where good to do that,
asking for feedback on specific things, ok -- I think that
is generally what we are doing with the report tool: develop
a beta, circulate a request to test it to the WAI-IG, look
at feedback, fine-tune the tool, and try again --
personally, I don't want to provide 2 or 10 versions of a
tool -- just one, which can then be refined; to me, not a
central part of the charter -- if we have the opportunity,
fine, but we should be gathering information about what the
tool should do, and making decisions how it should do it
before we build a beta and ask for input

LK: OK, so we can leave experimentation in the charter, but
in our own minds, and possibly if rewrite or revise charter,
we can note that it will be done as needed

DD: maybe we could have something to the effect "methodology
is to produce a tool that we circulate to IG and other
users, keep track of user feedback, that is then plowed back
into tool" -- with the reporting tool, the reports sent to
me, I'm tracking comments, and bringing them back to the
group working on the tool for implementation

LK: I know that Judy just put out her call a few days ago,
but how many comments have you received so far?

DD: 2 or 3

LK: should we put together a note on the page giving a
paragraph to explain methodology?

WL: seems self-explanatory to me; doesn't need more work

LK: anyone feel we should do anything other than leave it
as-is?

DD: the charter officially started June '98 for a 2-year
duration; wondering if it would be OK to update the charter
in the middle of that period -- while we are at it, we should
check for other things that need to be fixed -- for
instance, we refer to the Page Author Guidelines, which, of
course, are now the Web Content Accessibility Guidelines,
but those are minor updates; what we need to do is to
identify the important issues, such as, should we give more
detail on report tool evaluation techniques?  -- in any
event, I need to check with W3C process gurus to see what
implications there are to changing the charter, without
major revisions, in the middle of the stated duration; if we
can update the charter in middle of the WG's life without
too much trouble, we could use the opportunity to clarify
this WG's methodology; as well as better define the overall
ER WG activity -- the ER WG is intended to be comprised of
implementers, experts, and developers; as it functions now,
the ER WG is not formally a group --  just a few people
coordinating loosely; there are people implementing things,
but not functioning as a W3C working group; whenever there
is common ground for the WG to meet, it occurs under the IG
umbrella, which is evaluation, not development; the WG part
of ER needs to reorganize to be more structured, rather than
a loose collaboration of members of the IG; that should be
the focus of
charter revision discussion -- to detail the activities of
ER-WG, because we know more now than we did in June '98 what
we want to do; part of this needs to be done through
discussion with WAI CG, part is conformance with relevant
W3C process rules

LK: ok -- bottom line is there isn't any groundswell to
change the stated methodology or the way we are actually
doing things
Action Item:
DD will find out, from a process point of view, how big a
deal it is to change things; then we can decide whether it
is worth it or not

DD: help from other WGs may determine how we work -- ER-WG
may be in charge of unifying tools

LK: if having formal group under W3C working on tool, then
will be need to have more formal process

DD: yes, but for now, I have made minor modifications to the
charter, as per HB's comments, since they mostly had to do
with grammar and syntax

LK: next charter item: section 8.2 covers relations to
external groups: e.g. groups of users; formally recognized
groups, such as AFB and United Cerebral Palsy; web designer
and developer groups, etc.

HB: did you advertise the color tool to any such group --
did you submit a request to respond to it to low vision
groups?

CR: put out a request for review on a number of targeted
lists

HB: so you're not just depending on WAI-people to respond?

CR: no, trying to get as many as possible to test it

LK: have you formally approached an identified group?  have
you approached contact persons for such groups?

CR: nothing formal

LK: not suggesting that you need to, or should -- just an
idea -- seems that we are not doing what the charter says;
not going formally to groups (site developers, tool
vendors, etc.)

WL: request probably reached Kynn Bartlett of HWG by one
channel or another

LK: right, but we haven't formally said "OK. HWG -- please
test this at your sites" -- haven't formally gone to groups
and said "will your group participate with us in such-and-
such a study" and make it an HWG activity; not saying we
should, but working with external groups in such a manner is
in the charter -- is this something we should be doing more
of; focusing efforts on it, or no?

CR: the more input, the better

GJR: first thing we need to do, and maybe this is more of an
EO or CG thing, is to identify specifically which groups we
should approach; I know that the American Foundation for the
Blind (AFB) has a Career Technology Database, which is
comprised of user profiles, which include such information
as: what screen reader do you use, what refreshable braille
display, what screen magnification program, do you have
access to the internet, what do you spend most of your time
doing online, do you surf the web, what browser or browsers
do you use, etc.; from time to time, the database is used
for specific targeting -- finding, for example, a pool of
low vision users using screen magnifier X in conjunction
with browser Y -- for evaluation purposes, as well as for
targeted testing of specific items; I also know that other
organizations have similar projects for their target group
or groups -- I suggest that, as a joint ER-EO sort of thing
that we ID which groups which maintain such databases, and
approach them to ascertain if they would be willing to
canvass the people in the database to find out if they would
be willing to test eval tools and sites; can get a very
specific pool of testers in this manner, whose feedback
could be plowed back into the ERT, to individual tool
developers, feed to other WAI WGs, etc.; it would also be a
way of more closely tying in groups like AFB and UCP to the
WAI's work

LK: has EO been doing much of this?

GJR: not that I am aware of, but then, my participation in
the EO hasn't been at a very high level for a few months --
mostly just monitoring the list and feeding suggestions to
people who I know will be on the EO calls

WL: Judy sort of does it

GJR: yes, but right now, such approaches are made solely at
the chair's discretion, right?

HB: yes--as a group, EO hasn't requested specific help or
testing from outside organizations which collect such data,
but we probably should

LK: what sort of things would we want to ask these groups?

WL: ask Judy the question in the next WAI CG meeting -- who
do we ask, how do we do it, who should do it, etc

LK: ok, but the question I'm posing is: if we could go to a
number of groups, what would we want to ask them to do?

WL: take color test; evaluate the reporting form

LK: anywhere we ask for feedback on the IG, we would also
ask for feedback from identified groups; CR, do you need any
more participants for the color test?

CR: going to run again with different focus; don't need more

LK: what about in terms of browsers; survey of browsers and
AT for UA WG -- do we have any idea of what screen readers
are being used with what browsers, etc.?

GJR: no, but such info could be quickly obtained from the
AFB Career Technology database, which is sometimes used to
organize and execute focused studies; could, as CR has done,
target specific narrowly focused listservs to canvass for
participants
LK: OK, so we'd start with a list of questions to ask
organizations, can go to them in coordination with EO to get
info; what about the other types of groups listed in section
8.2?  web writers' guilds, site developers, tool
manufacturers, etc.

WL: is Rob Neff part of the ER-IG?

LK: is he part of an organization of federal webmasters?

WL: yes

GJR: not sure if his position within such an org is one of
spiritual leadership, or whether he is an official

LK: are there any other people we should recruit?

WL: look on WAI-IG -- anyone asking for advice to make site
Triple-A compliant could and should be recruited

LK: ok, let's wrap up this portion of the discussion by
saying we will push to do what is in Section 8 of the
charter more; the final major issue in terms of the Charter
is that the milestones listed are obsolete

HB: charter should also have date on it

LK: has date at end

HB: should have "document last revised date" as well as
"date effective"

LK: want to leave some time to discuss major questions going
on regarding the ERT document, but before that, are there
any more comments about the charter? -- oh, I have one,
which I outlined in an email I sent out a week or so ago, on
additional work items and other issues we should be
addressing -- in particular, filtering tools

[scribe's note: LK's message is archived at:
end scribe's note]

LK: the ERT has a section on filtering, but that is not a
main emphasis of the document -- my question to the group
is: do we want a separate document on filtering techniques?
e.g., taking bulleted lists with icons and transforming them
into actual bulleted lists?  CR, would you like those folded
into the ERT doc, or kept separate so as not to hold up the
ERT?

CR: prefer to have separate -- could fold in afterward
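The kind of transform LK mentions -- turning a list faked with bullet images into a real HTML list -- can be sketched as a pattern rewrite. This is a toy illustration, not anything from the ERT; the markup pattern it matches (a bullet `<img>`, item text, then `<br>`) is an assumption, and real pages vary widely.

```python
# Toy filtering transform: rewrite a "list" faked with bullet images
# and <br> tags into a real <ul>/<li> list. The pattern matched here
# (an <img> whose src contains "bullet", then item text, then <br>)
# is an assumption for illustration only.
import re

ITEM = re.compile(
    r'<img[^>]*src="[^"]*bullet[^"]*"[^>]*>\s*(.*?)\s*<br\s*/?>',
    re.IGNORECASE | re.DOTALL,
)

def listify(fragment):
    """Rewrite a fragment known to be one faked list; else return as-is."""
    items = ITEM.findall(fragment)
    if not items:
        return fragment  # nothing recognizable; leave the markup alone
    return "<ul>" + "".join(f"<li>{t}</li>" for t in items) + "</ul>"

page = ('<img src="bullet.gif" alt="">First item<br>'
        '<img src="bullet.gif" alt="">Second item<br>')
print(listify(page))
# -> <ul><li>First item</li><li>Second item</li></ul>
```

A real filter would also need to locate the faked-list region within a full page and preserve the surrounding markup; this sketch only shows the rewrite itself.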

DD: we should adopt some sort of XML syntax as the source of
the eval techniques document; we can then use it to freely
port to other documents -- such as techniques without UI
issues, with or without filtering, with or without
repairing, etc. -- the source should be our own XML DTD, as
extensible as we wish; one question is the CAST XML doc and
the ERT -- how close are we to merging them?
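DD's single-source idea can be illustrated with a short sketch: one master XML file per technique, from which variant documents (e.g. evaluation-only, with no repair advice) are generated by dropping sections. The element names here are invented for illustration, not a proposed DTD.

```python
# Sketch of the single-XML-source idea: one master file per technique,
# with variant documents generated by dropping sections. All element
# names (<technique>, <eval>, <repair>) are invented, not an agreed DTD.
import xml.etree.ElementTree as ET

SOURCE = """<technique id="img-alt">
  <title>Images require alternative text</title>
  <eval>Flag every IMG element lacking an ALT attribute.</eval>
  <repair>Prompt the author for suitable ALT text.</repair>
</technique>"""

def view(xml_text, drop=()):
    """Serialize the technique without the listed child sections."""
    root = ET.fromstring(xml_text)
    for tag in drop:
        for el in root.findall(tag):
            root.remove(el)
    return ET.tostring(root, encoding="unicode")

# An evaluation-only rendering, with repair advice stripped out.
eval_only = view(SOURCE, drop=("repair",))
```

The same source could equally be rendered without UI discussion or without filtering notes by dropping other sections, which is the portability DD is after.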

MC: our XML doc isn't in a format that will be useful to ER
-- it is split into the HTML files that you get when you
click on the title of an error in an error report; the XML
is tied into binary code used in Bobby itself; you guys
would probably be better served if you start from scratch

DD: is the XML you use the source of the data, or generated
by the tool?

MC: essentially, the source of data

DD: if we end up with our own XML file, it would still be
beneficial to use the Bobby model, so you could take our XML
piece and use it -- add your own private info and take out
what isn't needed

MC: that's a possibility, but we would have to be involved
in the design of the XML structure so we can incorporate it
easily; is the intent of the ERT to be advisory or to be
folded into actual tools?

DD: advisory -- techniques; ways to implement different ER
functions

MC: we might be going in the right direction -- CR and I
have talked about making A-PROMPT and Bobby work in the same
way; an XML DTD would be useful then

DD: are you going to take what we come up with in the ERT
and build it into the tool?  what are the current
intersections?

MC: well, we are, to a certain extent, redeveloping document
structure in conjunction with the evolution of ERT; but the
question should be: "what intersections do we want?"

LK: right now, the document is just text; some guidelines
should have semantics in them; automatic tests could be put
into XML or RDF -- where there is no automatic test, we
can't put one in; is there a use in coming up with a formal
scheme -- expressing the algorithms in a formal way?

MC: I would have to look into that and talk with programmers
-- I've thought about incorporating algorithms into XML, but
I don't know where the advantage is in having them coded
into the XML, as we already have them coded in Java

LK: pattern matching language

MC: would have to write interpreter

HB: oh, no -- the performance of Java is bad enough already!

MC: [laughs] putting some algorithms in, such as for
ALT-text, would be ok, but when the issues get more
complicated, I would like to see an English-language
explanation of the algorithm -- let implementers implement
it in their own way
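The trade-off MC and LK are discussing -- encoding simple checks as machine-readable rules versus describing complicated ones in English -- can be illustrated with a toy rule plus the small interpreter MC says would be needed. The rule format here is invented, not an ERT proposal.

```python
# Toy illustration of a declarative test rule plus the interpreter MC
# mentions would have to be written. The rule format (an element name
# and a required attribute) is invented; a real scheme would be richer.
from html.parser import HTMLParser

RULES = [
    {"element": "img", "requires": "alt",
     "message": "IMG element has no ALT text"},
]

class RuleChecker(HTMLParser):
    def __init__(self, rules):
        super().__init__()
        self.rules = rules
        self.problems = []

    def handle_starttag(self, tag, attrs):
        present = {name for name, _ in attrs}
        for rule in self.rules:
            if tag == rule["element"] and rule["requires"] not in present:
                self.problems.append(rule["message"])

def check(html):
    checker = RuleChecker(RULES)
    checker.feed(html)
    return checker.problems

# One image is flagged; the one with ALT text passes.
print(check('<img src="a.gif"><img src="b.gif" alt="logo">'))
# -> ['IMG element has no ALT text']
```

A check this simple fits comfortably in data; the point of MC's objection is that many real checks would not, which is where the English explanation comes in.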

LK: getting close to end of hour; filtering tool as separate
document; who is interested in creating that document?

WL: this is a UA thing

GJR: not really -- it is an evaluation thing first; before
they are willing to implement a filtering mechanism or an
alternative view or conversion mechanism, developers are
always asking for empirical evidence to justify the
necessary programming -- working on filters would allow us
to say to the developers: using these users, who meet this
criterion, we tested this filter on this page using the
following algorithms/rule-base, and received the following
results -- we could then give them both the "hard" evidence
they keep asking for, in the form of data collected by user
interaction with a specific site in conjunction with
specific tools and specific filters, and anecdotal evidence
in the form of answers to follow-up questionnaires and
feedback

LK: right -- it would eventually be a UA thing, but it is an
intermediary thing; filters would do what GJR does by hand
when he reformats pages and search engine forms; the point
is that what we would be doing with filters is not part of
UA software -- it could be done by a proxy sitting between a
real user agent and a real web page -- optimally, if we can
refine the filter so that it works across platforms, we
would like to see it rolled into the UA
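The arrangement LK describes amounts to fetch, transform, serve: a filter chain that any user agent on any platform can sit behind. A minimal sketch, assuming a crude decolumnizing rule (unwrapping layout tables) as the one filter:

```python
# Minimal sketch of the proxy idea: fetch a page, run it through a
# chain of filters, and hand the result to whatever user agent asked
# for it. The decolumnize filter is a crude placeholder (it simply
# strips table markup); a real one would be far more careful.
import re
from urllib.request import urlopen

def decolumnize(html):
    # Remove table tags so multi-column layouts linearize into one flow.
    return re.sub(r"</?(table|tr|td|th)[^>]*>", " ", html,
                  flags=re.IGNORECASE)

FILTERS = [decolumnize]

def filtered_fetch(url):
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    for f in FILTERS:
        html = f(html)
    return html
```

A real proxy would also speak HTTP to the user agent (e.g. via a local server) rather than being called as a function; this sketch only shows the filtering stage that sits between the real page and the real UA.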

WL: true, too, of the supplemental browsing functions of
some AT -- decolumnization, etc.

LK: that's just a special case of filtering -- right now
there are certain user agents that do decolumnization --
Lynx, for example, and JFW, by running a script when MSIE is
running -- what I would like is to have a proxy in the
middle that runs anywhere, on any platform, for any UA; as
far as ER is concerned, we are involved when things aren't
part of the final tool, but are only a proxy or a plug-in or
a powertoy

GJR: gather empirical evidence by applying different
versions of a proxy or filter to test pages or pages known
to cause problems either for AT or for low-bandwidth slash
text-only access, gather empirical evidence by tracking
users as they move through the test site as well as
anecdotal evidence afterward

WL: should this be done by UA or by ER?

Action Item: LK and GJR
LK: ER -- it is in our charter, Section 2.1.2, item 3: "What
features are needed for 'filtering tools' used by end users
to help make sites accessible to them." -- GJR, would you
work on such a document with me?

GJR: sure -- we could throw something together pretty
quickly if we simply rehash all of the ideas we've been
discussing for the last 3 or 4 years!

LK: good; we're over time right now, but before we adjourn
I want to address one more process issue: are people agreed
that a teleconference once every 2 weeks is OK?

DD: you could move to 10am Eastern time--it's OK by me

LK: should we move to 10?  an added benefit, at least for
those east of the Mississippi, is that there would be time
to put out a morning-of-the-meeting reminder that those --
at least in the eastern US and Europe -- would probably
receive before the meeting starts

DD: I just checked, and Monday at 10am might not work -- at
10am ET on Mondays, both W3C bridges are in use

GJR: this is technically still an IG call, right?  and, yet,
we talk about WG almost exclusively -- or, at least, we have
so far -- could we perhaps have more frequent WG calls and a
monthly IG call?

HB: not sure if there is any use in breaking it up like that
-- the WG and the IG portions of ER aren't that distinct

LK: an interesting idea, though -- is every 2 weeks a
workable schedule for all present?

HB, MC, DD, WL, BM, CR, GJR: yes

LK: will put out a note on the ER-IG mailing list, asking
for feedback on the timing issue

GJR: check availability of bridges before posting potential
times -- give us a choice of specific times and dates

LK: there is also the possibility of getting an outside
line, right?

DD: yes, but it is complicated and often more trouble than
it is worth -- W3C has 2 bridges at its exclusive disposal,
plus use of the MIT bridge, which we are using today; the
MIT bridge is not as easy to reserve -- we have to go
through another party

HB: you should check the future availability of the MIT
bridge, since the academic term starts soon, and there will
probably be more non-W3C calls on this line

LK: true; OK --  DD and I will coordinate offline, and I
will post to the list asking for feedback on potential dates
and times, but for now, let's say that, until a decision is
made, we will continue to meet every other week, which means
that the next 2 meetings will be on September 6 and
September 20 -- thanks everyone for attending, and watch the
list for updates!

Leonard R. Kasday, Ph.D.
Universal Design Engineer, Institute on Disabilities/UAP, and
Adjunct Professor, Electrical Engineering
Temple University

Ritter Hall Annex, Room 423, Philadelphia, PA 19122
(215) 204-2247 (voice)
(800) 750-7428 (TTY)

Received on Monday, 30 August 1999 18:25:22 UTC