- From: Charles McCathieNevile <charles@w3.org>
- Date: Tue, 20 Jun 2000 15:45:06 -0400 (EDT)
- To: WAI AU Guidelines <w3c-wai-au@w3.org>
Thanks Jan. These will go up shortly.

Charles

On Tue, 20 Jun 2000, Jan Richards wrote:

Present: Jutta Treviranus, Jan Richards, Dick Brown, Fred, Heather Swayne, GR, CMN, Colin Birge

JT: First business is to go through the techniques to see if there are inconsistencies with the definition.
JT: Did anyone do them?
Most: No (some misunderstandings, since the minutes of the last meeting were not up).
JT: Did this. No glaring problems, but would like others to review. Largely the appendix that JR did.
ACTION ITEM (ALL): Review techniques against the prompt definition.
*CMN joins*
JT: Updates CMN. Asks CMN about access to the amendment doc space.
JT: Let's talk about conformance evaluations.
GR: Conformance evaluations are difficult for guideline 7.
JT: For the ATRC course tools study, access was more objectively tested: dedicated workstations equipped with representative assistive technologies. What do others do?
HS: MS is trying to refine its process. Results can depend on who tests and how.
CMN: A consistent process means doing the same things across products.
JT: We listed a set of tasks and tested across a set of technologies. Needed a balance between being too prescriptive and too open. When too prescriptive, it was hard to get a good idea of access. Better to focus on tasks.
HS: More scenario-based.
CMN: Did a partial review of Dreamweaver. Tried to write down how things were tested. Would like to post how this was done.
CMN, JT: Short discussion of Mac MouseKeys.
JT: Asks about the MS logo program.
HS: Methods are proprietary to outside testing companies.
CMN: We need to develop a matrix-type method. For each checkpoint there are a number of things to test for, for different kinds of tools.
JT: Do we want pass/fail or scoring?
CMN: Scoring is A, AA, AAA.
JT: Other stuff?
CMN: His conformance database tool will allow partial tests to suit individual needs. People will build their own scoring mechanisms.
HS: Scoring will add validity to "why" a product didn't pass.
*GR rejoins*
JT: ATRC used scoring such as how many steps to alt-text.
But still included comparative tables.
GR: People ask him how the guidelines will help them make judgements. Must be more than A, AA.
CMN: The database approach will allow simple A, AA as well as custom queries.
GR: It is also helpful to put in tips for users.
CMN: That is just writing help docs.
GR: It is just providing workarounds.
CMN/GR: Back and forth...
GR: The disability community wants results. Very bad to ignore workarounds.
DB: What are we arguing about?
JT: Ratings...
JR: Workarounds.
JR: Can we agree to general ratings as well as specific details?
CMN:
JT: More granularity, e.g. does something easily or with more steps.
GR: Granularity has to include how it satisfies relative checkpoints, etc. A, AA is meaningless out of context.
DB: Concerned whether the WG should be doing this at this level of granularity.
GR: As evidenced by how many have been completed.
JT: The task of the WG will not be to pile up lots of evaluations. But should we come up with a process?
GR, CMN: Agree.
JT: We need to develop objective tests. Many steps for relative priorities.
GR: My method is not yet ready. Used boilerplate text...
CMN: The mailing list should be the feedback mechanism.
JT: When will GR's work be ready?
GR: Still needs work.
JT: Do you need volunteers?
GR: Give me a week.
JT: Action item report for the techniques and evaluation database?
CMN: Not yet.
JT: Reason?
CMN: Time. Hope to work on it this week. The techniques database is a long-term project.
JT: We have a huge task ahead of us. We are making little progress. Ideas?
GR: The sense of urgency has dissipated. We need to get moving again. First we need to re-ping all present and past AU members.
JT: OK. We are re-chartering. Maybe we need new staff. Will talk to the CG group.
CMN: Spent 40 hours on Dreamweaver. It takes a long time to learn new products to the proper extent.
GR: AFB has resources for testing. Maybe we can get these resources for evaluations. These people are professional testers.
JT: Should we pursue other testers?
DB: Then the WG is still undertaking a large effort. Concerned about doing everything we talk about.
JT: Agree that our main task should be to create a process. Should we make pieces that can be funded and staffed externally?
GR: Talked to someone at AFB about blind/low-vision evaluation of the five main market tools, etc.
CMN: Balance between collecting evaluations and support, and setting up a software testing service.
GR: The same problem is holding up the WAI review process.
CMN: Was hoping the QA person would start sooner.
JT: Should I go to the CG with the idea of a separate, externally funded project?
CMN: Still concerned, but we should talk to the CG about it. It is WG work, but we have limited resources.
GR: Should use pre-existing expert resources.
CMN: Need vendor neutrality.
ACTION ITEM: JT will ask the CG what they think.
CMN: Long-range question: how does documentation apply to the accessibility of the tool itself? Does it fit in 6 or 7?
GR: Both.
JR: 7 only.

ACTION ITEMS
1. ALL: check techniques against the new prompt definition.
2. GR: put up method.
3. HS: look into the logo program.
4. JT: talk to the CG about evaluations and method.

--
Charles McCathieNevile mailto:charles@w3.org phone: +61 (0) 409 134 136
W3C Web Accessibility Initiative http://www.w3.org/WAI
Location: I-cubed, 110 Victoria Street, Carlton VIC 3053
Postal: GPO Box 2476V, Melbourne 3001, Australia
Received on Tuesday, 20 June 2000 15:45:06 UTC