- From: Catherine Laws <claws@us.ibm.com>
- Date: Thu, 12 Aug 2004 15:52:06 -0500
- To: w3c-wai-ua@w3.org
UAAG teleconference meeting minutes, August 12, 2004, 2 PM EDT.

UAAG members present: Jon Gunderson, Matt Mays, Cathy Laws, Colin Koteles
WCAG visitors:
Chris Ridpath - University of Toronto; works on the deployment process for WCAG 2.0.
Jenae Andershonis - tester for MSN.com at Microsoft; works on tests for WCAG 2.0.

jg: How can we use the IE 7 discussion forum as an opportunity to discuss UAAG? There are live chats with IE developers once a month. It is a way to draw attention to accessibility issues, and they pay attention to specific requests. We need to coordinate our participation in the live chats and in putting accessibility requests in blogs.
mm: That is just one way the IE team is evangelizing. Now they are talking about things that have been requested for a long time. They are getting hammered about standards support.
ja: This is the first I've heard of the IE blog.
jg: It has been around about a month or two. It is a way to bring issues to the IE team. Especially now that we have test suites, we can point them to the suites from the blog or forum.
Jon had everyone introduce themselves.
ja: Wendy Chisholm was hoping we could all work together on common test suites. We would like to add to what UAAG has, or build a common set of test suites we could all use.
cr: For example, check whether the web page has alt text for images. We're looking at HTML but want to extend to SVG, etc.
jg: There are lots of tools out there that will check for compliance with WCAG, but they require human intervention and interpretation.
cr: Interpretation has been a weakness; everybody has a different interpretation. We need a WAI interpretation of what needs to be checked for.
jg: Give an example of how that would work. Alt text and checks for frames are pretty straightforward.
cr: We are still missing a WAI interpretation of how to write alt text and where to put it. Start with the simple cases, move on to the complex ones.
jg: Do you envision tools discriminating between different types of images? Like a simple image versus a chart?
cr: As detailed as possible. A chart might be an example of where longdesc is needed. It is still up to the author whether a longdesc is needed.
jg: The author could decide to put the description inline; then longdesc is not needed (see the markup sketch below).
cr: Yes, if the description is not in the document then you need longdesc. These are not automated checks; they will require human intervention. Like the UAAG test suites - are all of those machine testable?
jg: All require human intervention; there is no automatic testing. Evaluators use the test suites, make configuration settings, run the test, check the expected results, and assign a rating: complete, almost complete, partial, not rated, not implemented, not applicable. You could have the same type of system, but you need to provide a view of the relevant object in the document, then use the framework to decide if it meets the requirement.
cr: For example, look at an image and see whether it is flickering or not?
jg: Similar to what I wanted students to work on: take testers through, ask questions, and teach them what accessibility is. Instead of using an image, use CSS. Push them toward the standards to use.
cr: Are images of stylized text allowed? Not sure; we need rules to make it clear for everybody.
jg: Everybody has their own ideas about the best way to do it.
cr: We need a WAI interpretation so there is no question about whether they really conform to the guidelines. I have seen the test suites, but I am not aware of tools that go with them.
jg: What URL have you used? We've gone from a static XML test suite to a database-driven test system.
cr: I like having it more open. I just saw a static HTML page.
jg: There were too many dependencies with static HTML. This way, when we add tests, do evaluations, and generate reports, everything gets updated. You don't have to know XML - you just do it through forms. It is password protected; only people committed to work in the group can add or change tests. Eventually we will be able to generate reports and test suites dynamically. Go to the UA home page and click under current test suite work.
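[For illustration, a minimal markup sketch of the alt/longdesc distinction discussed above. The file names and text are hypothetical and are not taken from any test suite.]

<!-- Simple image: a short alt attribute carries the whole meaning. -->
<img src="logo.gif" alt="W3C Web Accessibility Initiative logo">

<!-- Chart: alt gives a short label; longdesc points to a separate full
     description. Per the discussion, longdesc is only needed when the
     description is not already inline in the document. -->
<img src="sales.gif" alt="Bar chart of 2004 sales by quarter"
     longdesc="sales-description.html">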
jg: Colin is working on updating these test suites and reports, looking at what has been implemented by different browsers. Next we will add SMIL and multimedia test suites that cover accessibility features.
cr: Is this on the WAI site yet?
jg: No. We have limited resources to get this hosted on WAI.
mm: CGI and PHP are available on the WAI server, but there are too many hoops to get a database set up there.
cr: Can you run Java servlets on the WAI server? Tomcat or something?
jg/mm: Not sure.
ck: Our end goal is to do evaluations of different user agents and how they conform to UAAG. What is the goal for the WCAG test suites? It may be different.
cr: The end goal is for authors to see whether their web page complies with WCAG.
jg: I will send a URL to the list so you can see the actual forms we are using.
ck: Will evaluation be partially automatic, partially human?
cr: Mostly human.
jg: People can put in a specification. Does WCAG 2.0 have a three-tiered model? Is it still a guidelines and checkpoint system?
cr: Principles, guidelines, success criteria.
mm: Specific checkpoints for a given technology?
jg: Do you want to develop test suites for different technologies?
cr: Just HTML for now.
jg: We associate tests with each requirement. HTML for now; SMIL and SVG later. In the evaluations, for a browser like Mozilla, you go to a checkpoint, see if there are any tests for it, and if it has been rated it shows a summary of the different tests for that browser.
cr: What is your process for validating the tests you make up?
jg: We talk about the tests, but we haven't necessarily validated them. Some comments have been submitted, but not from many people yet. Earlier it was easy for user agent developers to ignore the tests and just say yes or no to whether they complied with a checkpoint. We're trying to make it more of a reporting process.
cl: When I ran the tests with HPR I made some comments about the tests, but unless user agents actively use the tests, you won't get comments.
jg: It is hard to get people to work on test suites. Is part of WCAG's effort trying to establish a process for developing test suites?
ja: Per the QA working group, test suites are more of a handbook or recommendation, not a requirement. But it is so much easier if you have them.
jg: We're hoping test suites will bring UAAG more alive.
cr: We're trying to make WCAG 2.0 more testable.
jg: If you go into the UAAG test suite, we supply a way to point into UAAG and into HTML to say which requirement we're testing. There is an area to put in the code you want to test; that code generates the source used in the test. A results area describes what should happen for you to pass the test. Another box holds a rating and a comment (a sketch of such an entry follows below). If a user agent has a partial implementation, it has not passed all the tests. We don't have automation that checks this.
cr: Tools that do the checking are beyond our scope; we just want to make sure the tests are there. Is there a particular format for displaying tests? A W3C standard, or just made up?
jg: It just seems to work - it is simple and has evolved. Other test suites I have seen were too cryptic.
cr: We want our test suites to be usable.
ja: One person doing testing on their own could use this very quickly.
jg: What would you do with a checker warning that asked if you used markup correctly?
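[For illustration, a hypothetical sketch of the kind of test entry described above. The checkpoint reference, markup, and expected result are invented for this example and are not taken from the actual UAAG test suite database.]

Requirement: a UAAG checkpoint on user access to conditional content (reference left generic here)
Test markup:
  <img src="chart.gif" alt="Sales chart" longdesc="chart-desc.html">
Expected results: the user agent provides a way for the user to reach the content of chart-desc.html (for example, through a context menu or keyboard command).
Rating: one of complete / almost complete / partial / not rated / not implemented / not applicable
Comment: evaluator's notes on configuration settings and observed behavior.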
<Cathy Laws and Colin Koteles had to leave at 2:55 PM EDT.>

Cathy Laws
IBM Accessibility Center, WW Strategic Platform Enablement
11501 Burnet Road, Bldg 904 Office 5F017, Austin, Texas 78758
Phone: (512) 838-4595, FAX: (512) 838-9367
E-mail: claws@us.ibm.com, Web: http://www.ibm.com/able
Whatever you do, work at it with all your heart.