- From: Loretta Guarino Reid <lguarino@adobe.com>
- Date: Fri, 08 Feb 2002 18:04:54 -0800
- To: w3c-wai-gl@w3.org
In attendance:
JW: Jason White
WC: Wendy Chisholm
ANW: Andi Snow-Weaver
JA: Jenae Andershonis
PB: Paul Bowman
JM: Jo Miller
LS: Lisa Seeman
LGR: Loretta Guarino Reid
MM: Matt May
Gina
CS: Cynthia Stenger
GSW: Gian Sampson-Wild

Action Items:
WC: Send the mailing list the URL for the "basic English" word list.
JM: Rework Lisa's proposal for 3.3 to move definitions into success criteria, note which conditions are conjunctions and which are disjunctions, and determine which lists are meant to be exhaustive.

Discussion:

JW: Resuming discussion of 3.2: assuming someone has decided to try to apply this to their website, how can they determine which parts of the content this checkpoint applies to?

WC: The 3rd criterion seems testable.

WC: When we say "a unique style generated by the author or the user agent", do we assume default styles exist for HTML but not for XML? Should we just say that explicitly?

JW: Should we put this issue off until the discussion of 4.4? If you have an SVG renderer, it will have default rendering styles, so it's not just HTML vs. XML.

JW: We could delete this qualification, but people might think they needed to provide style sheets for everything.

WC: The user needs to be able to find out whether a default exists. This may need to be part of the success criterion.

WC: The 2nd criterion is that each style needs to be sufficiently distinctive; Jason claims this is not testable. But color and font size are testable. So some aspects of this are testable, especially if you do human usability studies.

JW: I wasn't thinking about machine testability. I was worrying about what it means to be sufficiently distinctive for the user to grasp the structure. Do we really understand what the characteristics need to be in order to qualify as distinctive? We can tell whether different rendering attributes are used, but that doesn't tell us whether it is adequate for a person to determine the structure. The implementor needs to have a reasonably good idea without needing to run a large number of user trials. If there is research available on this, we can move it to "testable".

LS: You could put a lot of this evaluation in place today, but not all of it. With the right research, you could do this. So this is potentially testable, but someone needs to do the research to test it.

WC: If it is potentially testable, it isn't testable yet. So it needs to be categorized "not testable" for now.

JW: Preferably with someone taking an action to do the research.

CS: There is probably lots of information today about what is perceptibly different, but only to the limits of "reasonable"; 95 out of 100 people would probably agree.

LS: This is comparable to Bobby testing alt tags; Bobby can't decide whether the alt tag is correct.

CS: The presence of the alt tag is testable. In general, quality is not machine testable.

WC: I'm looking up color differences on Google, and a search yields a formula for measuring color differences; maybe we can use this. [A sketch of one such formula appears after these minutes.] The problem is knowing whether the tests were done with computer screens or real colors.

??: Could W3C conduct these experiments?

WC: If someone in the working group wants to. The resources we have are the WG members; there isn't money available to fund such research. Maybe we could partner with a university? We need to network and talk to the people who are doing this research. We don't have the expertise or resources to do the research ourselves.

WC: Jason proposed either creating another checkpoint or moving part of 3.2 into 3.3. I agree with splitting it into 3.3.
WC: Looking at what Lisa wrote, she gets into some UI issues. This moves 3.3 into designing, rather than writing, clearly and simply. Can we combine 3.1 and 3.3? There is lots of overlap; everything in 3 seems like an accordion that expands and contracts.

LS: Some things from my original proposal aren't in the current 3.3 proposal because of the overlap, but I'm not sure they are always being accounted for.

WC: My work on 3.4 overlaps with other things in 3. It's as if there is really one checkpoint - make things easier to understand - and these are the success criteria.

LS: We should go back to the drawing board, put all the success criteria down, and see if we can sort them into different checkpoints.

JM: I agree, and we started to talk about it last week with the 3.2 success criteria. But I have no suggestions. We can't look at them independently without looking at all of 3.

GSW: People may not look at all of the checkpoints if we split them into different priorities.

JW: There are several issues: 1) what categories to place the different ideas under (this is the overlap problem) and how to make it easier to understand and work with the concepts; 2) testability; 3) preconditions for applying each of the requirements; 4) defining as clearly as possible what the requirements are.

WC: I like Lisa's idea of collecting all our success criteria. We have a lot of good ones. I'm less concerned about the text of the checkpoints. Why do we have these? Are we concerned about structure or ideas? We are trying to help people chunk and flow.

JM: And for what kind of content - instructions, UI, etc.

WC: I've been taking quite a while to think about 3.4, trying to avoid another massive thread. When should we illustrate text? I've been looking at existing uses of illustrations. There are also places to avoid illustrations. I'm collecting lots of information and thinking about needs. Lots of other things get involved in this: positioning and layout are important, as are sizes, colors, flicker, internationalization, formats, etc. Lots of these issues cut across all these checkpoints.

(General agreement that what Wendy is doing is very useful.)

JW: What's the best way of opening this up without creating an enormous action item for someone? The current breakdown makes it easy to distribute the work. Let's collect lots of success criteria and see whether there is a better way of dividing things up.

WC: I still have questions about the work Lisa did. One question about one idea per paragraph: how does that apply in other languages - Hebrew, Japanese? Do notions of paragraphs and sentences apply?

JW: Every language has some notion of sentence. I don't know if they all have paragraphs in their writing system.

WC: Is there an internationalization issue here?

JW: If we place upper limits on length as guidance, they will change from one language to another. Some languages need more words to express a particular idea in a sentence.

WC: I found a list of words - "basic English". This would make it easy to test whether you are using basic language, but is there a comparable list for other languages? [A sketch of such a check appears after these minutes.]

LS: Wendy, could you send a reference to the list?

CS: I'm still uncomfortable with a general regulation on simplifying language and reading levels, and with expecting web content to be written at an easier reading level.

LS: The current statement says to use words that are easily understood as long as it doesn't change the meaning.
WC: I don't think that is testable.

LS: Maybe not machine testable.

WC: I don't think it will pass the "8 out of 10 people" agreement.

WC: As an example, the Wall Street Journal doesn't include a lot of illustrations. But that is just their style.

WC: Kynn's piece about writing says "Make assumptions about your audience. Now take your audience a couple of steps lower." As an example, Jonathan Chetwynd writes for people who can't read at all; he may include many illustrations and few words. Another example: there is an astronomy site that included 5 levels of presentation, from beginner through instructor.

CS: It is great when people do that.

WC: We are not asking everyone to do this. But if you want to, how do you tell whether you have succeeded?

CS: I'm worried that the current checkpoint will be read to require everyone to comply. I'm worried that we accommodate Jonathan's needs, but not those of a PhD.

LS: Our mandate is accessibility. We must accommodate everyone or we have failed. Kynn's idea of modules lets people select what policy to apply.

CS: I'm completely fine with describing the requirements, but I'm concerned that, the way the guidelines read now, everything is equally weighted.

??: I share your concern, and I think there is a difference between providing a version where the simplest language is used and making your only version use the simplest language. If multiple versions are a good solution, we should make that clear.

WC: We are getting into conformance issues. We need to focus on the question: if you want to do this, how do you test whether you have succeeded? What guidance can we provide people?

JW: There are serious issues in when to apply things, and how. We can avoid many of these at the moment, but they'll come back. We should list all the techniques as long as we think they are useful, testable, etc., but I wouldn't want this to become part of a regulation. We will have a discussion of guidelines vs. policy later.

JW: Getting back to 3.3, there may be internationalization issues with some of the criteria. One issue brought up last week: in trying to make some criteria testable, we may impose unnecessary constraints or make the language used less comprehensible when applied inappropriately.

LS: I will talk through the checkpoints. I left out some things because of overlap, and I tried to apply the checkpoint to itself as I wrote it. So I tried to make the language appropriate to content, put the ideas into definitions, and provide success criteria: clear document, clear paragraph, and clear words.

JM: If a web page is a document, but we are defining a clear document as one having a page map, etc., this becomes prescriptive, which is like a success criterion.

WC: The definitions are written like criteria.

JM: They read like criteria. If they are proposed as criteria, we can decide if they are testable. For instance, having a page map looks testable.

JW: This brings back a topic from last week: where is it appropriate to apply these?

??: That's going to be the bane of this checkpoint, and also 3.4.

WC: Are you (JW) saying we can't separate what to do from when to do it?

JW: I don't think they can be separated when writing success criteria. Once someone makes a statement of this kind, someone is going to think of exceptions.

LS: How about saying "A document can be made clear by providing..."?

JW: Or "can be made clearer".

JM: At this level, should we take the term "document" and break it down further?
JW: I'm not thinking of types of documents, but a particular technique becomes useful under certain circumstances; e.g., a summary becomes useful once a document gets larger or more complex than a certain limit. If we had information about when it becomes useful to apply a technique, it becomes part of the technique.

LS: With this particular topic, I'm wondering if people are thinking about your average person when they say things aren't necessary. That's not what this area is about; it is trying to reach your not-average person. In some cases, a summary might be provided by the title, or the summary might be in a meta tag. But I can go to very simple sites and feel confused.

JW: There are different aspects of writing that one can consider: titles, division into sections, logically organized structure, composition into sentences and paragraphs, choice of language. It might be best to divide along those lines and try to explain, in a fairly concise way, how these factors contribute to comprehensibility or the lack thereof. Instead of having success criteria, we can say that content that conforms with most of these satisfies the checkpoint. A certain number of these characteristics need to be present.

WC: We leave it to the author to decide which are appropriate to their content.

LS: I'm looking at the document and it needs rework.

JW: 1) Jo's item - some of the definitions look like success criteria, so we might need to rework that; 2) make clearer which techniques are alternatives and which must be carried out together.

JM: Almost all of these fall into success criteria, because the aim of the checkpoint is to produce clear documents. The definitions say how we think this should be achieved, getting into consensus on what we have decided constitutes clear writing. Most of these are measurable.

JW: Would you take an action item to rework it?

JM: If I do, something different might come out.

??: Should we discuss the success criteria themselves? They recapitulate information in the definitions.

WC: It provides an overview - a list of different ways this can be done. It is testable.

LGR: Are these the only ways?

LS: We shouldn't limit people to these ways.

JW: Rework it slightly so that what is in the definitions goes into the success criteria, noting which conditions are conjunctions and which are disjunctions, and which lists are meant to be exhaustive and which are not.

JM: Action item - to help with that rework.

??: An overlap issue - emphasizing structure through markup: 3.1 vs. 3.3.

JW: A possible way of dividing it (Wendy's suggestion?) is to divide along lines of content: document vs. UI vs. audiovisual presentation. Add comprehension-related requirements. A list of characteristics, with some way of dividing by concept or content type. We can do this once we have good success criteria for the items covered by those 3 checkpoints.

WC: Are there any experts in this area on this call?

LS: I have been working with people with learning disabilities, and have been reviewing research for this checkpoint.

WC: I have a list of people willing to review, once we come up with something.

JW: We should write up each success criterion, ignoring overlap issues, and come back and revisit this once we have a good set of success criteria.

WC: We'll deal with overlap later (I found items in Lisa's list relating to 3.5, 2.7, 2.1); those working on these checkpoints might coordinate to avoid repeated work.
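On the color-difference formula mentioned in the discussion: one published heuristic for whether two colors are perceptibly different is the color brightness / color difference test suggested in the W3C "Techniques For Accessibility Evaluation And Repair Tools" working draft. Whether that is the formula Wendy found is only an assumption; the Python sketch below merely illustrates how such a check could be automated, using that draft's suggested thresholds (a brightness difference above 125 and a color difference above 500).

    # Sketch of a machine check for "sufficiently different" colors, using the
    # brightness/difference heuristic from the W3C evaluation-and-repair
    # techniques draft. The thresholds are that draft's suggestions, not
    # values agreed by this working group.

    def brightness(rgb):
        """Perceived brightness of an (R, G, B) color, each channel 0-255."""
        r, g, b = rgb
        return (r * 299 + g * 587 + b * 114) / 1000

    def color_difference(c1, c2):
        """Sum of per-channel differences between two colors."""
        return sum(abs(a - b) for a, b in zip(c1, c2))

    def sufficiently_distinct(fg, bg):
        """True if the pair passes both the brightness and the difference test."""
        return (abs(brightness(fg) - brightness(bg)) > 125
                and color_difference(fg, bg) > 500)

    # Black on white passes; mid-gray on light gray does not.
    print(sufficiently_distinct((0, 0, 0), (255, 255, 255)))        # True
    print(sufficiently_distinct((120, 120, 120), (160, 160, 160)))  # False

As the discussion notes, this covers only the machine-measurable part; whether passing such a test actually lets users grasp the structure remains the open research question.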
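On the "basic English" word list: checking content against such a list is straightforward to automate once the list is available. The sketch below is illustrative only; the file name, the one-word-per-line format, and the tokenization are assumptions, and no particular list was agreed on in the meeting.

    # Sketch: flag words that do not appear on a "basic English" word list.
    # The file name and format are assumptions for illustration only.
    import re

    def load_word_list(path="basic_english.txt"):
        """Read a word list with one word per line."""
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def words_outside_list(text, word_list):
        """Return the words in the text that are not on the list."""
        tokens = re.findall(r"[a-zA-Z']+", text.lower())
        return sorted({t for t in tokens if t not in word_list})

    # Usage (assuming basic_english.txt exists):
    # basic = load_word_list()
    # print(words_outside_list("The committee will endeavour to facilitate access.", basic))

As raised in the discussion, a comparable list would be needed for each language, so at best this is a per-language test rather than a general one.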
Received on Friday, 8 February 2002 21:05:29 UTC