- From: Benjamin Hawkes-Lewis <bhawkeslewis@googlemail.com>
- Date: Sat, 7 May 2011 13:44:22 +0100
- To: Charles McCathieNevile <chaals@opera.com>
- Cc: "Gregory J. Rosmaita" <oedipus@hicom.net>, Laura Carlson <laura.lee.carlson@gmail.com>, HTML Accessibility Task Force <public-html-a11y@w3.org>
On Sat, May 7, 2011 at 12:18 PM, Charles McCathieNevile <chaals@opera.com> wrote:
>> The hidden metadata objection simply says that encouraging authors to hide
>> such text alternatives makes text alternatives more likely to be
>> poor quality.
>
> I accept that assertion as true. I also accept, as the chairs found, that it
> isn't very important in determining whether it should be possible to use
> hidden metadata...

I disagree with this in general, but accept that there might be *more*
important factors in any specific case such as @longdesc.

> But the web is used a lot by organisations and individuals who do put
> a high value on their content, manage it with a significant
> investment, and for whom the "dark metadata" argument is about as
> relevant as saving bottle-tops to recycle the metal as a source of
> income is to people like Tantek and Lachlan.

Heavily maintained visible data beats heavily maintained hidden metadata.

I think situations where the difference in error rate between the two is
small enough to be irrelevant are rarer than you do, and that those
situations are likely to be cases where people would be open to making the
data visible.

> It is also a well-understood principle of usability and accessibility that
> adding more content to a page can easily *decrease* its accessibility for a
> large number of users.

There's a huge difference between:

1. Putting lots of information on the page.

2. Presenting the same information in multiple forms, or at least
   presenting visible links to the same information in other forms.

I find it hard to imagine the second case decreasing the accessibility or
usability of a page that does not already have too much information on it.
Does anyone have any case studies they think demonstrate this?

> User agents can (trivially easily - it takes much less time than has been
> spent on this current thread) take the most common form of bad data - a
> description instead of a URL in the attribute value, and make it available
> to the user.

Based on http://blog.whatwg.org/the-longdesc-lottery, I don't think that's
an especially common form of bad data.

I also dispute that what you suggest is trivially easy (a rough sketch of
why appears further below).

First, how do user agents determine that a longdesc value is text rather
than a URL? Bear in mind:

http://dev.w3.org/html5/spec/urls.html

If they only require URLs to be parseable, then:

    longdesc="Sales grew by 500 percent"

won't get caught as a description. If they require URLs to be valid, then:

    longdesc="http://example.com/longdescs/sales october.html"

won't get caught as a URL.

Second, once they've determined that the value is not a URL, do they try to
exclude cases like the following?

    alt="Sales grew by 500 percent" longdesc="Sales grew by 500 percent"

    alt="Sales grew by 500 percent"
    longdesc="htp://example.com/longdesc-sales-october.html"

Third, how might they make it available to the user?

If we can come up with a good behavior here, we could include it in the
proposed spec text.

>>> i, for one, think there is a crucial distinction between "hidden
>>> metadata" and "discoverable metadata"
>>
>> What's the distinction?
>
> Actually, hidden is not a binary state, it depends on usage. This subtlety
> seems to have been lost to the "anti-longdescers", and is the principle
> which Gregory is trying to elucidate.

Heh. My feeling about the discussion is the inverse:

* Visibility and hiddenness are binary states.

* Discoverability is a sliding scale, with visibility at the high end.

* Discoverable data is a superset that includes visible data and hidden
  metadata.
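To make the "not trivially easy" point concrete, here is a rough sketch of
the kind of heuristic a user agent might try. It is purely illustrative:
classifyLongdesc and its return labels are names I've made up, the signals
and their combination are my own assumptions rather than anything any
browser actually ships, and I'm using a WHATWG-style URL constructor only as
shorthand for "whatever parser the UA already has".

    // Purely illustrative: a crude classifier for longdesc values.
    // classifyLongdesc and its labels are invented for this sketch;
    // no user agent is known to do exactly this.
    function classifyLongdesc(
      value: string,
      baseUrl: string
    ): "url" | "description" | "unknown" {
      const trimmed = value.trim();
      if (trimmed === "") return "unknown";

      // Prose usually contains spaces, but so do sloppily typed URLs
      // ("sales october.html"), so a space alone is not decisive.
      const hasSpace = /\s/.test(trimmed);

      // Does it start with something scheme-like ("http:", "htp:", ...)?
      const looksAbsolute = /^[a-z][a-z0-9+.-]*:/i.test(trimmed);

      // A lenient parser will resolve plain prose as a relative reference
      // against the base URL, so "it parses" is a weak signal on its own.
      let parses = false;
      try {
        new URL(trimmed, baseUrl);
        parses = true;
      } catch {
        parses = false;
      }

      if (looksAbsolute && parses && !hasSpace) return "url";
      if (!looksAbsolute && hasSpace) return "description";

      // Everything else - relative paths, short phrases, URLs containing
      // spaces - stays ambiguous, which is part of the point.
      return "unknown";
    }

Even with something along these lines, most of the interesting bad data
lands in the "unknown" bucket, so the third question - what the user agent
should actually present to the user - is still the hard one.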
> I will try to propose a new text for the section tomorrow which clarifies
> these arguments.

Looking forward to it. :)

--
Benjamin Hawkes-Lewis
Received on Saturday, 7 May 2011 12:44:51 UTC