Re: [css3-speech] tables and speak-header

On Jun 15, 2011, at 6:38 AM, Daniel Weck <daniel.weck@gmail.com> wrote:

> 
> Also see this short discussion thread (December 2010):
> 
> http://lists.w3.org/Archives/Public/www-style/2010Dec/0235.html
> http://lists.w3.org/Archives/Public/www-style/2010Dec/0240.html

Yes, sorry, I missed that.

> The problem with the 'speak-header' property (as defined by the CSS 2.1 Aural Stylesheets Appendix) is that it only _partially_ addresses a broader *usability* issue. User-agents that support aural rendering of (potentially) complex tables normally ensure that users can navigate within the non-linear structured playback stream (beyond simple play/pause), and let the user configure the navigation controls (row-first, column-first, headers-first, etc.) as well as the verbosity of the aural feedback. Such feedback may consist of audio icons, as well as speech synthesis used to render cell metadata (column/row header text, indices, vertical/horizontal cell span, etc.).
> 
> This flexibility yields a number of possible combinations, i.e. different ways a user may wish to navigate data, based on personal ability, preferences, etc. In fact, the same person may start reading a document with full verbosity, and end up navigating the document with fewer structural cues (either because that person quickly trains as he/she reads, or because the low complexity of the encountered table data doesn't justify the use of high verbosity, for example).
> 
> So, to a great extent, the 'speak-header' property is the tip of an iceberg of use-cases that authors should not really be concerned with. Content authors should primarily ensure that the markup data is well-structured and semantically rich, and may choose to insert supplementary audio cues (pre-recorded icons or generated TTS) via their speech-specific styles. I don't think that authors should dictate the user experience in the case of tables (we recently came to the same conclusion with regard to announcing the nesting depth of list items).
> 
> So, this issue is effectively better solved at the user-agent level, and this is why I decided not to object to the historical decision not to include the 'speak-header' property in the CSS3 Speech Module.
> 
> Thoughts welcome :)
> (by the way, is your aural CSS implementation available publicly?)
> 

The recent list discussion prompted my email, since logically, if you're going to say something about lists, even at the level of 'the UA SHOULD', is it consistent to be completely silent on tables?

That said, I agree that the majority of the author's responsibility falls on using the underlying markup language's semantic capabilities to produce a well-structured table. How the UA renders tables is for the UA to define in conjunction with user preferences. The role of CSS should be to allow the author to give the UA hints about presentation that it could not otherwise determine by inspecting the markup. Simple stuff such as suppressing a <caption> is obviously well covered by selectors and CSS3 Speech. Hence, I suppose, the beautifully constructed table example in the 2.1 spec (http://www.w3.org/TR/CSS21/aural.html#aural-tables), which illustrates a case where selectors alone are not enough: speak-header would be required if the AUTHOR is to control how often the headers are repeated. But your point is that most of these kinds of presentation choices should really be up to the UA's defaults and the user's preferences, NOT the author.
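
To make that concrete, here is the kind of thing I mean, written with the property names from the 2.1 aural appendix (the selectors and the exact spelling of the values are purely illustrative, not a claim about what CSS3 Speech should end up calling them):

  /* silencing a caption: ordinary selectors plus an aural property suffice */
  caption { speak: none; }

  /* controlling header repetition: this needs a dedicated property,
     which is what 2.1's 'speak-header' provided */
  th, td { speak-header: once; }  /* speak headers only when they change */
  /* ...or 'speak-header: always' to repeat the headers before every cell */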

The remaining question would be: what about user style sheets? Clearly a UA could build custom UI for all kinds of preferences and settings, or it could allow a user stylesheet as a way to override the defaults. But your use case above about adjusting the verbosity in the _middle of reading a document_ reveals what a poor solution user style sheets would be.
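
In other words, the most a user stylesheet could offer is a static, document-wide setting, something along these lines (again purely illustrative, reusing 2.1's 'speak-header' for the sake of argument), whereas a listener really wants a knob they can turn on the fly:

  /* hypothetical user stylesheet: maximum header verbosity, everywhere */
  th, td { speak-header: always !important; }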

In short, I'm convinced that CSS3 should not try to describe this.

As for my 'aural CSS' implementation, that would be a very generous description. What I built at the time was a very basic web browser that rendered HTML into speech and allowed keyboard navigation via a cursor. Think Lynx, except nowhere near as complete. Towards the end of the project I had a week or so to spare, so I layered on some basic support for CSS aural properties, but it was mostly an afterthought.

Coupled with the fact that it required Mac OS 9 (!) and Java 1.1 to run, I don't think it would even be worthy of curiosity at this point. Things have moved on!

Thanks,

AndyT

Received on Thursday, 16 June 2011 03:41:18 UTC