Device independence and speech/audio (was Re: XSL-FO's )

On 9/5/02 3:44 PM, "Ian Tindale" <ian_tindale@yahoo.co.uk> wrote:

> 
> 
>> -----Original Message-----
>> From: www-style-request@w3.org [mailto:www-style-request@w3.org] On Behalf
>> Of Håkon Wium Lie
>> Sent: 05 September 2002 22:59
>> 
>>> If you need to know, you're looking in the wrong place - you've
>>> gone too far, turn back. Return to the source.
>> 
>> I'm a speech browser on the web. I've been sent an XSL-FO "document".
>> How do I return to the source?
> 
> Oops, sorry, you've received the wrong information. Can't think how that might
> have happened, unless it has something to do with the extremely illogical
> carry-over of aural properties into XSL-FO from CSS2, which was pretty bloody
> daft if you ask me. :)

Quite daft, given that neither spec went through a proper CR period.  I still
consider both of those to be in an extended (indefinite?) CR period.  The
CSS WG is taking steps to "solidify" CSS2 with CSS2.1[1], which _will_ go
through a proper CR period. (Note that even today most W3C specs that reach
PR/REC never go through a proper CR period per the intent of CR - sad but
true.)

> I'd have thought it would be more logical to sort out what kind of UA you are
> - HTML browser, XML 'browser', television, WAP device, speaky thing, Braille
> terminal, synthesizer, teapot etc, and send you precisely the kind of stuff
> you'll be happy with.

This is a huge misconception that has unfortunately propagated quite widely.

There are some very simple scenarios that demonstrate the flaws in this kind
of thinking, e.g.


1. I buy something over the web on my PDA. With the resultant receipt
"page", I point my PDA at an IR/Bluetooth printer and print the receipt.

Consequences: 
 a. the content sent cannot be tailored specifically for a PDA, since it
must also be printable (see the style sheet sketch below).
 b. the printer cannot be expected to re-retrieve the content:
    1) the printer may not have access to the network or the site;
    2) receipt pages are typically not re-requestable.
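
For example, a single style sheet can carry rules for both the PDA screen
and the printer, so the same receipt markup serves both devices. A minimal
sketch using the CSS2 'handheld' and 'print' media types (the class names
are purely illustrative):

  @media handheld {
    /* compact layout for the PDA screen */
    body { font-size: small; margin: 0; }
    .receipt-total { font-weight: bold; }
  }
  @media print {
    /* paper-friendly layout for the printer */
    body { font-size: 10pt; margin: 2cm; }
    .site-navigation { display: none; }  /* chrome that makes no sense on paper */
  }

The point is that the adaptation happens where the content is rendered;
neither device needs to go back to the server for a device-specific version.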


2. I download some web pages to my multi-modal laptop and view them briefly.
Later, on my drive home, I listen to the web pages using the speech
synthesis capabilities of my laptop, which is now connected to my car stereo.


Conclusion: Even in a purely single-user scenario, content will jump from
device to device without any further network interaction.  The "server"
cannot predict the wide variety of devices that the user will move their
content to, or the wide variety of media that the user will view their
content on.

Related: content providers are much more likely to provide different
versions of their content for different _natural_ languages (i18n) than
different versions for different devices.  Given limited budgets for
producing n versions of content, and the fact that it is possible to write
device-independent content but not natural-language-independent content
(with the possible exception of MathML), this makes sense.

These and other topics were well discussed at the W3C Device Independence
Workshop this past March[2]. (It was later "renamed" the "W3C Delivery
Context Workshop", but the true name can be seen in the URL, which ends in
"DIWS".)

Most recently, Media Queries[3] help address some of the challenges
illustrated by these scenarios by allowing authors to write adaptable style
sheets that provide different styling for different media, devices, etc. on
the client side, rather than depending on server/proxy magic, which is
doomed to fail.
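
A rough sketch of what that looks like (the widths and selectors are
illustrative, not normative):

  /* base rules apply on every device */
  body { font-family: serif; }

  /* narrow screens, e.g. a PDA */
  @media screen and (max-width: 320px) {
    body { font-size: small; }
    img.decorative { display: none; }
  }

  /* conventional desktop screens */
  @media screen and (min-width: 800px) {
    body { font-size: medium; }
  }

One document, one style sheet; each user agent simply applies the rules
whose media query it matches.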

> Aural properties to my mind belong in a stylesheet model
> of their own, rather than being tucked away in the corner of a visual spec,
> where they've more chance of being ignored than used. Thus, if you're a speech
> browser, you'll be sent a differently transformed set of objects, and
> hopefully a different 'style' sheet also. Neither of which would be applicable
> to a visual device, but that eventuality would never happen would it.
> 
> Rather than letting accessibility in a little bit, like giving a concession
> 'oh, here, have this dusty corner of the style sheet spec', why not have
> entire style sheet modes for different sensory environments, and deliver
> appropriately?

I completely agree, as does the CSS working group, and as such we have moved
the aural style sheet properties to an informative chapter of CSS2.1 (since
ACSS was never interoperably implemented anyway), and are instead developing
proper speech and audio support in their own CSS3 modules: CSS3 Speech and
CSS3 Audio.  The CSS road-map[4] should be updated soon to reflect this.
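
For context, the rules in question look roughly like this - a minimal
sketch using the CSS2 aural properties (selectors and values are
illustrative, and assume a UA that actually implements them):

  @media aural {
    h1, h2 { voice-family: male; pause-before: 1s; }
    h1     { speech-rate: slow; }
    em     { pitch: high; }
    .fine-print { speak: none; }  /* skip boilerplate when read aloud */
  }

Moving this material into its own modules gives it room to be specified
(and implemented) properly, rather than being ignored in the corner of a
visual spec.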

Thanks,

Tantek

[1] http://w3.org/TR/CSS21

[2] http://www.w3.org/2002/02/DIWS/

[3] http://www.w3.org/TR/css3-mediaqueries/

[4] http://www.w3.org/Style/CSS/current-work.html#table

Received on Thursday, 5 September 2002 20:55:58 UTC