W3C home > Mailing lists > Public > www-style@w3.org > March 1996

Multimodal Style Sheets

From: Raman T. V. <raman@mv.us.adobe.com>
Date: Mon, 4 Mar 1996 15:50:19 -0800
Message-Id: <199603042350.PAA03529@labrador.mv.us.adobe.com>
To: seibert@hep.physics.mcgill.ca
Cc: www-style@w3.org, raman@mv.us.adobe.com

Hi Dave,

An *excellent* spec!

I think having a Multimodal Style Sheet specification that gets specialized
by unimodal stylesheets only where absolutely required is a good thing for
all.
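The specialization idea above can be sketched as a simple merge, assuming a stylesheet is modeled as a mapping from tags to properties. This is purely my own illustration: the tag, property names, and `specialize` function are hypothetical and not taken from Dave's spec.

```python
# Hypothetical sketch: a multimodal sheet supplies modality-neutral
# defaults; a unimodal (here, aural) sheet overrides only where required.
# All names are illustrative, not from the spec.

multimodal = {"em": {"emphasis": "strong"}}   # modality-neutral defaults
aural = {"em": {"pitch": "high"}}             # unimodal additions/overrides

def specialize(base, override):
    """Merge a unimodal sheet over a multimodal one, tag by tag."""
    merged = {}
    for tag in base.keys() | override.keys():
        props = dict(base.get(tag, {}))
        props.update(override.get(tag, {}))
        merged[tag] = props
    return merged

style = specialize(multimodal, aural)
# style["em"] now carries both the neutral and the aural properties.
```

The point of the sketch is that the unimodal sheet stays small: it never restates what the multimodal sheet already says.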

I also agree with 99% of Dave's spec.
(Though I might want more than five attributes in some cases)

Only caveat: from personal experience implementing AsTeR and powerful ways
of rendering rich information aurally, especially mathematics,
I'm still a bit sceptical about whether we can in fact avoid escaping into
unimodal styles.
The primary difficulty with audio is that once you start dealing with more
structured information, e.g. mathematics, the rendering order in the aural
domain is no longer defined by the left-to-right, top-to-bottom ordering of
the visual presentation.
But this problem may be better addressed in a high-quality audio renderer if
the result is a multimodal stylesheet that works for both audio and visual
presentations.
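A fraction makes the rendering-order problem concrete. The sketch below is my own illustration, assuming hypothetical renderer functions; the spoken cues are not AsTeR's actual output.

```python
# Hypothetical sketch: the same structured expression, linearized for
# two modalities. Function names and cue words are illustrative.

def render_visual(numerator, denominator):
    # A linear visual fallback simply juxtaposes the parts.
    return f"{numerator}/{denominator}"

def render_aural(numerator, denominator):
    # Audio must speak the grouping explicitly, since there is no
    # two-dimensional layout to carry it.
    return f"fraction {numerator} over {denominator} end fraction"
```

For the fraction (x + 1)/2, `render_visual("x + 1", "2")` yields the ambiguous "x + 1/2", while `render_aural` brackets the numerator with explicit cues, changing both the order of information and the words surrounding it.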

But I'm fervently hoping that my fear is baseless --a world where multimodal
stylesheets reign, and where these multimodal stylesheets are in fact
*multimodal* rather than a primarily visual mechanism onto which audio has
been tacked on as a poor cousin, would be *wonderful*.

We would still have to build in enough machinery to prevent authors from just
using the multimodal stylesheet as a purely visual stylesheet.
Good markup where the tags are semantically meaningful is a great thing for
producing good aural presentations.
Hence, I'm a big fan of markup languages --but one reason you see garbage
HTML on the net is that people are typically lazy about using the
semantically correct tag and rely mainly on the visual appearance.
A multimodal stylesheet mechanism that is modality-neutral runs the same
risk: we may end up with authors writing so-called multimodal stylesheets
that are in fact very specific to a given modality and rendering engine.
(This is not a new problem --generality often results in pleasing no one.)

Given a scenario where multimodal stylesheets are in fact just visual
stylesheets, I might take the retrograde step of not relying on the
supposedly multimodal stylesheet, but instead depending on a unimodal one.
But if authors then believe that they have done their bit for multimodality
by providing a multimodal stylesheet, a multimodal rendering engine will
have to guess and infer things from that stylesheet, which is as bad as
guessing and inferring from a specific visual layout as opposed to a markup
language.


The reason why I like Dave's initial spec is that it in fact does a good job
of treating audio as a first-class citizen alongside the visual modality.



--Raman 

-- 



Best Regards,
____________________________________________________________________________
--raman

      Adobe Systems                 Tel: 1 (415) 962 3945   (B-1 115)
      Advanced Technology Group     Fax: 1 (415) 962 6063 
      1585 Charleston Road          Email: raman@adobe.com 
      Mountain View, CA 94039 -7900  raman@cs.cornell.edu
      http://www-atg/People/Raman.html (Internal To Adobe)
      http://www.cs.cornell.edu/Info/People/raman/raman.html  (Cornell)

Disclaimer: The opinions expressed are my own and in no way should be taken
            as representative of my employer, Adobe Systems Inc.
____________________________________________________________________________
Received on Monday, 4 March 1996 18:50:23 GMT
