
xslt was: SVGAccessibilityWG: has-been-clicked or a:visited

From: Jonathan Chetwynd <j.chetwynd@btinternet.com>
Date: Thu, 11 Nov 2004 09:46:54 +0000
Message-Id: <9EB378C6-33C6-11D9-B7C9-000A95C7D298@btinternet.com>
Cc: r.muetzelfeldt@ed.ac.uk, "SVG (www) list" <www-svg@w3.org>
To: Philippe Lhoste <PhiLho@GMX.net>

Regarding arbitrary XSLT transformations.

One cannot expect too much from this at this time: the theory is fine, 
but, like Babelfish, the results might not please everyone.

For example, consider a graphical user interface. Currently there isn't 
a standard way to describe one in SVG, so if one limits oneself to <g> 
groups it will be difficult to analyse and render alternately. However, 
using a schema such as http://www.peepo.co.uk/temp/gui-schema# as used 
here: http://www.peepo.co.uk/launch/index.svg might make the prospect 
of such an accessible transformation marginally more possible.

It is important to bear in mind that the user should ultimately have 
control over the xslt or sXBL files and this again requires excellent 
authoring (or search) tools.


Jonathan Chetwynd
http://www.peepo.co.uk     "It's easy to use"
On 11 Nov 2004, at 06:38, Philippe Lhoste wrote:

r.muetzelfeldt@ed.ac.uk wrote:
> Excuse me for writing to you offline, but I've not really been 
> following the accessibility thread, and don't want to make an idiot of 
> myself by making a well-worn contribution.

Well, I feel that what you say here hasn't been said for quite a 
while, and it is of general interest, so I take the liberty of putting 
it back on the list. I hope you won't mind.

> But something you said in your recent posting makes me think that the 
> accessibility issue is orthogonal to that of the design of the SVG 
> spec itself:
>> Ultimately, an accessible viewer would read the SVG code... That's the
>> advantage of SVG over bitmap images, they have semantics.
> Since SVG is 'just' XML, isn't all that is needed for an 
> "accessible viewer" (presumably you mean a viewer that supports 
> accessibility) to support arbitrary XSLT transforms - published 
> anywhere on the web, held in the viewer, or held locally - which can 
> then support whatever form of accessibility is required, according to 
> the needs of the user?  This could of course include voice etc.   
> There is then no need to think about accessibility at all in the SVG 
> spec - and the  designers of the spec don't have to engage in an 
> (ultimately hopeless) task of anticipating every form of disability.  
> Or am I being particularly naïve?

It is an interesting question.

Basically, I feel that, indeed, XSLT (or any other technique to 
transform XML into something else) is an interesting accessibility 
tool, as you said. It would allow transforming SVG into HTML or even 
plain text, ready to be read aloud, for example.
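To make the idea concrete, here is a minimal sketch of that kind of transformation. It is not XSLT itself (Python's standard library has no XSLT processor), but it performs the equivalent walk with ElementTree: flattening an SVG document's <title> and <desc> elements into indented plain text ready to be read aloud. The sample SVG and the output format are invented for illustration.

```python
# Sketch: flatten an SVG document's <title>/<desc> metadata into plain
# text, one indented line per titled element.  The sample SVG below is
# hypothetical; a real XSLT stylesheet could produce the same output.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

SAMPLE = """\
<svg xmlns="http://www.w3.org/2000/svg">
  <title>Site map</title>
  <g>
    <title>Home</title>
    <desc>Link to the home page</desc>
    <rect x="10" y="10" width="80" height="30"/>
    <text x="20" y="30">Home</text>
  </g>
</svg>"""

def describe(element, depth=0):
    """Yield one indented text line for each element carrying a <title>."""
    title = element.find(f"{{{SVG_NS}}}title")
    desc = element.find(f"{{{SVG_NS}}}desc")
    if title is not None and title.text:
        line = "  " * depth + title.text
        if desc is not None and desc.text:
            line += " -- " + desc.text
        yield line
    for child in element:
        yield from describe(child, depth + 1)

if __name__ == "__main__":
    root = ET.fromstring(SAMPLE)
    print("\n".join(describe(root)))
```

A screen reader (or a text-to-speech pipeline) could then read the resulting lines directly; the point is only that the semantics survive the transformation, which is exactly what a bitmap cannot offer.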

Indeed, XSLT is quite complex to master, so it is harder to use than, 
say, user stylesheets. But I suppose that, as you suggest, authors can 
make generic XSLT scripts that users can apply as needed.

Where I don't follow you is the idea that, with such a tool, the SVG 
spec could drop all accessibility concerns.

Indeed, the spec can't address all disabilities, and there are tools to 
handle them. But it can ease the task of those tools, and in any case 
some quite common disabilities, which have already been addressed for 
HTML (for example), can be addressed in SVG.

Access keys, hints/descs/titles, and visual highlighting, to name but a 
few, are easy to implement, and are even useful to "regular" users, as 
shortcuts or navigation help, for example.

Note that metadata is useful only if authors take care to provide 
it... It is a bit like code comments: too many coders, even with the 
goal of sharing code as open source, don't care about commenting (or 
even indenting!), sometimes with the poor excuse that the code is 
self-explanatory.

So if coding SVG with accessibility in mind (or even readability, but 
metadata is currently ignored by viewers, and you have to take an extra 
step to make it available to users) is a concern, authors must think to 
provide additional clues that add semantics to groups of shapes.

For example, a rectangle and a text could be grouped to indicate a 
label; a line and perhaps a nearby text could be grouped and described 
to indicate that it joins two given labels with a given kind of 
relationship.

Thus, what could be perceived as a bunch of disjointed graphical parts 
becomes a significant whole that can be processed by an XSLT script, 
for example, and read aloud or described otherwise.
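A hypothetical sketch of such markup follows. The <title> and <desc> elements are standard SVG; the `role` attribute and its values, and the identifiers, are invented here purely as illustrative metadata (they are not part of the SVG 1.1 vocabulary), standing in for whatever schema the author chooses:

```xml
<svg xmlns="http://www.w3.org/2000/svg">
  <!-- A labelled box: rectangle + text grouped as one semantic unit -->
  <g role="label" id="tank">
    <title>Tank</title>
    <rect x="10" y="10" width="80" height="30"/>
    <text x="20" y="30">Tank</text>
  </g>
  <g role="label" id="pump">
    <title>Pump</title>
    <rect x="150" y="10" width="80" height="30"/>
    <text x="160" y="30">Pump</text>
  </g>
  <!-- A connector: the <desc> says what the line means, not what it looks like -->
  <g role="connector">
    <desc>Flow from Tank to Pump</desc>
    <line x1="90" y1="25" x2="150" y2="25"/>
  </g>
</svg>
```

An XSLT script processing this document can ignore the coordinates entirely and report only the labels and the described relation between them.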

Well, being a visual guy, I would have little use for such a 
description, but I guess some people are able to mentally build the 
model, a bit like a chess player visualizing the chessboard when 
reading a game record.

Indeed, this part is a bit annoying to do, even more so when one thinks 
that the drawing is self-explanatory... Note that some authoring tools 
could enforce such a policy, either by asking the user for each element 
drawn (with the risk of getting "***" as the answer), or by filling in 
such data automatically, e.g. if the tool is aware that a given line is 
a connector between two elements (labels, integrated circuits, etc.).

> I actually face a similar issue, but, if you like, at one level up.   
> I'm designing a visual modelling environment using SVG (the user draws 
> a diagram to define the structure of a dynamic simulation model).  
> Different users want different views of the model.  Rather than trying 
> to anticipate these, I'm simply providing a mechanism for specifying 
> any arbitrary XSLT transform: some can produce HTML, some a different 
> form of diagram, some VoiceXML.  Thus, the whole community can address 
> the variety of requirements, rather than me by myself.

Interesting. If you have a URL to share, I would like to take a look.

Philippe Lhoste
--  (near) Paris -- France
--  Professional programmer and amateur artist
--  http://Phi.Lho.free.fr
--  --  --  --  --  --  --  --  --  --  --  --
Received on Thursday, 11 November 2004 09:47:28 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 8 March 2017 09:47:01 UTC