Re: Keyboard Navigation For Document Exploration In SVG 1.2

Yup, this is interesting, although I am more inclined to discuss this on the
phone than by email.
My 2 cents:


There is arguably no such thing as a person having true knowledge. All we ever
have is models for knowledge.  Whether that knowledge is how we perceive
things in sight (interpretation of radiation), sound (sound waves), or touch,
or whether we count in binary, decimal, etc. - these are all just models.

 Let me say that a good model is one that can easily, and via an automated
process, transfer its knowledge into a different model.  Then it is
accessible knowledge. (And that, to me, is good :)

SVG, with natural language snippets, is a knowledge model. So are bitmaps. But
as you can't easily derive the knowledge in a bitmap without eyes and the
right array of cognitive processes behind them, I can't call it a good model
(for the scope of this email).

Semantics in markup, and even more so RDF, are designed (if we do our job
right) to capture the model as a good model - so that it can be interpreted
and the content adapted as the model changes, which is what you need to do
when changing for different modes of use by different users.
That, to me, makes a model more useful for the aims of accessibility.

In terms of getting people to use the semantics, there are two ways. One is
to make the language require the semantics. Take, for example, an XForm. The
way to semantically encode the knowledge behind an XForm is basically to
create the XForm in the first place. The markup is designed to expose the
knowledge. (Because TV was part of that process from the ground up?)
In these terms you could say PF's role is to make the model good in as many
protocols as possible?
(What I try to do is extract knowledge in other markup and encode it in
RDF - from that form I can do anything with it.)
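
As a rough sketch of that kind of automated extraction (TypeScript against a
DOM; the Dublin Core predicates are just placeholders, not what SWAP actually
uses):

  // A sketch, not SWAP itself: walk an SVG document and re-encode the
  // knowledge carried by <title> and <desc> as simple RDF-style triples.
  interface Triple { subject: string; predicate: string; object: string; }

  const DC = "http://purl.org/dc/elements/1.1/";
  const SVG_NS = "http://www.w3.org/2000/svg";

  function extractTriples(doc: Document): Triple[] {
    const triples: Triple[] = [];
    const all = doc.getElementsByTagNameNS(SVG_NS, "*");
    for (let i = 0; i < all.length; i++) {
      const el = all[i];
      if (!el.id) continue;                    // only elements we can name
      for (const child of Array.from(el.children)) {
        if (child.localName === "title") {
          triples.push({ subject: "#" + el.id, predicate: DC + "title",
                         object: child.textContent ?? "" });
        } else if (child.localName === "desc") {
          triples.push({ subject: "#" + el.id, predicate: DC + "description",
                         object: child.textContent ?? "" });
        }
      }
    }
    return triples;
  }

Once the knowledge is in a neutral form like that, it can be re-encoded for
whatever mode of use is needed - which is the "good model" test above.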

The second way to go is to reward the author for good use of semantics -
with device independence, ease of translation, all the things that give an
ROI (return on investment) and a win for accessibility at the same time.

Is it too late to run for president?

Lisa


----- Original Message ----- 
From: "Will Pearson" <will-pearson@tiscali.co.uk>
To: "Lisa Seeman" <lisa@ubaccess.com>
Cc: "david poehlman" <david.poehlman@handsontechnologeyes.com>;
<wai-xtech@w3.org>
Sent: Sunday, November 28, 2004 4:41 PM
Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2


>
> Hi Lisa;
>
> I pretty much agree with what you have to say, as after all it's the
> semantic meaning we seek to convey when communicating, and the words and
> other symbols we use are just mechanisms to physically encode that meaning.
> This leads on to a rather academic point, which is whether we can ever
> separate the semantics of a communication from its physical form.  As
> semantics themselves have no physical representation and rely on being
> encoded into words, pictures, etc. to be communicated, can we ever convey
> pure semantics through a communications channel?  I suspect not, as from a
> systems architecture point of view, you still need to hold that semantic
> meaning in some form of variable, be it an integer, character, string
> array, or whatever.  You can translate from one form to another provided
> you know the initial encoding scheme in order to extract the semantics from
> the form used to store it, but I don't know if you could ever store pure
> semantics.  It's an interesting question, especially as trying to perform
> semantic conversions from one form to another is something I advocate as an
> HCI researcher, and I've even got a book chapter on the subject coming out
> next year *smile*.
>
> Another interesting question is who or what should be responsible for
> extracting the semantics of a communication.  If you want accurate
> communication without the need for the receiver of the communication to
> learn anything, then unarguably it's the author.  However, do authors
> currently have the motivation to provide accurate semantics for their
> documents?  There's no single answer for this, as it depends on the
> author's situation, but in quite a lot of cases I would imagine they don't,
> hence the lack of meaningful alt attributes found in current HTML or
> HTML + CSS implementations.  Automatic conversion tools, of the sort I'm
> trying to do for SVG, are fine, if they exist *smile*.  Finally, there's
> the receiver of the communication, who can learn an ontology in order to
> extract the semantics from their encoded physical form.  I think all three
> have a place, and I'm not arguing against any, nor am I trying to promote
> one above the others.  However, I believe that, due to circumstance,
> different extractors are more appropriate than others in some situations.
>
> Anyway, it's a useful discussion, and one that will help to enhance global
> knowledge, which in turn can go into making a better WWW for all.
>
> Will
> ----- Original Message ----- 
> From: "Lisa Seeman" <lisa@ubaccess.com>
> To: "Will Pearson" <will-pearson@tiscali.co.uk>; "david poehlman"
> <david.poehlman@handsontechnologeyes.com>; <wai-xtech@w3.org>
> Cc: <oedipus@hicom.net>
> Sent: Wednesday, November 24, 2004 1:13 PM
> Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
>
>
> >
> > > My main point is, do we need to include the semantics as actual
> > > mark-up?  For example, do we need to associate 'title' and
> > > 'description' with everything?  To take an HTML example: you can tell
> > > that a heading is a heading due to the use of <H1> or another heading
> > > tag.  However, you could use font attributes or CSS to achieve the same
> > > visual effect, and still visually denote it as a heading.  So, people
> > > can extract meaning based on attributes such as size, position, color,
> > > etc., and not on any mark-up.  The same goes for images, where people
> > > extract meaning based on image attributes.
> > I think that would be a WCAG violation. You are meant to use header tags
> > to represent headers.
> >
> > But (I think)  you are saying it should not be a violation.
> >
> > Some systems can analyze fonts etc. but they are trying to put back what
> > should have been there in the first place.
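> >
> > As a rough sketch of that difference (TypeScript against a browser DOM;
> > the 20px threshold is invented), the semantic query is trivial while the
> > presentational guess is only ever a heuristic:
> >
> >   function headingsFromMarkup(doc: Document): Element[] {
> >     // With semantic markup, "this is a heading" falls out of one query.
> >     return Array.from(doc.querySelectorAll("h1, h2, h3, h4, h5, h6"));
> >   }
> >
> >   function headingsGuessedFromStyle(doc: Document, minPx = 20): Element[] {
> >     // Crude heuristic: anything rendered larger than minPx "looks like"
> >     // a heading.  A big pull-quote gets misclassified; a small styled
> >     // heading gets missed.
> >     return Array.from(doc.querySelectorAll("p, div, span")).filter(el => {
> >       return parseFloat(window.getComputedStyle(el).fontSize) >= minPx;
> >     });
> >   }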
> >
> > At some point a human needs to check the associations. For example, does
> > pink always imply feminine interest?  Often, but not always.
> > Removing ambiguities in language is hard enough; in formats and their
> > associations it is much harder.
> >
> > In SWAP we guess roles and associations, but the author can then edit
> > that knowledge.  The knowledge is captured in RDF, which is, at the end
> > of the day, just markup.
> >
> > With fuzzy guesses, I want the author to confirm how accurate the guess
> > is, not the user.  So these tools are great for helping the author add
> > semantics fast, but, when used only at the user end, they can end up
> > producing misinformation.  And that is the last thing we want.
> >
> > A working case of this happening is automatic tools that remove
> > ambiguities in Hebrew by adding diacritic marks.
> >
> > Some of them are quite good, and guess 95% of words right.
> >
> > But when they guess wrong, the user is given a correct sentence that is
> > just absolutely not what the author was saying.  Sometimes it can even be
> > the opposite.
> >
> > That's why I would like the author to use the tool, see where the
> > mistakes occur, and then correct the diacritic marks that the tool put in.
> >
> > My 2 cents
> > lisa
> >
> > .
> > > What I'm trying to illustrate is that we don't necessarily need to
> > > include textual equivalents such as 'title' or 'description' to make
> > > something accessible.  Providing we can convey the attributes that form
> > > the meaning, in an accessible form, then you've got yourself accessible
> > > content.  This idea that everything needs textual equivalents really
> > > doesn't work in practice.  Most content authors just put in meaningless
> > > information, if they put it in at all, which is why graphics still
> > > remain a problem in HTML+CSS.
> > >
> > > Ultimately, I'm just trying to suggest an unconventional way to make it
> > > accessible, which revolves around the user, and what the user can and
> > > cannot do, rather than relying on content authors to do something.
> > >
> > > Will
> > > ----- Original Message ----- 
> > > From: "david poehlman" <david.poehlman@handsontechnologeyes.com>
> > > To: "Will Pearson" <will-pearson@tiscali.co.uk>; "Lisa Seeman"
> > > <lisa@ubaccess.com>; <wai-xtech@w3.org>
> > > Cc: <oedipus@hicom.net>
> > > Sent: Wednesday, November 24, 2004 11:59 AM
> > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > >
> > >
> > > > Will and all,
> > > >
> > > > I'm not sure what you are saying.  If there are no semantics, we get
> > > > no information about the kinds of things Gregory was asking for.  If
> > > > there are semantics and we zoom to a particular station on the map,
> > > > the semantics, if they are rich enough, provide us with all the info
> > > > we need about that station, and we can even zoom in further, say to
> > > > platform A, and read the signs on it.
> > > >
> > > > Johnnie Apple Seed
> > > >
> > > > ----- Original Message ----- 
> > > > From: "Will Pearson" <will-pearson@tiscali.co.uk>
> > > > To: "david poehlman" <david.poehlman@handsontechnologeyes.com>;
"Lisa
> > > > Seeman" <lisa@ubaccess.com>; <wai-xtech@w3.org>
> > > > Cc: <oedipus@hicom.net>
> > > > Sent: Wednesday, November 24, 2004 6:42 AM
> > > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > > >
> > > >
> > > >
> > > > Yes, I agree granularity would be useful, but it depends on what
> > > > you're navigating to.  If you're navigating between container
> > > > elements, such as groups and symbols, then you have the distinction
> > > > between granular levels provided by these groupings; if you're just
> > > > navigating between graphic elements, such as <LINE>, <RECT>,
> > > > <CIRCLE>, etc., then there are no syntactical groupings.  There may
> > > > be visual groupings, and these will be exposed through revealing the
> > > > spatial relationships via spatial navigation.
> > > >
> > > > As for semantics, well, are they really necessary?  According to
> > > > psychology, meaning is something we associate with stimuli.  We
> > > > receive stimuli, such as sound, lightwaves, etc., and then group them
> > > > based on perceptual psychology rules, such as the Gestalt laws of
> > > > perception.  The final stage is to associate meaning with these
> > > > stimuli, based on what we've been conditioned to believe the
> > > > perceived stimuli represent.  So, I believe that if we can
> > > > communicate the stimuli in another, non-visual, form, then the user
> > > > can learn the meaning associated with them, just as sighted people
> > > > associate meaning with visual stimuli.
> > > >
> > > > Having said that, I wouldn't stand in the way of more semantic
> > > > information.  As we're using mainly sequential output media, such as
> > > > speech and Braille, it will probably be a slow process to communicate
> > > > all the attributes of the stimuli to the user.  There are two ways to
> > > > sort this: either the AT vendors look into multiple methods of
> > > > encoding meaning within the output channel, or we reduce the amount
> > > > of information being conveyed.  This reduction in information is
> > > > where semantics would be useful, as it would reduce the amount of
> > > > information conveyed to just the meaning, and would also reduce the
> > > > amount of cognitive activity required of the user, as they would no
> > > > longer be required to perform the association between stimuli and
> > > > meaning.
> > > >
> > > > So, semantic information isn't required in the mark-up in order for
> > > > someone to access the semantic meaning behind images, but it would
> > > > improve the usability of images.
> > > >
> > > > Will
> > > > ----- Original Message ----- 
> > > > From: "david poehlman" <david.poehlman@handsontechnologeyes.com>
> > > > To: "Lisa Seeman" <lisa@ubaccess.com>; <wai-xtech@w3.org>; "Will
> > Pearson"
> > > > <will-pearson@tiscali.co.uk>
> > > > Cc: <oedipus@hicom.net>
> > > > Sent: Wednesday, November 24, 2004 9:31 AM
> > > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > > >
> > > >
> > > > > Lisa,
> > > > >
> > > > > After thinking about this, I came to the conclusion yesterday that
> > > > > the ability to change granularity, if supportable, would be
> > > > > something that would be needed, and I agree that this might get us
> > > > > the finer details, although I hadn't thought of it in such a
> > > > > concrete fashion.  It is also important to retain the spatial
> > > > > relationships within the image, so we need to be able to move in
> > > > > multiple and varying directions as well as gather fine details,
> > > > > but, as you say, it's not supported to that level of semantic
> > > > > information.
> > > > >
> > > > > Johnnie Apple Seed
> > > > >
> > > > > ----- Original Message ----- 
> > > > > From: "Lisa Seeman" <lisa@ubaccess.com>
> > > > > To: "david poehlman" <david.poehlman@handsontechnologeyes.com>;
> > > > > <wai-xtech@w3.org>; "Will Pearson" <will-pearson@tiscali.co.uk>
> > > > > Cc: <oedipus@hicom.net>
> > > > > Sent: Wednesday, November 24, 2004 3:25 AM
> > > > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > > > >
> > > > >
> > > > >
> > > > > I spoke to Gregory briefly last night.  I think the main point of
> > > > > our chat (other than it just being good to chat to him) was that
> > > > > what is needed is an ability to switch granularity.  In other
> > > > > words, to zoom in on the details and then take a step back (whilst
> > > > > staying where you are), look around, and then see detail.
> > > > >
> > > > > Take, for example, an SVG subway map.  You want to go to station X,
> > > > > so you look at station X for details: is it accessible?  Does it
> > > > > have an accessible bathroom?  If the answer is no, then I would
> > > > > want to switch granularities and be able to navigate around the
> > > > > different stations.  When I get to a station I know is close, then
> > > > > I would want to zoom in and get information.
> > > > >
> > > > > So the proposal would be something like ctrl + arrow up switching
> > > > > granularity up.
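> > > > >
> > > > > As a rough sketch (TypeScript against the SVG DOM; the function
> > > > > names are invented), that switch could be little more than a walk
> > > > > up and down the document's <g> hierarchy:
> > > > >
> > > > >   // Treat each enclosing <g> as a coarser level of detail.
> > > > >   function coarser(current: Element): Element | null {
> > > > >     // step out, e.g. from a platform to its station
> > > > >     const parent = current.parentElement;
> > > > >     return parent && parent.localName === "g" ? parent : null;
> > > > >   }
> > > > >
> > > > >   function finer(current: Element): Element | null {
> > > > >     // step into the first contained group or graphic element
> > > > >     for (const child of Array.from(current.children)) {
> > > > >       if (child.localName !== "title" && child.localName !== "desc") {
> > > > >         return child;
> > > > >       }
> > > > >     }
> > > > >     return null;
> > > > >   }
> > > > >
> > > > > The ua would bind ctrl + arrow up/down to something like these and
> > > > > read out the <title> or <desc> of whatever it lands on.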
> > > > >
> > > > > But there is little point to that, because there is no support in
> > > > > the language for the concepts behind it.
> > > > >
> > > > > Basically it comes down to the lack of semantic information, and
> > > > > the need for identification and integrity of blocks of content -
> > > > > to know what they are intended to be, what state they have, and
> > > > > what relationships they have with other content.
> > > > >
> > > > > It is very similar to the need for content and concept zoom that I
> > > > > suggested for MathML, SVG and XHTML, where you can identify one
> > > > > concept as being part of, or a conceptual zooming in on, another
> > > > > section of content.
> > > > >
> > > > > At the moment we can do this with RDF, but it would be much easier
> > > > > to promote if the languages themselves supported it.
> > > > >
> > > > > Lisa
> > > > >
> > > > >
> > > > >
> > > > > ----- Original Message ----- 
> > > > > From: "david poehlman" <david.poehlman@handsontechnologeyes.com>
> > > > > To: "Lisa Seeman" <lisa@ubaccess.com>; <wai-xtech@w3.org>; "Will
> > > Pearson"
> > > > > <will-pearson@tiscali.co.uk>
> > > > > Cc: <oedipus@hicom.net>
> > > > > Sent: Tuesday, November 23, 2004 3:16 PM
> > > > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > > > >
> > > > >
> > > > > > Lisa,
> > > > > >
> > > > > > It is possible with anything to get lost, but it is also quite
> > > > > > possible that for people who have a good memory for spatial
> > > > > > things, such as myself and possibly Will and many others, this
> > > > > > would be a useful tool.  Where it fits in the scheme of things
> > > > > > with respect to the ua, at, or svg spec is something to be hashed
> > > > > > out, but keyboard exploration of diagrams needs to be enabled,
> > > > > > for without it we are lost.
> > > > > >
> > > > > > It would be interesting to hear Gregory's thoughts.  I do think,
> > > > > > though, that there is a good deal of research behind the
> > > > > > possibilities of this working.
> > > > > >
> > > > > > Johnnie Apple Seed
> > > > > >
> > > > > > ----- Original Message ----- 
> > > > > > From: "Lisa Seeman" <lisa@ubaccess.com>
> > > > > > To: <wai-xtech@w3.org>; "Will Pearson" <will-pearson@tiscali.co.uk>
> > > > > > Cc: <oedipus@hicom.net>
> > > > > > Sent: Tuesday, November 23, 2004 1:51 AM
> > > > > > Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2
> > > > > >
> > > > > >
> > > > > > My concern is that you would get terribly lost.
> > > > > >
> > > > > > But if anyone thinks this might be useful, and could do it, it
> > > > > > would be Gregory Rosmaita.  So I am cc'ing him.
> > > > > > I will also try and ask him.
> > > > > >
> > > > > > Keep well
> > > > > > L
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >   ----- Original Message ----- 
> > > > > >   From: Will Pearson
> > > > > >   To: wai-xtech@w3.org
> > > > > >   Sent: Monday, November 22, 2004 10:38 PM
> > > > > >   Subject: Keyboard Navigation For Document Exploration In SVG 1.2
> > > > > >
> > > > > >
> > > > > >   Hi;
> > > > > >
> > > > > >   At the moment there's no clear indication within the spec that
> > > > > > document exploration should be made available through a ua's
> > > > > > keyboard interface.  Whilst most people will be able to visually
> > > > > > explore the image, this won't be possible for some users, and may
> > > > > > not be possible for others.  Therefore, I would like to suggest
> > > > > > that some form of navigation between container elements and
> > > > > > graphic elements be recommended as a guideline for ua developers.
> > > > > > This should facilitate exploration of the document away from any
> > > > > > elements with 'focusable' set to true, or active elements with
> > > > > > 'focusable' set to auto.
> > > > > >
> > > > > >   Ideally, this would be based on spatial direction, thus
> > > > > > allowing the user to build up a mental model of the spatial
> > > > > > relationships between elements.
> > > > > >
> > > > > >   The spec already makes provision for a range of alternative
> > > > > > pointing devices, through DOM 3 I think, but I think we need
> > > > > > something a bit more granular than the pixel by pixel movement
> > > > > > typically offered by pointing devices.  The main reason for this
> > > > > > is that the HCI task analysis for moving between two points
> > > > > > requires the user to know where the pointer is in relation to the
> > > > > > target.  This can be done with speech, and there's an event in
> > > > > > JAWS to handle this, but having experimented with this on a small
> > > > > > number of users, doing the math necessary to work out the
> > > > > > relationship between pointer and target raised the cognitive
> > > > > > workload, as measured by the NASA-TLX test, quite significantly.
> > > > > >
> > > > > >   So, I propose the following eight keys to facilitate document
> > > > > > exploration within a ua:
> > > > > >
> > > > > >      I.  Up (337.5º - 22.5º)
> > > > > >     II.  Diagonally up and right (22.5º - 67.5º)
> > > > > >    III.  Right (67.5º - 112.5º)
> > > > > >     IV.  Diagonally down and right (112.5º - 157.5º)
> > > > > >      V.  Down (157.5º - 202.5º)
> > > > > >     VI.  Diagonally down and left (202.5º - 247.5º)
> > > > > >    VII.  Left (247.5º - 292.5º)
> > > > > >   VIII.  Diagonally left and up (292.5º - 337.5º)
> > > > > >
> > > > > >
> > > > > >
> > > > > >   Each of these keys will be responsible for moving to the
> > > > > > nearest element within a 45º arc, as listed above.
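> > > > > >
> > > > > >   As a rough sketch of how a ua might implement that (TypeScript
> > > > > > against the SVG DOM; the function and the key-to-arc binding are
> > > > > > illustrative, not part of the spec):
> > > > > >
> > > > > >     // Measure the angle and distance from the centre of the current
> > > > > >     // element to the centre of every candidate, keep those whose
> > > > > >     // angle falls inside the arc bound to the pressed key, and move
> > > > > >     // to the closest.  Angles are degrees clockwise from "up".
> > > > > >     interface Arc { from: number; to: number; }
> > > > > >
> > > > > >     function centre(el: SVGGraphicsElement): { x: number; y: number } {
> > > > > >       const b = el.getBBox();
> > > > > >       return { x: b.x + b.width / 2, y: b.y + b.height / 2 };
> > > > > >     }
> > > > > >
> > > > > >     function nearestInArc(current: SVGGraphicsElement,
> > > > > >                           candidates: SVGGraphicsElement[],
> > > > > >                           arc: Arc): SVGGraphicsElement | null {
> > > > > >       const here = centre(current);
> > > > > >       let best: SVGGraphicsElement | null = null;
> > > > > >       let bestDist = Infinity;
> > > > > >       for (const el of candidates) {
> > > > > >         if (el === current) continue;
> > > > > >         const dx = centre(el).x - here.x;
> > > > > >         const dy = centre(el).y - here.y;
> > > > > >         let angle = (Math.atan2(dx, -dy) * 180) / Math.PI; // 0º = up
> > > > > >         if (angle < 0) angle += 360;
> > > > > >         const inArc = arc.from < arc.to
> > > > > >           ? angle >= arc.from && angle < arc.to
> > > > > >           : angle >= arc.from || angle < arc.to;  // arcs crossing 0º
> > > > > >         const dist = Math.hypot(dx, dy);
> > > > > >         if (inArc && dist < bestDist) { bestDist = dist; best = el; }
> > > > > >       }
> > > > > >       return best;
> > > > > >     }
> > > > > >
> > > > > >     // e.g. key I ("up"): nearestInArc(current, all, { from: 337.5, to: 22.5 })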
> > > > > >
> > > > > >   Will
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

Received on Sunday, 28 November 2004 15:48:52 UTC