- From: Jonathan Chetwynd <j.chetwynd@btinternet.com>
- Date: Fri, 30 May 2003 06:57:17 +0100
- To: saylordj@WellsFargo.COM
- Cc: w3c-wai-eo@w3.org
Doyle,

Should we invest in a couple of pairs of eyetops? http://www.eyetop.net
Not the most accessible site: with javascript disabled you get nothing.

Rave review in yesterday's UK Guardian:
http://www.guardian.co.uk/online/story/0,3605,965531,00.html

Jonathan

On Wednesday, May 28, 2003, at 05:57 pm, saylordj@WellsFargo.COM wrote:

> Hello Jonathan,
>
> Well, I'm strongly in agreement that we need to consider images in new ways.
>
> You write,
> Sadly peepo is very nearly a lone voice in the area of providing a W3C accessible virtual space, even though in a very limited sense. However there are many excellent VRML, flash and other attempts. Accessible SVG is also an extremely rare commodity :-(
> To resolve the red bus problem will also require excellent and transparent authoring tools.
>
> ...
>
> People with SLD simply don't generalise or abstract in the way described; one might go so far as to say that this is one definition of an LD. Naturally, if we could find an abstract pointer the issue would be resolved, but we can't. So this leads to much confusion, little of which is resolved by text-based discussion.
>
> Doyle,
> This remark reminds me that with visual dyslexia some people can have a great deal of trouble seeing depth in the landscape, so the integrative part of seeing 'wholeness' is a difficulty for them.
>
> Jonathan,
> It would be great if you could illustrate some of your discussion.
>
> Doyle,
> I shot a video of walking on a hillside with a dyslexic person. They had trouble navigating the hillside compared to me, and had to stare at the ground as we walked. I think this gives us a variety of entry points to consider accessibility and imagery.
>
> If we constructed something online, one could carry a laptop into the fields where a disabled person might go for a walk, and make that usable for the dyslexic person outdoors.
>
> I'll describe what I think are important features of images used in the real world. We need to do the equivalent of googling the frames of images so we can pull up a relevant section of images when needed while walking. That also requires not just geographic notation that identifies a spot, but also the orientation of the body in space.
>
> One can project a movie onto the landscape, but in many ways putting a movie directly on top of the world masks what is in the world. So I would prefer to have a standard means of attaching a movie to the landscape to the side of what I am looking at, or to be able to see through an image into the real landscape, so that the ambiguity between landscape and image does not confuse me.
>
> Another way I could illustrate this would be to take a person blind from birth, who hasn't learned certain visual orientations in the world, put them into a parking lot (a common trial for blind people trying to navigate in the world, but a learning disabled person faces similar problems), and make websites for them to navigate the parking lot interactively. What I am trying to learn there is how a blind person learns the landscape without experience of seeing landscapes, and how that would translate into using an image-based website that could be accessible in that sort of way.
>
> Another way I would consider making illustrations is with eye tracking devices, which can give a better sense of the attention structure an individual brings to a given set of data.
> When one is in the world with, for example, some form of attention deficit, the imagery alone is not enough; the question is how that imagery can be used by the person.
>
> So I think there are several areas to explore this way:
>
> 1. Linking the imagery to a specific space.
>
> 2. Addressing the ambiguity between the image and real space.
>
> 3. Finding the imagery when needed in real time.
>
> 4. Customizing imagery for the attention structure of a given person.
>
> All of this could be put online as a starting place for doing further work.
>
> Doyle
>
> Doyle Saylor
> Business Systems Consultant
> Intranet Hosting Services
> Wells Fargo Services Corporation
>
> -----Original Message-----
> From: Jonathan Chetwynd [mailto:j.chetwynd@btinternet.com]
> Sent: Wednesday, May 28, 2003 2:32 AM
> To: saylordj@WellsFargo.COM
> Cc: w3c-wai-eo@w3.org
> Subject: Re: Cognitive Disabilities
>
> Doyle,
>
> Whilst this snippet* is still text, it has the additional benefit that it is rendered as a specific representation that may be tested.
> It would be great if you could illustrate some of your discussion.
>
> Doyle wrote:
> We don't want an icon, in my view, to look like an object; we want an icon to stick to a sparrow in such a way that it touches all birds and we know that. That is what written words do: they stick to an object they don't resemble. That resolves what you referred to in the example of the red bus and green bus, the connectedness issue between expressions, such that the green and the red refer to the same bus.
>
> Unfortunately this is the nub of the issue, and it is far from resolution. People with SLD simply don't generalise or abstract in the way described; one might go so far as to say that this is one definition of an LD. Naturally, if we could find an abstract pointer the issue would be resolved, but we can't. So this leads to much confusion, little of which is resolved by text-based discussion.
>
> Doyle wrote: We want to build in a connection process.
>
> Reality fortunately has many useful pointers, loo signs for instance. However these are currently missing in virtuality, and so for the present it is necessary to augment virtuality before it will help augment reality: the HUD has to highlight the pylon before the pilot can avoid it.
>
> Sadly peepo is very nearly a lone voice in the area of providing a W3C accessible virtual space, even though in a very limited sense. However there are many excellent VRML, flash and other attempts. Accessible SVG is also an extremely rare commodity :-(
> To resolve the red bus problem will also require excellent and transparent authoring tools.
>
> Perhaps we can use your expertise to create some other useful visual examples with SVG.
>
> Jonathan
>
> *Below; please note I've had problems rotating as well as translating, if anyone has a better one, thanks:
>
> <?xml version="1.0" encoding="iso-8859-1"?>
> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20001102//EN"
>   "http://www.w3.org/TR/2000/CR-SVG-20001102/DTD/svg-20001102.dtd">
> <svg xmlns="http://www.w3.org/2000/svg">
>   <title>circleanimation.svg</title>
>   <!-- blue outlined circle whose centre slides from x=0 to x=1400 every 3 seconds;
>        onrepeat expects a script function "advance" defined elsewhere -->
>   <circle id="circle2" style="stroke-width:5;stroke:blue;fill:none"
>           cx="60" cy="60" r="60">
>     <animate attributeName="cx" values="0;1400" dur="3s"
>              repeatCount="indefinite" onrepeat="advance(evt)"/>
>   </circle>
> </svg>
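A note on the snippet above: one possible way to get both translation and rotation is two animateTransform elements on the same shape, with additive="sum" on the second so the rotation is combined with the translation rather than replacing it. This is only a rough, untested sketch; a square is used instead of a circle purely so the rotation is visible, and it assumes a viewer that supports SVG/SMIL animateTransform:

<?xml version="1.0" encoding="iso-8859-1"?>
<svg xmlns="http://www.w3.org/2000/svg" width="1400" height="200">
  <title>rotateandtranslate.svg</title>
  <!-- square centred on its local origin, so it spins about its own centre -->
  <rect id="box" x="-40" y="-40" width="80" height="80"
        style="stroke-width:5;stroke:blue;fill:none">
    <!-- slide the shape across the canvas -->
    <animateTransform attributeName="transform" type="translate"
                      values="60 100; 1340 100" dur="3s" repeatCount="indefinite"/>
    <!-- spin it at the same time; additive="sum" keeps the translation -->
    <animateTransform attributeName="transform" type="rotate"
                      from="0" to="360" dur="3s" additive="sum"
                      repeatCount="indefinite"/>
  </rect>
</svg>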
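On the red bus and green bus discussion above, a small sketch of how SVG can stick the same word to two pictures that do not look alike: each group carries a title of "bus", the text an assistive tool could present, while the desc records the difference in colour. The shapes and file name here are made up purely for illustration:

<?xml version="1.0" encoding="iso-8859-1"?>
<svg xmlns="http://www.w3.org/2000/svg" width="320" height="120">
  <title>twobuses.svg</title>
  <desc>Two bus symbols, one red and one green, both labelled with the same word.</desc>
  <!-- the red bus: the title is the word that sticks to the object -->
  <g id="redBus">
    <title>bus</title>
    <desc>a red bus</desc>
    <rect x="10" y="40" width="120" height="50" fill="red"/>
    <circle cx="35" cy="95" r="10" fill="black"/>
    <circle cx="105" cy="95" r="10" fill="black"/>
  </g>
  <!-- the green bus: a different picture, the same word -->
  <g id="greenBus">
    <title>bus</title>
    <desc>a green bus</desc>
    <rect x="180" y="40" width="120" height="50" fill="green"/>
    <circle cx="205" cy="95" r="10" fill="black"/>
    <circle cx="275" cy="95" r="10" fill="black"/>
  </g>
</svg>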
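And on Doyle's point about linking imagery to a specific spot and to the orientation of the body in space, one way to record that alongside a frame is an SVG metadata block. The namespace, property names, coordinates and file name below are all invented for the sake of this sketch; real work would need an agreed vocabulary:

<?xml version="1.0" encoding="iso-8859-1"?>
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" width="640" height="480">
  <title>hillsideframe.svg</title>
  <desc>One frame from the hillside walk, tagged with where it was taken and which way the walker was facing.</desc>
  <metadata>
    <!-- "walk" is a made-up namespace used only for this sketch -->
    <walk:frame xmlns:walk="http://example.org/2003/walk#">
      <walk:latitude>51.4613</walk:latitude>
      <walk:longitude>-0.1156</walk:longitude>
      <!-- heading in degrees clockwise from north: which way the walker faces -->
      <walk:heading>215</walk:heading>
      <walk:time>2003-05-28T17:57:00+01:00</walk:time>
    </walk:frame>
  </metadata>
  <!-- the photograph itself; the file name is hypothetical -->
  <image x="0" y="0" width="640" height="480" xlink:href="hillside042.jpg"/>
</svg>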
Received on Friday, 30 May 2003 01:53:52 UTC