- From: Charles Pritchard <chuck@jumis.com>
- Date: Thu, 28 Jul 2011 21:09:44 -0700
- To: David Singer <singer@apple.com>
- CC: "public-canvas-api@w3.org" <public-canvas-api@w3.org>
In reading my last reply -- I couldn't very well. I did try to address the thought experiment you were carrying out.

With all sincerity, I would, with you, develop an interface in which a non-sighted person could carry out tasks with a robotic instrument, surveying a bio-reactor and implementing an experiment. In making something available to the broadest range of people, I would require that a canvas interface be available. There is nothing in your description, nor in my understanding of the experiment, that leads me to believe such an interface could not be accomplished with current hardware, operating systems, and associated technologies.

You've posed a question: how does a user select an instrument to draw fluids, and push those fluids into another container? How do we capture the content of that interaction, for science or other human means? Look at WCAG, and you will find the requirements for such an experiment.

Prescriptive linguistics have limitations; that's the unfortunate situation we face when someone rebuts existing applications, such as mine, asks "what are the use cases", and responds "you are doing it wrong".

Again, if you want to develop bio-genetic interfaces with me, I am open to that. I think we can do it! I know that -every- visual description posed can be translated for another human, in non-visual terms, in as much as is necessary to make science happen.

-Charles
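P.S. To make the claim concrete, here is a minimal sketch of what I mean by a canvas interface being "available" -- the element names, volumes, and handlers are hypothetical, and the sketch only assumes the canvas fallback content and standard ARIA that exist today:

  <canvas id="reactor" width="600" height="400">
    <!-- Fallback sub-DOM: assistive technology reads and operates these controls -->
    <button id="pipette">Select pipette</button>
    <button id="dispense">Dispense 5 ml into container B</button>
    <p id="status" role="status" aria-live="polite">Container A: 20 ml. Container B: empty.</p>
  </canvas>
  <script>
    // The same action drives both the visual drawing and the textual status,
    // so a non-sighted user gets the same information and control as a sighted one.
    var status = document.getElementById('status');
    document.getElementById('dispense').addEventListener('click', function () {
      status.textContent = 'Container A: 15 ml. Container B: 5 ml.';
      // ...redraw the fluid levels on the canvas 2d context here...
    });
  </script>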
Received on Friday, 29 July 2011 04:10:20 UTC