- From: Denis Anson <danson@miseri.edu>
- Date: Wed, 30 Dec 1998 22:03:59 -0500
- To: "Jon Gunderson" <jongund@staff.uiuc.edu>, <w3c-wai-ua@w3.org>
In a recent message, it was intimated that IE would be in the category of technologies that render text as well as video. But I think I now understand that IE does not, in fact, render video. An embedded object (e.g. RealPlayer, Microsoft's player, or QuickTime) plays the video. However, a very real problem arises here. We want the user to be able to use keyboard commands to control frame rate, to pause, and so on. We also want the host program (such as IE) to have keyboard navigation commands. If IE doesn't know that the player is there, how does it know to pass keyboard commands to the player? Does there have to be some sort of command hierarchy, à la HyperCard, where unused keyboard events are passed down (or bubble up)? (A rough sketch of such a chain follows at the end of this message.) Hmmm....

As I understood the general discussion at the F2F, any agent that renders any type of material is responsible for making that material accessible. Thus, a small-screen display (such as my Palm III), when connected to the web, might render text in a different fashion than a large-screen display, but since both are rendering text, both must make it accessible. An audio UA, whether designed for use by a blind user or over a telephone, would still have to make the text that it renders accessible, as per our guidelines. Any agent that renders video would have to make that accessible. If the video were being rendered in an audio browser, it might be providing visual descriptive text. That would be rendering the video. The only way to avoid the responsibility for accessible content would be to ignore it entirely.

This suggests a general rule of thumb: if a UA makes a type of content accessible to one group, it must, as far as possible, make it accessible to all groups, either natively or via third-party AT. Does this rule make sense to anyone other than me?

Denis Anson
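P.S. A minimal sketch of the kind of command hierarchy I have in mind, in the spirit of HyperCard's message passing. The interface and class names here are purely illustrative assumptions, not the actual IE or player APIs: the host offers a keystroke to each embedded object it hosts, and anything the embedded object declines falls through to the host's own navigation commands.

```typescript
// Hypothetical responder-chain sketch; names are illustrative only.
interface KeyHandler {
  // Return true if the key was consumed, false to let it pass along.
  handleKey(key: string): boolean;
}

class EmbeddedPlayer implements KeyHandler {
  handleKey(key: string): boolean {
    switch (key) {
      case " ":          // pause / resume playback
      case "ArrowRight": // step forward
      case "ArrowLeft":  // step back
        console.log(`player handled "${key}"`);
        return true;
      default:
        return false;    // not a player command; let the host try it
    }
  }
}

class HostBrowser implements KeyHandler {
  constructor(private embedded: KeyHandler[]) {}

  handleKey(key: string): boolean {
    // Offer the keystroke to embedded objects first ("passed down");
    // unclaimed keystrokes "bubble up" to the host's navigation commands.
    for (const child of this.embedded) {
      if (child.handleKey(key)) return true;
    }
    console.log(`host handled "${key}" as a navigation command`);
    return true;
  }
}

// Usage: the space bar reaches the player, Tab falls through to the host.
const host = new HostBrowser([new EmbeddedPlayer()]);
host.handleKey(" ");
host.handleKey("Tab");
```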
Received on Wednesday, 30 December 1998 22:07:11 UTC