last part of today's telecon

Here are the results of the beginning of our review of the guidelines.



We began by reviewing the guidelines one at a time to determine whether
or not:

1.	they met the “80% or better” (80%+) objectivity criterion

2.	the success criteria were sufficient (that is, there was
nothing else needed besides the success criteria)

3.	the criteria were necessary (i.e., that all of the items listed
under the criteria had to be done in order for the checkpoint to be met)



We then also checked to see whether or not the success criteria were:

1.	machine testable
2.	human testable

We also recorded any other notes that came up in the process.

For guideline number 1, “Provide a text equivalent for all non-text content”, we
found:
•	We believed it would pass the “80%+” objectivity test
•	The criteria appeared to be sufficient
•	The criteria appeared to be necessary
•	Criterion 1 was machine testable
•	Criteria 2 and 3 would be human testable.

We also felt that there was a need to combine 2 and 3, since they really
form an "or" statement: you needed to do either 2 or 3, not both.

We also noted that 3 needed to be cleaned up.  Currently, it says that
if you can't provide proper ALT TEXT, you need to use a label.  This
sounds like a different construct from ALT TEXT when, in fact, it merely
means that for something like a picture of the Mona Lisa or a symphony,
the ALT TEXT would simply have to name the object rather than try to
fulfill the same function as, say, the playing of the symphony.
It was also felt that an example of what was intended (e.g., a picture of
the Mona Lisa or a sound file of a symphony) should be included so that
people had some idea of what was being referred to.
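
For reference, a minimal sketch of what "naming the object" could look
like in ordinary HTML IMG markup (the file name here is hypothetical):

   <!-- The ALT text simply names the object; it does not try to
        serve the same function as viewing the painting itself. -->
   <img src="mona-lisa.jpg" alt="Painting: the Mona Lisa, by Leonardo da Vinci">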


For guideline number 2, “Provide synchronized media equivalents for
time-dependent presentations”, we found (a rough markup sketch of such
an equivalent appears after the list below):

•	We believed items 1 and 2 would pass the 80%+ objectivity test

•	Item 3 needs work and perhaps should apply only to non-live
presentations, since in live presentations the tolerance must be much
broader to account for different situations.  In a movie, one would want
the captions to follow the dialogue very closely.  In a live event with
scientific content, it may be more important that the captions be
checked and corrected before being passed on, so that scientific terms,
etc., are not hopelessly scrambled.

•	The first sentence in item 4 is a statement, not a checkpoint.
The second sentence seems problematic.  Does it mean that all webcams
would have to be removed from sites unless someone was available to
provide a running narration of the webcam?  (e.g., webcams that show a
view out a window, webcams pointed at a building to show its ongoing
construction, webcams pointed at a coffeepot, etc.)  What if a webcam is
used to provide a view to people inside a building in the same way that
a window provides a view for others?  Would it need to be described in
real time, or would it be sufficient to say that it was a view out the
west side of the building?
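
For reference, one way a synchronized caption track can be attached to a
recorded presentation, sketched here in current HTML video/track markup
purely as an illustration (the file names are hypothetical):

   <!-- The caption file is time-stamped so that each caption is
        displayed in sync with the corresponding dialogue. -->
   <video src="lecture.mp4" controls>
     <track kind="captions" src="lecture-captions.vtt" srclang="en" label="English captions">
   </video>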


-- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Human Factors 
Dept of Ind. Engr. - U of Wis. 
Director - Trace R & D Center 
Gv@trace.wisc.edu, <http://trace.wisc.edu/> 
FAX 608/262-8848  
For a list of our listserves send “lists” to listproc@trace.wisc.edu
