Silver XR: Functional outcome Technique/test for Captions and Meta Data

Hi Jeanne and all,

Here is my draft technique and test for text descriptions/meta data. I 
agree with Charles's point that having this (or a similar) simple 
mechanism would give us something practical to use - and also with 
Bruce's point that we run the risk of designing the 'meta-data' 
ourselves. So IMO we need to define what meta data we are talking about.

F0# Captions Technique: Provide support for text descriptions of sound 
effects

These could be marked up as audio descriptions and potentially passed 
as arguments for transformation into symbols or other alternate 
formats (a parsing sketch follows the example below):

<example>

<scene> Humphrey Bogart sits at a desk and the phone rings </scene>

<scene> [Ringing phone]</scene>

NOTE: The use of square brackets denotes this text as an audio 
description of the action in the scene.

<scene> Humphrey Bogart answers the phone.</scene>

<scene> [Lauren Bacall says: You're making dinner tonight]</scene>

<scene>Humphrey Bogart says: No way, someone's gotta pay the bills</scene>

<scene> [Gun shot]</scene>

<scene> Humphrey drops phone from hand</scene>

<scene> [Lauren Bacall says: No dinner tonight for you, my dear.]</scene>

</example>
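
To make the transformation idea concrete, here is a minimal Python 
sketch of how the bracketed descriptions could be pulled out of markup 
like the example above and handed on to whatever does the 
symbol/alternate-format conversion. The <scene> element and the 
square-bracket convention come from the example; the function name and 
the regex approach are just illustrative assumptions, not a proposal 
for the actual format.

import re

# Matches the text content of each <scene> element.
SCENE_RE = re.compile(r"<scene>\s*(.*?)\s*</scene>", re.DOTALL)
# Matches a scene whose whole text is a square-bracketed description.
DESCRIPTION_RE = re.compile(r"^\[(.*)\]$", re.DOTALL)

def extract_descriptions(markup):
    """Return the text of every bracketed description in the markup."""
    descriptions = []
    for scene_text in SCENE_RE.findall(markup):
        match = DESCRIPTION_RE.match(scene_text)
        if match:
            descriptions.append(match.group(1).strip())
    return descriptions

# Example:
#   extract_descriptions("<scene> [Ringing phone]</scene>")
#   -> ['Ringing phone']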

F0# Captions Test:

#1: There exists a text description of sound effects.
#2: These text descriptions are marked up using square brackets.
#3: If #1 and #2 are true, the content passes.
etc
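
If we adopt the square-bracket convention, tests #1 and #2 could even 
be checked automatically. A rough sketch, reusing the hypothetical 
extract_descriptions() from above:

def passes_captions_test(markup):
    # #1 and #2 are checked together here, because a description only
    # counts for this check if it uses the square-bracket markup.
    # #3: the content passes when at least one such description exists.
    return len(extract_descriptions(markup)) > 0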

We could also add a new Outcome:

Outcome 2: We need text descriptions of sound effects.
Outcome 3: We need more advanced meta data descriptions of sound 
effects. [1]

HTH

Josh

[1] 
https://w3c.github.io/silver/subgroups/xr/captioning/functional-outcomes.html 


-- 
Emerging Web Technology Specialist/Accessibility (WAI/W3C)
