[webvtt] WebVTT Subtitle Video "Multiple Annotations", Customized language, and Multiple Events (#486)

njss has just created a new issue for https://github.com/w3c/webvtt:

== WebVTT Subtitle Video "Multiple Annotations", Customized language, and Multiple Events ==
I am deeply interested in using VTT subtitles to annotate videos from experiments.
The main functionalities I need are:

- The possibility of defining a "new" custom language in addition to the standard set (English, Portuguese, ...). The reason is that I may need to code my annotations and later translate them, on the fly, from a database into a standard descriptive subtitle language.
- The need to encode multiple concurrent actions (for example, to describe several body movements being performed simultaneously by the actors in the video).
- The need to play these coded annotations (subtitles), generating the real subtitle text in different languages on the fly, and to capture events from the multiple body-part annotations in real time, so that I can associate those events (and coded annotations) with the sensor data corresponding to the annotated events.
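On the concurrent-actions point, WebVTT already allows cues whose time ranges overlap, so each ongoing movement can live in its own cue. A minimal sketch (the ARM_L:RAISE / HEAD:TURN_R codes are made-up placeholders for whatever coding scheme is used):

```vtt
WEBVTT

00:00:05.000 --> 00:00:12.000
ARM_L:RAISE

00:00:07.000 --> 00:00:10.000
HEAD:TURN_R
```

On the custom-language point, the `srclang` attribute of the HTML `<track>` element takes a BCP 47 language tag, and BCP 47 reserves private-use tags beginning with "x-" (e.g. `srclang="x-annot"`); also, a `kind="metadata"` track is never rendered by the browser, which suits coded annotations that are meant to be decoded in script rather than displayed.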

I did a small test by creating a VTT player in HTML5 and capturing the events, and it seems to work. However, I would like to be sure of the best way to address this use case (for which I strongly believe VTT could make a great contribution, for example to develop a learning system that also makes use of human-behaviour sensing in real time).
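The on-the-fly translation step could be sketched as below. This is only an illustration under my own assumptions: the code scheme, the `DICTIONARY` table, and the `showSubtitle` / `logSensorSync` helpers are made up, and the cue-event wiring is shown only as a comment because the `TextTrack` API exists only in the browser.

```javascript
// Hypothetical decoder: maps coded annotation cue text (e.g. "ARM_L:RAISE")
// to human-readable subtitle text for a chosen display language.
// The coding scheme and dictionary below are illustrative assumptions.
const DICTIONARY = {
  en: { "ARM_L:RAISE": "Left arm raised", "HEAD:TURN_R": "Head turns right" },
  pt: { "ARM_L:RAISE": "Braço esquerdo levantado", "HEAD:TURN_R": "Cabeça vira à direita" },
};

function decodeAnnotation(codedText, lang) {
  const table = DICTIONARY[lang] || {};
  // A cue may carry several concurrent codes, one per line;
  // unknown codes are passed through unchanged.
  return codedText
    .split("\n")
    .map((code) => table[code.trim()] || code)
    .join("\n");
}

// In a browser, this could be wired to a metadata track's cue events:
//   track.addEventListener("cuechange", () => {
//     for (const cue of track.activeCues) {
//       showSubtitle(decodeAnnotation(cue.text, userLang));
//       logSensorSync(cue.startTime, cue.text); // align with sensor timestamps
//     }
//   });

console.log(decodeAnnotation("ARM_L:RAISE\nHEAD:TURN_R", "en"));
// → "Left arm raised\nHead turns right"
```

Because `activeCues` reflects all cues whose time ranges currently overlap, the same handler would naturally report multiple ongoing body movements at once.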

Apologies if this is not the best place to ask these questions.
Any comments will be highly appreciated!

Thank you,

Nelson


Please view or discuss this issue at https://github.com/w3c/webvtt/issues/486 using your GitHub account

Received on Tuesday, 16 June 2020 09:23:09 UTC