
Re: WebVTT+JS and WebVTT+RDF

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Sun, 27 Jun 2021 09:18:43 +1000
Message-ID: <CAHp8n2kxMfR7UCc0nOoU_zPcVbjtr_xm5Emo3SsKQEEKaLUxoQ@mail.gmail.com>
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: public-texttracks@w3.org, semantic-web@w3.org
Hi Adam,

So are your use cases within a web context or outside of it?

Cheers,
Silvia.


On Sun, Jun 27, 2021, 5:19 AM Adam Sobieski <adamsobieski@hotmail.com>
wrote:

> Silvia,
>
>
>
> Envisioned use cases for WebVTT+JS include interactive video. I am
> interested in educational scenarios for interactive video.
>
>
>
> With new open standards for interactive video, interactive videos would be
> readily authored, self-contained, portable, secure, accessible,
> interoperable, readily analyzed, and readily indexed and searched.
>
>
>
> A solution for interactive video involves placing JavaScript scripts
> and/or WebAssembly modules in video containers. This line of thinking
> resulted in the idea of putting JavaScript lambda expressions in WebVTT
> tracks.
>
>
>
> As for WebVTT+RDF, new open standards could be useful for providing
> extensible dynamic metadata for biomedical, scientific, and industrial
> sensors and devices (e.g., digital microscopes).
>
>
>
> Expanding upon these WebVTT+RDF ideas, we might also consider “graph
> video” concepts, which would add keyframes, also known as “intra-frames”,
> to facilitate efficient seeking through videos. That is, instead of an
> entire RDF graph at the beginning of a video track followed by “RDF diffs”
> or “RDF deltas” throughout the remainder of the track, a video track could
> periodically provide entire RDF graphs at keyframes and provide
> storage-efficient “RDF diffs” or “RDF deltas” between them. One could then
> seek efficiently through recorded videos while retaining access to
> instantaneous metadata.
>
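> To make the keyframe idea concrete, the following sketch (an illustrative
> data model and function, not a proposed format) seeks by finding the last
> keyframe at or before the target time and replaying only the deltas from
> there:

```javascript
// Sketch of the keyframe-plus-delta seeking idea. RDF triples are modeled
// as plain strings and the cue structure is an illustrative assumption,
// not part of WebVTT or any standard.
function graphAt(time, cues) {
  // Find the index of the last keyframe at or before the requested time.
  let start = 0;
  for (let i = 0; i < cues.length; i++) {
    if (cues[i].time > time) break;
    if (cues[i].kind === 'keyframe') start = i;
  }
  // Replay from that keyframe instead of from the start of the track.
  const graph = new Set();
  for (let i = start; i < cues.length && cues[i].time <= time; i++) {
    const cue = cues[i];
    if (cue.kind === 'keyframe') {
      graph.clear();
      for (const t of cue.triples) graph.add(t);
    } else {
      for (const t of cue.removed) graph.delete(t);
      for (const t of cue.added) graph.add(t);
    }
  }
  return graph;
}

// Hypothetical track: a keyframe, a delta, then another keyframe.
const cues = [
  { time: 0, kind: 'keyframe', triples: [':scope :magnification "40x"'] },
  { time: 5, kind: 'delta',
    removed: [':scope :magnification "40x"'],
    added: [':scope :magnification "100x"'] },
  { time: 10, kind: 'keyframe', triples: [':scope :magnification "100x"'] },
];
```

> With keyframes every few seconds, a seek replays at most one full graph
> plus the deltas inside that interval, rather than the whole track.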
>
>
> For context, these WebVTT+RDF ideas arose while I am in the midst of
> proposing some standards work to MPAI (
> https://mpai.community/standards/mpai-mcs/) to ensure that live streams
> and recordings from biomedical, scientific, and industrial sensors and
> devices can be utilized in mixed-reality collaborative spaces (such as
> applications built using Microsoft Mesh). Interoperability with machine
> learning and computer vision technologies is also being considered.
>
>
>
>
>
> Best regards,
>
> Adam
>
>
>
> *From: *Silvia Pfeiffer <silviapfeiffer1@gmail.com>
> *Sent: *Friday, June 25, 2021 9:00 PM
> *To: *Adam Sobieski <adamsobieski@hotmail.com>
> *Cc: *public-texttracks@w3.org; semantic-web@w3.org
> *Subject: *Re: WebVTT+JS and WebVTT+RDF
>
>
>
> Hi Adam,
>
>
>
> WebVTT has been built to be flexible for this kind of time-aligned data,
> so you should be able to use it for that.
>
>
>
> What are the use cases behind this? What is your motivation? Are you
> suggesting new standards be developed?
>
>
>
> For example, cue-enter and cue-exit JavaScript is already possible on a
> web page; no new standards are necessary.
>
>
>
> Is the microscope use case big enough to create a standard for, or is it
> just for a research piece or a company's proprietary solution?
>
>
>
> Cheers,
>
> Silvia.
>
>
>
>
>
> On Sat, Jun 26, 2021, 5:26 AM Adam Sobieski <adamsobieski@hotmail.com>
> wrote:
>
> Semantic Web Interest Group,
>
> Web Media Text Tracks Community Group,
>
>
>
> Hello. I would like to share some thoughts on WebVTT+JS and WebVTT+RDF.
>
>
> Timed Lambda Expressions (WebVTT+JS)
>
>
>
> The following syntax example shows a way of embedding JavaScript in WebVTT
> tracks. The example provides two lambda functions for a cue, one to be
> called when the cue is entered and the other to be called when the cue is
> exited.
>
>
>
> 05:10:00.000 --> 05:12:15.000
> enter:()=>{...}
> exit:()=>{...}
>
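> On a web page, such cue payloads could be read from a metadata text track
> and the lambda expressions turned into callable handlers. A minimal
> sketch, in which the parsing helper is a hypothetical illustration and
> new Function() stands in for whatever sandboxed evaluation a real, secure
> implementation would require:

```javascript
// Sketch only: a minimal parser for the "enter:"/"exit:" cue payload shown
// above. The payload syntax and this helper are illustrative assumptions,
// not part of WebVTT or any standard.
function parseCuePayload(payload) {
  const handlers = {};
  for (const line of payload.split('\n')) {
    const match = line.match(/^(enter|exit):(.*)$/);
    if (match) {
      // Turn the lambda-expression text into a callable function.
      handlers[match[1]] = new Function('return (' + match[2] + ')')();
    }
  }
  return handlers;
}

// A cue payload in the format of the example above.
const handlers = parseCuePayload('enter:()=>"cue entered"\nexit:()=>"cue exited"');
```

> In a browser, the resulting handlers would then be attached to the
> existing TextTrackCue enter and exit events.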
>
> Dynamic Graphs (WebVTT+RDF)
>
>
>
> An example scenario for dynamic metadata is that of live streams and
> recordings from digital microscopes. In this scenario, the dynamic
> metadata includes, but is not limited to, an instantaneous magnification
> scale and an instantaneous time scale. Such metadata about these live
> streams and recordings would be desirable to have, including for machine
> learning and computer vision algorithms.
>
>
>
> “RDF diffs” [1], or “RDF deltas” [1], could be utilized with WebVTT for
> representing timed changes to semantic graphs, and such approaches could
> be useful for representing extensible and dynamic metadata about live
> streams and recordings from biomedical, scientific, and industrial
> sensors and devices.
>
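> As an illustration of the idea (the cue payload syntax below is
> hypothetical: Turtle for a full graph, followed by a SPARQL-Update-like
> delta, and is not a proposed standard), such a metadata track might look
> like:

```
WEBVTT

00:00:00.000 --> 00:00:05.000
@prefix ex: <http://example.org/microscope#> .
ex:scope ex:magnification "40x" ; ex:timeScale "1.0" .

00:00:05.000 --> 00:00:10.000
DELETE { ex:scope ex:magnification "40x" }
INSERT { ex:scope ex:magnification "100x" }
```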
>
>
>
>
> Best regards,
>
> Adam Sobieski
>
> http://www.phoster.com
>
>
>
> *References*
>
> [1] https://www.w3.org/2012/ldp/wiki/LDP_PATCH_Proposals
>
>
>
> *See also*
>
> https://github.com/pchampin/linkedvtt
>
> https://github.com/afs/rdf-delta
>
>
>
>
>
Received on Saturday, 26 June 2021 23:19:09 UTC

This archive was generated by hypermail 2.4.0 : Tuesday, 5 July 2022 08:46:09 UTC