

From: Adam Sobieski <adamsobieski@hotmail.com>
Date: Tue, 29 Jun 2021 05:00:34 +0000
To: Pierre-Antoine Champin <pierre-antoine@champin.net>, "public-texttracks@w3.org" <public-texttracks@w3.org>, "semantic-web@w3.org" <semantic-web@w3.org>
Message-ID: <CH2PR12MB418460E9B1D18BAA96392632C5029@CH2PR12MB4184.namprd12.prod.outlook.com>

Thank you for the interesting publication.

It could be that semantics about video contents and any corresponding “diffs” or “deltas” are efficiently represented in JSON-LD.
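For concreteness, here is one way such a "delta" might be phrased as JSON-LD. This is an illustrative sketch only: the `ex:insert`/`ex:delete` terms, the vocabulary URI, and the values are all invented here, not drawn from any existing vocabulary.

```javascript
// Hypothetical JSON-LD shape for a timed graph delta; every term under the
// "ex" prefix is an invented placeholder, not part of an existing vocabulary.
const cueDelta = {
  "@context": { "ex": "http://example.org/vocab#" },
  "ex:insert": [{ "@id": "ex:scope", "ex:magnification": 100 }],
  "ex:delete": [{ "@id": "ex:scope", "ex:magnification": 40 }]
};

// Being plain JSON, the payload round-trips through a text cue unchanged.
const roundTripped = JSON.parse(JSON.stringify(cueDelta));
```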

The publication that you shared discusses video annotation by users and editor software for them; the approaches that I am considering appear to need computer-vision algorithms in the loop, e.g., image and video segmentation. Instead of spatial fragments (rectangles), I am considering arbitrarily shaped, URI-addressable silhouettes (e.g., in secondary video tracks).

The solutions presented in the publication that you shared work with today’s video formats. The solutions that I am describing could require a new video format.

While I am still brainstorming about compelling use cases and examples, one example involves an interactive video of an automobile engine, under the hood. Envision a secondary video track containing a colored silhouette for each part of the engine. While the primary video track would be visible to the user, the secondary video track(s) would let the user hover over and click on the parts of the engine with their mouse.

The silhouettes would be URI-addressable for purposes including semantic tracks. In the example, a semantic track could describe the automobile engine and its parts.
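A minimal sketch of the hit-testing side, assuming each silhouette is flat-colored and the player samples the pixel under the pointer from the secondary track. All colors and URIs below are invented for illustration.

```javascript
// Map a color sampled from the secondary "silhouette" track to the URI of an
// engine part. In a real player, (r, g, b) would come from drawing the
// silhouette frame to a canvas and reading the pixel under the mouse; here
// only the color-to-URI mapping is shown. All keys and URIs are illustrative.
const PART_BY_COLOR = new Map([
  ["255,0,0", "http://example.org/engine#alternator"],
  ["0,255,0", "http://example.org/engine#intake-manifold"],
  ["0,0,255", "http://example.org/engine#timing-belt"],
]);

function partAt(r, g, b) {
  return PART_BY_COLOR.get(`${r},${g},${b}`) ?? null; // null = background
}
```

A semantic track could then describe whichever part URI the lookup returns.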

Best regards,

P.S.: My interest in interactive video stems from my research on uses of interactive stories (e.g., digital gamebooks, interactive films, and serious games) in character education: http://www.phoster.com/discussions/interactive-storytelling-and-adaptive-instructional-systems-for-character-education/ . The gist of that (rough-draft) article is that interactive stories, perhaps interactive films, could serve as exercises and activities in character education courses.

P.P.S.: https://github.com/WICG/proposals/issues/33

From: Pierre-Antoine Champin <pierre-antoine@champin.net>
Sent: Monday, June 28, 2021 1:12 PM
To: Adam Sobieski <adamsobieski@hotmail.com>; public-texttracks@w3.org; semantic-web@w3.org
Subject: Re: WebVTT+JS and WebVTT+RDF


About WebVTT+RDF: you might be interested in a paper we published a couple of years ago at LDOW.


Note that in our case, we are not conveying RDF diffs or deltas, but full-fledged RDF payloads, encoded as JSON-LD.


On 25/06/2021 21:25, Adam Sobieski wrote:
Semantic Web Interest Group,
Web Media Text Tracks Community Group,

Hello. I would like to share some thoughts on WebVTT+JS and WebVTT+RDF.

Timed Lambda Expressions (WebVTT+JS)

The following syntax example shows a way of embedding JavaScript in WebVTT tracks. The example provides two lambda functions for a cue, one to be called when the cue is entered and the other to be called when the cue is exited.

05:10:00.000 --> 05:12:15.000
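A sketch of what such a cue's payload and its loader could look like, assuming the cue body is JSON carrying two stringified functions. The `onenter`/`onexit` property names and the payload shape are assumptions, not a specification; a player script could, however, attach the compiled handlers to the real `TextTrackCue` `onenter` and `onexit` events.

```javascript
// Hypothetical WebVTT+JS cue body: JSON with two stringified arrow functions.
const cueBody = JSON.stringify({
  onenter: "cue => `entered ${cue.id}`",
  onexit: "cue => `exited ${cue.id}`",
});

// Compile the strings into callable handlers. A player would attach them to
// the cue's onenter/onexit events; sandboxing concerns are out of scope here.
function compileHandlers(body) {
  const spec = JSON.parse(body);
  const compile = (src) => new Function(`return (${src});`)();
  return { onenter: compile(spec.onenter), onexit: compile(spec.onexit) };
}

const handlers = compileHandlers(cueBody);
```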

Dynamic Graphs (WebVTT+RDF)

An example scenario for dynamic metadata is that of live streams and recordings from digital microscopes. In this scenario, the dynamic metadata includes, but is not limited to, an instantaneous magnification scale and an instantaneous time scale. Such metadata about live streams and recordings from digital microscopes would be desirable to have, including for machine learning and computer vision algorithms.

“RDF diffs” [1], or “RDF deltas” [1], could be utilized with WebVTT for representing timed changes to semantic graphs, and such approaches could be useful for representing extensible, dynamic metadata about live streams and recordings from biomedical, scientific, and industrial sensors and devices.
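A sketch of the consuming side, assuming each metadata cue carries an {insert, delete} delta over triples serialized as plain strings. The payload shape is loosely modeled on the LDP PATCH proposals [1], not taken from them, and the triples are illustrative.

```javascript
// Apply a timed delta to a graph held as a set of triples (triples are kept
// as opaque strings here). The { insert, delete } shape is an assumption.
function applyDelta(graph, delta) {
  const next = new Set(graph);
  for (const t of delta.delete ?? []) next.delete(t);
  for (const t of delta.insert ?? []) next.add(t);
  return next;
}

// E.g., a cue fired at the moment the microscope's magnification changes:
let graph = new Set(['<#scope> <#magnification> "40"']);
graph = applyDelta(graph, {
  delete: ['<#scope> <#magnification> "40"'],
  insert: ['<#scope> <#magnification> "100"'],
});
```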

Best regards,
Adam Sobieski

[1] https://www.w3.org/2012/ldp/wiki/LDP_PATCH_Proposals

Received on Tuesday, 29 June 2021 05:01:02 UTC
