W3C home > Mailing lists > Public > public-webvmt-cg@w3.org > November 2019

Classifying Video Training Data For Machine Learning Using WebVMT [via Web Video Map Tracks (WebVMT) Community Group]

From: W3C Community Development Team <team-community-process@w3.org>
Date: Fri, 8 Nov 2019 11:11:10 +0000
To: public-webvmt-cg@w3.org
Message-ID: <5b84a71b4934f0dff652e2a499dfcc19@www.w3.org>
A machine learning algorithm needs to be trained to recognise cats and dogs from video footage. The learning process can be accelerated if the training footage is manually tagged to classify the timed sections of video in which cats and dogs appear. This can be done in a common metadata format with the proposed data sync feature in WebVMT, as the following excerpt shows:

NOTE Cat, top left, after 5 secs for 20 secs

00:00:05.000 --> 00:00:25.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "cat", "frame-zone": "top-left"}}}

NOTE Dog, mid right, after 10 secs for 30 secs

00:00:10.000 --> 00:00:40.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "dog", "frame-zone": "middle-right"}}}

This approach is applicable to any project using video as input to a machine learning algorithm, regardless of the video encoding format (e.g. MPEG, WebM or Ogg), and without modifying the video files themselves.
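For example, a training pipeline could read the sync cues from a WebVMT file and convert them into timed labels. The sketch below is a minimal, hypothetical helper (not part of WebVMT itself): the function names `to_seconds` and `extract_labels` are illustrative, and it assumes each sync cue's JSON payload sits on the single line after its timing line.

```python
import json

def to_seconds(ts):
    """Convert an hh:mm:ss.mmm WebVMT timestamp to seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def extract_labels(vmt_text, sync_type="org.ogc.geoai.catdog"):
    """Yield (start, end, data) tuples for sync cues of the given type.

    Hypothetical sketch: assumes each cue is a timing line
    ("start --> end") immediately followed by a one-line JSON payload.
    """
    lines = iter(vmt_text.splitlines())
    for line in lines:
        if " --> " in line:
            start, _, end = line.partition(" --> ")
            payload = next(lines, "")
            cue = json.loads(payload)
            sync = cue.get("sync", {})
            if sync.get("type") == sync_type:
                yield to_seconds(start), to_seconds(end), sync.get("data", {})

sample = """\
00:00:05.000 --> 00:00:25.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "cat", "frame-zone": "top-left"}}}

00:00:10.000 --> 00:00:40.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "dog", "frame-zone": "middle-right"}}}
"""

for start, end, data in extract_labels(sample):
    print(start, end, data["animal"])
```

Each tuple gives the labelled time window in seconds, which can then be mapped onto video frames for training.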

In addition, video metadata can be exposed in a web browser using the proposed DataCue API in HTML.



Received on Friday, 8 November 2019 11:11:12 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:25:18 UTC