- From: W3C Community Development Team <team-community-process@w3.org>
- Date: Fri, 8 Nov 2019 11:11:10 +0000
- To: public-webvmt-cg@w3.org
A machine learning algorithm needs to be trained to recognise cats and dogs from video footage. The learning process can be accelerated if the training footage is manually tagged to classify the timed sections of video in which cats and dogs appear. This can be done in a common metadata format with the proposed data sync feature in WebVMT, using the following excerpt:

NOTE Cat, top left, after 5 secs for 20 secs
00:00:05.000 --> 00:00:25.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "cat", "frame-zone": "top-left"}}}

NOTE Dog, mid right, after 10 secs for 30 secs
00:00:10.000 --> 00:00:40.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "dog", "frame-zone": "middle-right"}}}

This approach is applicable to any project using video as input to a machine learning algorithm, regardless of the video encoding format, e.g. MPEG, WebM, Ogg, etc., and without modifying the video files themselves. In addition, video metadata can be exposed in a web browser using the proposed DataCue API in HTML.

----------

This post sent on Web Video Map Tracks (WebVMT) Community Group

'Classifying Video Training Data For Machine Learning Using WebVMT'
https://www.w3.org/community/webvmt-cg/2019/11/08/classifying-video-training-data-for-machine-learning-using-webvmt/

Learn more about the Web Video Map Tracks (WebVMT) Community Group:
https://www.w3.org/community/webvmt-cg
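To illustrate how such sync cues could feed a training pipeline, here is a minimal Python sketch, not part of the original post, that parses cue text in the layout shown above into (start, end, data) intervals. The regular expression and the `parse_labels` helper are assumptions for illustration only; a real WebVMT parser would handle the full format.

```python
import json
import re

# Hypothetical excerpt in the cue layout shown in the post.
VMT_EXCERPT = """\
NOTE Cat, top left, after 5 secs for 20 secs
00:00:05.000 --> 00:00:25.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "cat", "frame-zone": "top-left"}}}

NOTE Dog, mid right, after 10 secs for 30 secs
00:00:10.000 --> 00:00:40.000
{"sync": {"type": "org.ogc.geoai.catdog", "data": {"animal": "dog", "frame-zone": "middle-right"}}}
"""

# Matches a cue timing line: HH:MM:SS.mmm --> HH:MM:SS.mmm
TIMING = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def to_seconds(h, m, s, ms):
    """Convert split timestamp fields to seconds as a float."""
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def parse_labels(text):
    """Return (start_s, end_s, data) tuples from sync cues in `text`."""
    lines = text.splitlines()
    labels = []
    for i, line in enumerate(lines):
        match = TIMING.match(line)
        if match and i + 1 < len(lines):
            start = to_seconds(*match.groups()[:4])
            end = to_seconds(*match.groups()[4:])
            # The JSON payload follows the timing line.
            payload = json.loads(lines[i + 1])
            labels.append((start, end, payload["sync"]["data"]))
    return labels

labels = parse_labels(VMT_EXCERPT)
# First interval: 5.0 s to 25.0 s, labelled {"animal": "cat", "frame-zone": "top-left"}
```

Each tuple gives a labelled time window that a training script can use to slice the corresponding video into classified segments, without touching the video file itself.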
Received on Friday, 8 November 2019 11:11:12 UTC