Re: Video filter

Right, developing something more akin to the Web Audio API for video would
be a nice new piece of work. Pretty substantial, too, if you wanted to
build in all the feature extraction approaches typical in video analysis!

Best Regards,
Silvia.
On 15 Apr 2015 20:21, "Rob Manson" <robman@mob-labs.com> wrote:

> You can process these video frames using the GPU via WebGL as well (e.g.
> in fragment shaders and soon compute shaders). We're using this for all
> sorts of frame analysis and computer vision. WebCL is another option but
> not widely adopted yet...but a very promising area for this.
>
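The GPU path described above amounts to uploading each decoded video frame
as a WebGL texture and running a fragment shader over it. A minimal sketch
(browser-only; the grayscale shader and the function names are illustrative,
not from any particular library):

```javascript
// Fragment shader: converts the sampled frame to grayscale using
// BT.601 luminance weights. Illustrative only.
const FRAG_SRC = `
  precision mediump float;
  uniform sampler2D uFrame;
  varying vec2 vTexCoord;
  void main() {
    vec4 c = texture2D(uFrame, vTexCoord);
    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(y), c.a);
  }
`;

// Upload the current video frame into a WebGL texture (browser-only).
// texImage2D accepts an HTMLVideoElement directly as the pixel source,
// so no intermediate 2D-canvas readback is needed.
function uploadFrame(gl, texture, video) {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
}
```

After the upload, a draw call over a full-screen quad runs the shader once
per pixel, which is what makes per-frame analysis feasible on the GPU.
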
> Yet there is currently a hole in the Web Platform related to VideoStream
> Post Processing.
>
> We're working through some of these issues in the work for the Depth
> Extension as it directly relates to using calibrated streams...but Video
> Post Processing really does need some focused effort and love on its own.
>
> One big issue here is that we're forced to use rAF or setInterval to
> collect the channel/pixel data.
>
> Ideally we'd be able to listen for an event that is fired when new frame
> data is decoded and available. Initially everyone thinks you could use the
> "timeupdate" event - but the spec says it should only fire about once
> every 250ms 8(
>
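The rAF workaround mentioned above typically polls `video.currentTime` on
every animation frame and only processes when the media clock has advanced.
A rough sketch (the loop is browser-only; function names are illustrative):

```javascript
// Pure helper: true when the media clock has advanced past the last
// sampled time, i.e. a new frame is probably available to read.
function isNewFrame(lastTime, currentTime) {
  return currentTime !== lastTime;
}

// Poll with requestAnimationFrame (browser-only). Note rAF runs at the
// display refresh rate, not the video frame rate, so frames can still be
// missed or double-sampled - which is exactly the hole in the platform.
function pollFrames(video, onFrame) {
  let lastTime = -1;
  function tick() {
    if (isNewFrame(lastTime, video.currentTime)) {
      lastTime = video.currentTime;
      onFrame(video);
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```
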
> Ideally we'd have a WebVideo API that's roughly analogous to the WebAudio
> API - one that pre-bakes a lot of common Video Post Processing (and
> Computer Vision) functions into a web-based API. There's also now the
> OpenVX[1] standard that this work could build on top of.
>
> Bits and pieces of this activity are occurring in different areas across
> different groups - but it would be great to pull them together into a
> concerted effort. Or at least a concerted discussion.
>
> roBman
>
> [1] https://www.khronos.org/openvx/
>
> On 15/04/15 6:43 PM, Dominique Hazael-Massieux wrote:
>
>> On 13/04/2015 22:44, Silvia Pfeiffer wrote:
>>
>>> Video filters are being written using the canvas api these days. You
>>> capture frames into the canvas and then can do whatever you like with
>>> the pixels. This can also be done with live video. Is that not
>>> sufficient? What is the use case?
>>>
>>
>> As far as I know, using canvas for video manipulation is OK when the
>> manipulation in question is achievable on the CPU, but a lot of
>> manipulation can only reasonably be achieved with the help of the GPU.
>>
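The canvas path referred to above, and its CPU cost, look roughly like
this: draw the current frame into a 2D canvas, read the pixels back, and
transform them in JavaScript. A minimal sketch (the invert filter is just
an example; everything except the pure pixel loop is browser-only):

```javascript
// Pure pixel transform: invert RGB in place, leaving alpha untouched.
// `data` is a Uint8ClampedArray in RGBA order, as from ImageData.data.
function invertPixels(data) {
  for (let i = 0; i < data.length; i += 4) {
    data[i] = 255 - data[i];         // R
    data[i + 1] = 255 - data[i + 1]; // G
    data[i + 2] = 255 - data[i + 2]; // B
  }
  return data;
}

// Per-frame CPU processing via a 2D canvas (browser-only). Every byte of
// every frame crosses the GPU->CPU->GPU boundary, which is why this
// approach struggles for heavier filters.
function processFrame(video, canvas) {
  const ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  invertPixels(frame.data);
  ctx.putImageData(frame, 0, 0);
}
```
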
>> Since graphics cards provide a lot of hardware-baked video manipulation
>> features, a Web API that made these usable on video streams would be
>> useful.
>>
>> WebCL is a contender in this space, but I don't think it has gained much
>> traction so far: https://www.khronos.org/webcl/
>>
>> Dom
>>
>

Received on Wednesday, 15 April 2015 11:44:25 UTC