Re: Video filter

Hi CTai,

FoxEye looks like an awesome project and provides an excellent proof of 
concept for general Computer Vision on the Web Platform!

It would be great if you could contribute to the Use Cases page on the 
W3C wiki I've set up; I've included a reference to FoxEye as an 
early/leading implementation in this area.  I'm assuming that, because 
you responded to this list, you're interested in your project 
contributing to or providing the groundwork for the development of a 
W3C API/spec?

You say in your project's conclusion:

   " This project is not a JavaScript API version for OpenCV. It is a 
way to let web developer do image processing and computer vision works 
easier."

It would be great to find a middle ground here for an open web API - so 
we don't start in a completely app/domain-specific way (extensible-web 
style). Then people could quickly and easily prototype specific apps on 
top of this while we explore what should be standardised.

I'd love to hear your thoughts.

roBman



On 17/04/15 5:57 PM, Chia-Hung Tai wrote:
> Hi, there,
> My colleague and I have recently been working on a project called 
> FoxEye [1]. You can find more details in links [1] and [2]. Since we 
> are still actively working on the prototype and implementation, we 
> may modify the content frequently.
> If you are interested in the progress in Firefox part, feel free to 
> follow the bug[3] in bugzilla.
> Basically, the FoxEye project consists of the following blocks:
> 1. VideoWorker, which is associated with MediaStreamTrack [4]; it is 
> an extension to Media Capture and Streams.
> 2. ImageBitmap, plus an extension for mapping image data into an 
> ArrayBuffer [5].
> 3. WebImage: a Web API for hardware-accelerated features.
> 4. OpenCV.js: an asm.js version of OpenCV [6].
>
> So, how does FoxEye work?
> Take two basic cases as examples:
> 1. Input as a media stream:
> The developer can write a VideoWorker and hook it into a 
> MediaStreamTrack (MST). The VideoWorker will receive an event for 
> every frame from that MST. If the developer wants to "process" each 
> frame and show the result frame by frame, he/she should treat the 
> worker as a processor. Otherwise, he/she can treat the worker as a 
> monitor that analyzes the image data frame by frame. The event takes 
> an ImageBitmap as the input/output handle, which leaves some 
> optimization room for the WebGL case.
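>
> To make case 1 concrete, here is a rough sketch (the API names are 
> tentative and may change as the prototype evolves):
>
> ```javascript
> // main.js: hook a VideoWorker into a video MediaStreamTrack
> const stream = await navigator.mediaDevices.getUserMedia({ video: true });
> const track = stream.getVideoTracks()[0];
> const worker = new VideoWorker('processor.js');
> track.addWorker(worker); // the worker now gets an event per frame
>
> // processor.js: used as a "processor" -- the output replaces the frame
> onvideoprocess = (event) => {
>   // event.inputImageBitmap is the current frame; setting
>   // event.outputImageBitmap feeds the processed frame back
>   event.outputImageBitmap = toGrayscale(event.inputImageBitmap);
> };
> ```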
>
> 2. Input as an image:
> The developer can use ImageBitmap to map the raw buffer into an 
> ArrayBuffer, and can then manipulate the pixels in JavaScript. The 
> developer can also pass the ImageBitmap to OpenCV.js. This opens up a 
> lot of potential for the Web.
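>
> And a sketch of case 2 (the buffer-mapping extension [5] is still 
> being designed, so the method name below is tentative):
>
> ```javascript
> // Decode an image into an ImageBitmap
> const response = await fetch('photo.png');
> const bitmap = await createImageBitmap(await response.blob());
>
> // Tentative extension: copy the pixel data into an ArrayBuffer
> const buffer = new ArrayBuffer(bitmap.width * bitmap.height * 4);
> await bitmap.mapDataInto('RGBA32', buffer, 0);
>
> // Manipulate the pixels directly in JavaScript...
> const pixels = new Uint8ClampedArray(buffer);
> invertColors(pixels); // e.g. some per-pixel operation
>
> // ...or pass the ImageBitmap to OpenCV.js [6]
> const edges = cv.Canny(bitmap, 50, 100); // assuming an OpenCV-style binding
> ```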
>
> Current status:
> We are working on the tasks below to evaluate the proposal:
> 1. Implement VideoWorker
> 2. Implement ImageBitmap and its extension
> 3. Create a JavaScript API for OpenCV.js
>
> Next phase:
> 1. WebGL integration
> 2. WebImage investigation
> 3. OpenCV.js optimization
>
> Right now we are still at a preliminary stage. We would love to hear 
> any suggestions and comments from you. Feel free to contact me if you 
> have any questions.
>
> [1]: https://wiki.mozilla.org/Project_FoxEye
> [2]: https://www.youtube.com/watch?v=TgQWEWiGaO8
> [3]: https://bugzilla.mozilla.org/show_bug.cgi?id=1100203
> [4]: https://bugzilla.mozilla.org/show_bug.cgi?id=1108950
> [5]: https://bugzilla.mozilla.org/show_bug.cgi?id=1141979
> [6]: https://github.com/CJKu/opencv
>
> Best Regards,
> CTai
>
> 2015-04-16 17:48 GMT+08:00 Harald Alvestrand <harald@alvestrand.no>:
>
>     On 04/16/2015 01:08 AM, Rob Manson wrote:
>
>         There's definitely common and useful functions that could be
>         optimised for video post processing.
>
>         I don't think we need to boil the ocean and birth the full
>         equivalent of Web Audio 8)
>
>         Yet some well chosen extensions could deliver a lot of
>         benefits with minimal effort.
>
>         And it does seem to fit well with the WG's scope.
>
>         - API functions for encoding and other [processing] of those
>         media streams
>         - API functions for decoding and [processing] (...) of those
>         streams at the incoming end
>
>
>     Not saying that we have the perfect solution, but we do have a set
>     of solutions already specified.....
>
>     The encoding is already spec'ed out in the recording API.
>     Decoding an encoded stream can be done by the Media Source API
>     (our test implementation of recording verified correctness by
>     feeding the resulting chunks to the Media Source API).
>
>     Getting from decoding back to a MediaStream can be done by using
>     the "Media Capture from DOM elements" specification.
>
>     So I'd say that we have all the pieces to make a loop, and would
>     encourage anyone who finds a place where the loop is broken to
>     file a bug on the relevant spec.
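>
>     To sketch that loop with the pieces above (a browser-only
>     sketch; error handling omitted):
>
>     ```javascript
>     // 1. Encode: record the input MediaStream (Media Recording API)
>     const recorder = new MediaRecorder(inputStream);
>
>     // 2. Decode: feed the recorded chunks into a <video> element
>     //    through the Media Source API
>     const mediaSource = new MediaSource();
>     const video = document.createElement('video');
>     video.src = URL.createObjectURL(mediaSource);
>     mediaSource.addEventListener('sourceopen', () => {
>       const sb = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
>       recorder.ondataavailable = (e) => {
>         const reader = new FileReader();
>         reader.onload = () => sb.appendBuffer(reader.result);
>         reader.readAsArrayBuffer(e.data);
>       };
>       recorder.start(100); // emit a chunk every 100 ms
>     });
>     video.play();
>
>     // 3. Back to a MediaStream via "Media Capture from DOM elements"
>     const outputStream = video.captureStream();
>     ```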
>
>
>         I'd like to put together a Use Cases document and would
>         welcome any input from other people interested.
>
>
>     I'd be happy to see such a document pulled together.
>     Note that we have a (rather old) "MediaStream Capture Scenarios"
>     document here:
>
>     http://w3c.github.io/mediacapture-scenarios/scenarios.html
>
>     This can be taken as input, background or however you want to
>     treat it.
>
>
>
>         roBman
>

Received on Monday, 20 April 2015 01:56:57 UTC