Re: The Image Stream Processing pipeline/s

On 09/05/2013 05:51 PM, Rob Manson wrote:
> So is the recommendation that we should use the Media Recording Blob 
> based API?
>
>   stream -> media recorder -> blob -> file reader -> array buffer
>
> As compared to the current de-facto pipeline?
>
>   stream -> <video> -> <canvas> -> image data -> array buffer
>
> And Image Capture?  Giri has just published the latest version of that.

Actually, I suspect that for local processing, the "current de-facto
pipeline" will work better.
The reason is that the steps in the "old" pipeline are (sketched in
code after the list):

  - video capture
  - media transformation (probably YUV or similar "lightweight" transforms)
  - rescaling to the canvas
  - image data fetching (getImageData hands back RGBA, one more lightweight conversion)
  - whatever analysis your JS is doing
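
Spelled out as code, that looks roughly like this (a minimal sketch,
not a recommendation; vendor prefixes are omitted, and the element
lookups and the 100 ms poll are my own assumptions, not anything this
thread specifies):

  navigator.getUserMedia({ video: true }, function (stream) {
    var video = document.querySelector('video');
    video.src = URL.createObjectURL(stream);  // attach the stream to <video>
    video.play();
    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');
    setInterval(function () {
      // rescaling to the canvas
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      // image data fetching: RGBA pixels backed by an ArrayBuffer
      var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
      var buffer = pixels.data.buffer;
      // ... whatever analysis your JS is doing ...
    }, 100);
  }, function (err) { console.error(err); });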

In the "new" pipeline, the steps are (again sketched below):

  - video capture
  - media transformation
  - rescaling to the selected resolution
  - encoding, probably using a compressing codec like VP8
  - putting the data into a wrapper format like Matroska
  - putting the data into the buffer
  - image data fetching (the data is now encoded with a codec)
  - unwrapping and discarding the wrapper data
  - decoding of the data to a format you can use
  - whatever analysis your JS is doing
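
As code, again as a rough sketch (the 100 ms timeslice is my
assumption):

  var recorder = new MediaRecorder(stream);
  recorder.ondataavailable = function (event) {
    var blob = event.data;          // codec-encoded, wrapped in a container
    var reader = new FileReader();
    reader.onload = function () {
      var buffer = reader.result;   // ArrayBuffer of *encoded* bytes
      // still to do: unwrap the container, decode the frames, then analyse
    };
    reader.readAsArrayBuffer(blob);
  };
  recorder.start(100);              // ask for a blob roughly every 100 ms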

That's likely to be a heavier burden.

The chief use of the recorder is when you want to record all the
frames. Some analyses (speech recognition) need that; other analyses
(face detection) can get by with the occasional frame - but they are
heavily dependent on the quality of that frame. I think the Image
Capture API is tuned towards that kind of scenario.
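
For that single-frame case, the shape would be roughly this (a sketch
only - the constructor and method names are my assumptions, to be
checked against Giri's latest draft):

  var track = stream.getVideoTracks()[0];
  var capture = new ImageCapture(track);
  capture.grabFrame().then(function (frame) {
    // one uncompressed frame on demand, with no <video>/<canvas> detour
    // and no codec round-trip - then run face detection etc. on it
  });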

But I speak from a limited viewpoint - without knowing what comes after 
the array buffer, I think it's unwise to give advice on what should be 
in front of it.

       Harald

Received on Thursday, 5 September 2013 16:11:35 UTC