
Re: Add "MediaStream with worker" for video processing into the new working items of WebRTC WG

From: Chia-Hung Tai <ctai@mozilla.com>
Date: Wed, 29 Jul 2015 09:49:30 +0800
Message-ID: <CACBucrHdny7xD2xP1da0TrXdhGR6-CfHDjwCT5_hcmxQz2XuLQ@mail.gmail.com>
To: Mathieu Hofman <Mathieu.Hofman@citrix.com>
Cc: "robert@ocallahan.org" <robert@ocallahan.org>, "public-media-capture@w3.org" <public-media-capture@w3.org>, "public-webrtc@w3.org" <public-webrtc@w3.org>
Hi, Mathieu,
As for backpressure, you can use an early-return strategy to achieve what
you want; see [1] for an example. In [2], we use a worker pool to reduce
the frame-drop rate. We deliberately don't specify much of a frame-drop
mechanism in the specification. The logic behind this is simple: there is
no way to design a frame-drop mechanism that fulfills every kind of web
developer need. So the implementation in the User Agent can stay simple,
and web developers can manage flow control on their own, as in [1] and
[2]. So I think the case you mentioned could be resolved by skipping
events until you get the ack from the remote side.  Thanks!

[1]:
https://github.com/kakukogou/foxeye-demo/blob/master/Sample_Monitor/control_worker.js
[2]:
https://github.com/kakukogou/foxeye-demo/blob/master/Sample_Monitor_Multiple_ProcessorWorker/control_worker.js
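To make the early-return idea concrete, here is a minimal sketch (the
function and field names are assumptions for illustration, not part of the
proposal or any spec): the worker drops incoming frames while the previous
frame is still unacknowledged, which covers the WebSocket-ack case Mathieu
described.

```javascript
// Early-return flow control in a worker (a sketch, not spec text):
// skip frames while the previous one is still in flight, and resume
// only after the remote side acks it.
let awaitingAck = false;

// Stand-in for sending a frame; in a real worker this would be
// ws.send(frameData), with the ack arriving in ws.onmessage.
function sendFrame(frame, transport) {
  transport.push(frame);
}

// Hypothetical per-frame handler, modeled on the proposal's
// worker-side video-processing event. Returns true if the frame
// was accepted, false if it was dropped by the early return.
function onVideoProcess(frame, transport) {
  if (awaitingAck) return false;  // early return: drop this frame
  awaitingAck = true;
  sendFrame(frame, transport);
  return true;
}

// Called when the remote side acknowledges the last frame.
function onRemoteAck() {
  awaitingAck = false;
}
```

Frames that arrive before the ack are simply skipped, so the User Agent
never needs to queue or manage dropped frames itself.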

BR,
CTai

2015-07-29 2:39 GMT+08:00 Mathieu Hofman <Mathieu.Hofman@citrix.com>:

>  There are some use cases that require being able to send a
> MediaStream(Track) to other contexts, such as multi-window interfaces.
> Chia-Hung's proposal doesn't solve that use case.
> I understand the implementation might not be trivial in User Agents, but
> is there actually a fundamental reason this cannot be done? At the end of
> the day, MediaStream(Track) objects are simply light wrappers that are
> linked to an actual source. It shouldn't matter whether that source is
> captured / received by the user agent in a process attached to the current
> context, or another process. From what I gather, there has to be some
> existing cross process sharing done already, at least when 2 independent
> contexts in separate processes both need access to a device source,
> especially webcams which are often exclusive use devices.
>
>  Regarding backpressure, the only case this would work with Chia-Hung's
> proposal is if the processing of the frame is done synchronously in the
> JavaScript event handler. While such synchronous handling has fewer side
> effects in a worker than in the main context, it's still not very Web
> friendly, and might not be possible in some applications. Imagine the case
> where you want to start processing the next frame only once you know the
> current frame has been fully sent over a WebSocket and acked by the remote
> side. Probably not a fully realistic scenario, but a possible one
> nonetheless.
>
>  Mathieu
>
>  ------------------------------
> *From:* rocallahan@gmail.com [rocallahan@gmail.com] on behalf of Robert
> O'Callahan [robert@ocallahan.org]
> *Sent:* Tuesday, July 28, 2015 4:05 AM
> *To:* Mathieu Hofman
> *Cc:* public-media-capture@w3.org; public-webrtc@w3.org; Chia-Hung Tai
> *Subject:* Re: Add "MediaStream with worker" for video processing into
> the new working items of WebRTC WG
>
>   [Clearing most of the CC list]
>
>  I feel very strongly that we should not support transferring
> MediaStreams or MediaStreamTracks, especially across Workers. That would
> add great complexity to the Gecko MediaStream and WebAudio implementations.
> Martin already mentioned the multiprocess issue. Another issue is that
> proper memory management for WebAudio nodes and MediaStream(Tracks), e.g.
> to garbage-collect nodes, media elements or streams that are no longer
> relevant, is already quite difficult, and extending that across thread
> boundaries would be a nightmare. I much prefer alternatives like
> Chia-Hung's current proposal, which don't require that.
>
>  Note: as Martin also mentioned, supporting WebAudio or even MediaStreams
> in a worker is not itself much of a problem, if we want to do that. The key
> is that all connected nodes and streams should be associated with the same
> Worker or window.
>
>  As for backpressure, as I understand it, Chia-Hung's proposal does
> support a form of backpressure. If a video processing or monitoring
> callback is still processing frame N when frame N+1 becomes available,
> frame N+1 is queued until the callback completes, but the queue only has
> length 1, so if frame N+2 arrives while N is still being processed, frame
> N+1 is effectively dropped.
>
>  Rob
>  --
>
Received on Wednesday, 29 July 2015 01:49:58 UTC
