
Re: What happened to onaddstream?

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Tue, 23 Jun 2015 20:05:28 +1000
Message-ID: <CAHp8n2=75WQ4+VkN+PwR81DAOuQC-yHmeePYDKs9yeDv4nVv7g@mail.gmail.com>
To: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
Cc: Harald Alvestrand <harald@alvestrand.no>, Martin Thomson <martin.thomson@gmail.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>, Peter Thatcher <pthatcher@google.com>
On Tue, Jun 23, 2015 at 7:48 PM, Stefan Håkansson LK
<stefan.lk.hakansson@ericsson.com> wrote:
> On 22/06/15 21:38, Harald Alvestrand wrote:
>> On 06/22/2015 06:20 PM, Martin Thomson wrote:
>>> It's gone from the spec.  Everyone implements it.  It would seem to be
>>> necessary to document it, even if it is only under a "legacy" heading.
>>>
>> It went away when addStream() went away.
>>
>> Instead we have a "track" event that carries an RTCTrackEventInit,
>> containing a receiver, a track and an array of streams (section 5.4).
>>
>> It might be nice to have a separate "stream" event that fires only when
>> a new stream has been created; I don't think it's trivial to shim the
>> "stream" event based on only the information in "track" events.
>
> I agree that it is not trivial for the more advanced use cases. But for
> the simple ones that use only one MediaStream (with one audio and one
> video track) throughout the session it seems trivial. My question would
> be whether there are many apps advanced enough to motivate this
> addition when we're moving to a more track-based approach.
>

We have several applications that run 2 or 3 video tracks over one
connection. In particular, we use it to combine a face camera, a
document camera and a room-overview camera in one stream.

Here's me hoping we're not the only ones with such needs!
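Incidentally, for the simple single-stream case, a shim along these lines might do the trick. This is an untested sketch and the helper name is my own invention; it simply fires a legacy-style callback the first time each stream id shows up in a "track" event:

```javascript
// Sketch of a legacy "addstream"-style shim built on the standard
// "track" event (RTCTrackEvent carries a .streams array). It only
// handles the simple case: fire the callback once per new stream id.
function shimOnAddStream(pc, onAddStream) {
  const seenStreamIds = new Set();
  pc.addEventListener('track', (event) => {
    for (const stream of event.streams) {
      if (!seenStreamIds.has(stream.id)) {
        seenStreamIds.add(stream.id);
        onAddStream(stream);
      }
    }
  });
}
```

As Harald notes, this falls short for the advanced cases (e.g. a stream whose track set changes mid-session), since "track" events alone don't tell you when a stream as a whole goes away.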

Cheers,
Silvia.
Received on Tuesday, 23 June 2015 10:06:16 UTC
