W3C home > Mailing lists > Public > whatwg@whatwg.org > March 2011

[whatwg] Peer-to-peer communication, video conferencing, and related topics (2)

From: Robert O'Callahan <robert@ocallahan.org>
Date: Tue, 29 Mar 2011 18:17:47 +1300
Message-ID: <AANLkTinvqizeUNjUd_C_eW34NhSCX3Nkcf1QQqYe1bB+@mail.gmail.com>
Ian Hickson wrote:

> I agree that (in the long term) we should support stream filters on
> streams, but I'm not sure I understand <video>'s role in this. Wouldn't it
> be more efficient to have something that takes a Stream on one side and
> outputs a Stream on the other, possibly running some native code or JS in
> the middle?

We could.
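To make the shape of that concrete, here is a rough sketch of the kind of Stream-in/Stream-out filter object Ian describes, with a JS callback "in the middle". Every name here is invented for illustration; nothing like this is specced:

```typescript
// Hypothetical Stream-filter sketch: consumes one stream, produces another,
// with a JS callback doing the processing in between. All names invented.

type Samples = Float32Array;

interface Stream {
  read(): Samples | null; // pull the next block, or null at end of stream
}

// A trivial source over a fixed buffer, standing in for a real media stream.
class BufferStream implements Stream {
  private done = false;
  constructor(private data: Samples) {}
  read(): Samples | null {
    if (this.done) return null;
    this.done = true;
    return this.data;
  }
}

// The filter: Stream on one side, Stream on the other, JS in the middle.
class ProcessedStream implements Stream {
  constructor(
    private input: Stream,
    private process: (block: Samples) => Samples,
  ) {}
  read(): Samples | null {
    const block = this.input.read();
    return block === null ? null : this.process(block);
  }
}

// Usage: halve the gain of every sample.
const source = new BufferStream(Float32Array.from([0.2, 0.4, 0.8]));
const halved = new ProcessedStream(source, (b) => b.map((s) => s * 0.5));
```

The point of the shape is that no media element is involved at all; the filter only knows about streams.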

I'm trying to figure out how this is going to fit in with audio APIs. Chris
Rogers from Google is proposing a graph-based audio API to the W3C Audio
incubator group which would overlap considerably with a Stream processing
API like you're suggesting (although in his proposal processing nodes, not
streams, are first-class).
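The contrast is easiest to see in code. In a graph-based design of the kind Chris Rogers is proposing, you create and connect the processing nodes themselves, and the streams between them are implicit. This is only a sketch of that style, not his actual proposal; all names are invented:

```typescript
// Node-graph sketch: processing nodes are first-class and are wired
// together with connect(); streams between them are implicit.
// Illustrative only -- not the actual proposal's API.

type Samples = Float32Array;

abstract class NodeSketch {
  private downstream: NodeSketch[] = [];
  connect(node: NodeSketch): NodeSketch {
    this.downstream.push(node);
    return node; // allow chaining: a.connect(b).connect(c)
  }
  // Push a processed block to every connected node.
  protected emit(block: Samples): void {
    for (const node of this.downstream) node.receive(block);
  }
  abstract receive(block: Samples): void;
}

class SourceSketch extends NodeSketch {
  receive(block: Samples): void { this.emit(block); }
  start(block: Samples): void { this.emit(block); }
}

class GainSketch extends NodeSketch {
  constructor(private gain: number) { super(); }
  receive(block: Samples): void {
    this.emit(block.map((s) => s * this.gain));
  }
}

class DestinationSketch extends NodeSketch {
  received: Samples[] = [];
  receive(block: Samples): void { this.received.push(block); }
}

// Usage: source -> gain -> destination, built as a graph of nodes.
const src = new SourceSketch();
const dest = new DestinationSketch();
src.connect(new GainSketch(0.5)).connect(dest);
src.start(Float32Array.from([1, 0.5]));
```

In a stream-first design the same pipeline would instead be expressed as operations on stream objects, with the nodes anonymous.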

A fundamental problem here is that HTML media elements have the
functionality of both sources and sinks. You want to see <video> and <audio>
only as sinks which accept streams. But in that case, if we create an audio
processing API using Streams, we'll need a way to download stream data for
processing that doesn't use <audio> and <video>, which means we'll need to
replicate <source> elements, the type attribute, networkState, readyState,
possibly the 'loop' attribute... should we introduce a new object or element
that provides those APIs? How much can be shared with <video> and <audio>?
Should we be trying to share? (In Chris Rogers' proposal, <audio> elements
are used as sources, not sinks.)

Received on Monday, 28 March 2011 22:17:47 UTC
