Re: TPAC F2F and Spec Proposals (was: Attendance for the AudioWG F2F meeting on Monday, 31 October)

From: Robert O'Callahan <robert@ocallahan.org>
Date: Fri, 14 Oct 2011 13:47:56 +1300
Message-ID: <CAOp6jLYyhv1HRqWD2sdQTBR8ZAcFSv=un+70x0wDgMRKSga07Q@mail.gmail.com>
To: Doug Schepers <schepers@w3.org>
Cc: Joe Berkovitz <joe@noteflight.com>, tmichel@w3.org, Philippe Le Hegaret <plh@w3.org>, Alistair MacDonald <al@pwn.io>, public-audio@w3.org, mgregan@mozilla.com
On Wed, Oct 12, 2011 at 5:51 AM, Doug Schepers <schepers@w3.org> wrote:

> I think it's crucial to have someone from Mozilla there as well, since that
> is where the chief disagreement with Google's approach lies.  I understand
> that neither Rob nor Matthew can attend... is there someone else from
> Mozilla who could represent this viewpoint at TPAC?
>

There isn't really anyone else who's involved enough.

I don't think this can be settled at a small F2F anyway.

> In the meantime, it would be good to have more discussion on this list about
> the similarities, differences, and relative merits of the 2 proposals on the
> table:
>

Yes!

> I'm especially interested in understanding the implementation status of the
> various proposals, and in hearing how well they work with the Web RTC spec.


I've implemented the core of my proposal. I'm still working on actually
playing generated audio, but I've got enough tests working to be confident
that the infrastructure hangs together. I've updated the spec proposal with
some changes based on the implementation issues I've encountered.

Our WebRTC implementation is not yet at the point where we can integrate it
with my work, but we'll be working on that soon-ish.

> Also, are these specs incompatible, or are they just different facets of the
> general approach, and can they be integrated together in some way?
>

I believe the MediaStreams approach is more general. The
ProcessedMediaStreams proposal has explicit support for synchronization,
handling streams that block, streams with different sample rates, and
streams with multiple tracks and different kinds of tracks (including
video). It can be cleanly extended to handle other kinds of real-time media
tracks that need synchronization (e.g. Kinect-style depth buffers). It
should integrate seamlessly with other MediaStream producers and consumers,
without bridging.

The big thing it doesn't have is a library of native effects like the Web
Audio API has, although there is infrastructure for specifying named native
effects and attaching effect parameters to streams. I would love to combine
my proposal with the Web Audio effects.

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]
Received on Friday, 14 October 2011 00:48:25 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 14 October 2011 00:48:25 GMT