- From: Chris Rogers <crogers@google.com>
- Date: Fri, 11 Feb 2011 13:33:07 -0800
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Cc: Doug Schepers <schepers@w3.org>, public-xg-audio@w3.org, Philippe Le Hegaret <plh@w3.org>, Michael Cooper <cooper@w3.org>
- Message-ID: <AANLkTin0HA+xx5hbgPQrZqEf4Gsr+Fk2wvV3njBE60h0@mail.gmail.com>
Hi Silvia,

I think it's pretty clear in the text that the idea is to get access to the audio stream:

"It will also add programmatic access to the PCM audio stream for low-level manipulation directly in script"

And we can clarify that it includes access to the <audio> PCM stream. Calling it "reading" and "writing" is just not the type of terminology I've seen very much in common use in academic articles or by musicians. For example, when talking about audio plugins such as VST and Audio Units, typically the words "processing" and "synthesis" are used. That said, it's not really that big of a deal.

I also included a couple of other changes which I hope people will consider, concerning some applications for the API. (I'm sorry, it looks like my edits didn't actually show up in red.)

Chris

On Fri, Feb 11, 2011 at 1:08 PM, Silvia Pfeiffer <silviapfeiffer1@gmail.com> wrote:
> On Sat, Feb 12, 2011 at 7:42 AM, Chris Rogers <crogers@google.com> wrote:
> > Hi Doug,
> > I'm sorry for taking so long to reply, but I have a few comments about the draft you've created. First of all, I want to thank you for the time and effort you've put into this so far. I'm really excited to see this moving forward! I think what you've written is really good, but I wanted to offer some suggestions on how I think the text could be improved. I've included my proposed changes in red below. Most of the changes are really just relating to terminology which I believe is in more common use. For example:
> > * using the words processing and synthesis instead of reading and writing
>
> Reading and writing audio data (or streams) has to do with getting access to the data encapsulated in an <audio> element, both for extraction and for creation. Processing is about taking such extracted data and changing it, and synthesis is about creating such audio data.
>
> These are all very different goals for a charter and we need to be aware of this. I don't think we should remove reading and writing. I certainly want to see all four goals achieved as an outcome of this working group.
>
> What I am certainly missing in this charter is the mention of the existing <audio> element in HTML5 and that the work that this group performs has to integrate with this element.
>
> Best Regards,
> Silvia.
>
> > * using PCM audio stream instead of raw audio data
> > I hope you will consider my suggestions and look forward to seeing these audio features move towards standardization.
> > Cheers,
> > Chris
> >
> > ------------------------------------------------------------------
> >
> > Audio Working Group Charter
> >
> > DRAFT: for review only.
> >
> > The mission of the Audio Working Group, part of the Rich Web Client Activity, is to define a client-side script API adding more advanced audio capabilities than are currently offered by <audio>. The API will support the features required by advanced interactive applications, including the ability to process and synthesize audio streams directly in script.
> >
> > The HTML5 specification introduces the <audio> and <video> media elements, including an API to play back prerecorded audio and video files and to get limited information about the media, such as duration. The Audio Working Group will build upon and expand that basic functionality.
> >
> > Scope
> >
> > The audio API will provide methods to create sounds and perform client-side audio processing and synthesis with minimal latency. It will also add programmatic access to the PCM audio stream for low-level manipulation directly in script. This API can be used for interactive applications, games, 3D environments, musical applications, educational applications, and for the purposes of accessibility. It includes the ability to synchronize, visualize, or enhance sound information when used in conjunction with graphics APIs. Sound synthesis can be used to enhance user interfaces or produce music. The addition of advanced audio capabilities to user agents will present new options to Web developers and designers, and has many accessibility opportunities and challenges that this working group will keep in mind.
> >
> > Two existing experimental audio APIs are currently being developed in different browsers. The Mozilla Firefox browser provides simple read-write access to the audio stream, relying on script to perform real-time audio algorithms; the WebKit implementation in Apple Safari and Google Chrome provides an additional higher-level graph-based API, which performs some common functions in the native browser implementation. This charter does not dictate which approach the Audio Working Group will follow.
> >
> > This working group is a result of deliberation by the W3C Audio Incubator Group which preceded it, and will address the use cases and requirements developed by that incubator group, which are currently under final development.
> >
> > The scope of this working group includes:
> >
> > * Developing a client-side script API for processing and synthesizing PCM audio streams directly in script.
> > * Access to audio devices, such as microphones or other audio inputs, and multi-channel speakers or other audio outputs.
> >
> > This working group will take into account common workflows for sound creators, including considerations for common audio formats. This group will also liaise with other groups for direct connection to audio inputs, such as microphones.
> >
> > This working group is expected to collaborate with other groups, such as the HTML Working Group, Device APIs and Policy Working Group, Web Real-Time Communications Working Group, or their successors, to define an API for accessing system devices such as microphones, speakers, and audio processors and channels. If work does not proceed elsewhere in a timely fashion, this group may define an API for audio device access.
> >
> > Success Criteria
> >
> > In order to advance beyond Candidate Recommendation, each specification is expected to have at least two independent implementations of each feature defined in the specification.
> >
> > On Mon, Nov 29, 2010 at 12:19 AM, Doug Schepers <schepers@w3.org> wrote:
> >>
> >> Hi, folks-
> >>
> >> Here is my rough first pass at a charter for the proposed Audio WG. Please review it and let me know what I should add, take out, or fix. This is public, so feel free to share it around.
> >>
> >> http://www.w3.org/2010/12/audio-wg-charter.html
> >>
> >> I drew liberally from the Audio XG charter, and some of it may not be as appropriate for the Audio WG, but I thought much of it was still relevant.
> >> I would really like this incubator group to help produce a report on some use cases and requirements, to help clarify our goals in designing an audio API, if possible.
> >>
> >> (Note that for this initial charter period, the proposed Audio WG would deliver only an audio API, not any of the other things that might also be useful later, such as music markup.)
> >>
> >> Regards-
> >> -Doug Schepers
> >> W3C Team Contact, SVG, WebApps, and Web Events WGs
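As a concrete illustration of the "processing and synthesis directly in script" capability debated in this thread, here is a minimal sketch. It uses the Web Audio API names that were eventually standardized (AudioContext, createScriptProcessor), which postdate this discussion, so treat it as illustrative rather than as the API under debate; the sine tone and the waveshaping function are arbitrary examples, not anything proposed in the charter.

```js
// Minimal sketch of script-level synthesis and processing, using the
// later-standardized Web Audio API (illustrative only; this thread
// predates that standard).
const ctx = new AudioContext();

// Synthesis: generate PCM samples directly in script.
const synth = ctx.createScriptProcessor(4096, 0, 1); // no inputs, mono out
let phase = 0;
synth.onaudioprocess = (e) => {
  const out = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < out.length; i++) {
    out[i] = 0.2 * Math.sin(phase);                 // a 440 Hz sine tone
    phase += (2 * Math.PI * 440) / ctx.sampleRate;
  }
};

// Processing: read each input sample, transform it, write it back out.
const shaper = ctx.createScriptProcessor(4096, 1, 1);
shaper.onaudioprocess = (e) => {
  const input = e.inputBuffer.getChannelData(0);
  const out = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < input.length; i++) {
    out[i] = Math.tanh(4 * input[i]);               // simple waveshaping
  }
};

synth.connect(shaper);
shaper.connect(ctx.destination); // browsers may require a user gesture first
```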
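The draft charter's contrast between the two experimental approaches can be sketched in the same way: Mozilla's API gives script direct read/write access to the sample stream, while the WebKit prototype wires together native processing nodes from script. The moz- and webkit-prefixed names below are from those experimental implementations as they existed around this time; the exact signatures should be treated as assumptions for illustration.

```js
// Style 1: direct write access to the stream; script computes every sample
// (the approach of Mozilla's experimental Audio Data API).
const el = new Audio();
el.mozSetup(1, 44100);                       // 1 channel at 44.1 kHz
const samples = new Float32Array(44100);
for (let i = 0; i < samples.length; i++) {
  samples[i] = 0.2 * Math.sin((2 * Math.PI * 440 * i) / 44100);
}
el.mozWriteAudio(samples);                   // push one second of PCM

// Style 2: a graph of native nodes configured from script
// (the approach of the WebKit prototype in Safari and Chrome).
const ctx = new webkitAudioContext();
const source = ctx.createBufferSource();     // would play a decoded AudioBuffer
const gain = ctx.createGainNode();           // gain runs in native code
gain.gain.value = 0.5;
source.connect(gain);
gain.connect(ctx.destination);
source.noteOn(0);                            // later renamed start()
```

The trade-off the charter leaves open is visible in the two styles: computing everything in script maximizes flexibility, while the graph approach keeps common operations in the native browser implementation for performance.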
Received on Friday, 11 February 2011 21:33:41 UTC