- From: John Byrd <jbyrd@giganticsoftware.com>
- Date: Fri, 17 May 2013 11:17:56 -0700
- To: public-audio@w3.org
- Message-ID: <CAM5hBBuXq4W3-ysu24y0Qs5ZSN1JKCA-kNESyeqTNMpYeVTzRA@mail.gmail.com>
Greetings all, I figured it might be neighborly to introduce myself to the list. I'm John Byrd, and I run an audio middleware company out of Irvine, CA, USA. I've worked in the video game industry for almost 20 years, mostly on the audio side, and my company makes audio rendering engines for game consoles, tablets, and other devices.

In particular, our previous rendering engine, GiganticAudio 1.0, which I wrote myself in 2006, bears a lot of technical resemblance to WebAudio. Having browsed the source code in Blink, I see many very familiar concepts: our engine was also built around audio contexts, nodes, and interconnects between those nodes (the short sketch in the postscript below shows the pattern). The resemblance, which I think is coincidental, is rather striking.

That brings me to one item of discussion, and I'd appreciate the team's input on it. Although I am new to the sources, I do not see any reason why the WebAudio architecture must, by definition, be tied to web browsers. The WebAudio objects are all accessible and manipulable via the JavaScript DOM, and the WebKit/Blink implementation depends, for whatever reason, on the <wtf> headers, but I see no technical reason why this implementation could not be forked and repurposed for native audio rendering applications. If anyone can think of a reason why this can't or shouldn't be done, I would love to hear it.

In any case, I learned a lot from putting a fundamentally similar architecture into large-scale production on a number of video games, and I hope to be of service to the WebAudio team in the future.

--
John Byrd
Gigantic Software
2102 Business Center Drive, Suite 210-D
Irvine, CA 92612-1001
http://www.giganticsoftware.com
T: (949) 892-3526
F: (206) 309-0850
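P.S. For anyone on the list who has not yet poked at the API, the context/node/interconnect pattern I am describing boils down to a few lines of script. This is only an illustrative sketch against the published spec, not a claim about any particular implementation (some current builds may still require the webkitAudioContext prefix):

    // A minimal context/node/interconnect graph in the Web Audio API.
    var ctx = new AudioContext();      // the context owns the whole graph

    var osc = ctx.createOscillator();  // source node (a 440 Hz sine by default)
    var gain = ctx.createGain();       // processing node

    osc.connect(gain);                 // interconnect: oscillator -> gain
    gain.connect(ctx.destination);     // interconnect: gain -> hardware output

    gain.gain.value = 0.25;            // parameters live on the nodes themselves
    osc.start(0);                      // begin rendering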
Received on Friday, 17 May 2013 20:27:10 UTC