- From: <piranna@gmail.com>
- Date: Fri, 5 Jul 2013 23:16:33 +0200
- To: Martin Steinmann <martin@ezuce.com>
- Cc: tim panton <thp@westhawk.co.uk>, Martin Thomson <martin.thomson@gmail.com>, Parthasarathi R <partha@parthasarathi.co.in>, cowwoc <cowwoc@bbs.darktech.org>, Christer Holmberg <christer.holmberg@ericsson.com>, Iñaki Baz Castillo <ibc@aliax.net>, Robin Raymond <robin@hookflash.com>, Roman Shpount <roman@telurix.com>, Adam Bergkvist <adam.bergkvist@ericsson.com>, Ted Hardie <ted.ietf@gmail.com>, "public-webrtc_w3.org" <public-webrtc@w3.org>, Eric Rescorla <ekr@rtfm.com>
> The primary application is voice and video, at least in my book.

I've always found this the most annoying point of WebRTC. Why so much focus on audio & video, relegating DataChannels to second place (it took almost a year to get a specification and some implementations!)? Wouldn't it be easier and simpler to implement audio & video support directly over DataChannels, perhaps requiring them to be unreliable? Developing the API from this point of view would also make it a really simple one. I think that focusing so heavily on audio & video, and on media in general, is the reason the API is so SDP-oriented and why people are so reluctant to develop a high-level API.

--
"If you want to travel around the world and be invited to speak in a lot of different places, just write a Unix operating system." – Linus Torvalds, creator of the Linux operating system
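[Editor's note: a minimal sketch of the "unreliable DataChannel" idea mentioned above, using the standard `RTCPeerConnection.createDataChannel()` options. The `sendFrame` helper and the application-level framing are hypothetical, purely illustrative; nothing here is part of any media-over-DataChannel specification.]

```typescript
// Sketch: an unordered, no-retransmit DataChannel, the kind of
// UDP-like transport the message suggests for latency-sensitive media.
const pc = new RTCPeerConnection();

// ordered:false + maxRetransmits:0 means lost or late frames are simply
// dropped instead of blocking newer ones (partial reliability over SCTP).
const mediaChannel = pc.createDataChannel("media", {
  ordered: false,
  maxRetransmits: 0,
});

mediaChannel.binaryType = "arraybuffer";

// Hypothetical application-level framing: the application, not SDP,
// decides what an encoded audio/video frame looks like on the wire.
function sendFrame(frame: ArrayBuffer): void {
  if (mediaChannel.readyState === "open") {
    mediaChannel.send(frame);
  }
}

mediaChannel.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  // Decode and render event.data here; handling loss and reordering
  // is the application's responsibility with these channel settings.
};
```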
Received on Friday, 5 July 2013 21:17:21 UTC