Re: Use-case: Auditing

From: cowwoc <cowwoc@bbs.darktech.org>
Date: Thu, 18 Jul 2013 15:19:18 -0400
Message-ID: <51E83FB6.3020809@bbs.darktech.org>
To: public-media-capture@w3.org

On 18/07/2013 3:45 AM, Harald Alvestrand wrote:
> On 07/17/2013 07:32 PM, cowwoc wrote:
>> Harald,
>> On 17/07/2013 7:28 AM, Harald Alvestrand wrote:
>>> Thus, if your automated peer implements the protocols but not the 
>>> APIs, it can do anything it wants with the incoming packets.
>>> Were you looking for a browser-based recorder or for a 
>>> non-browser-based recorder?
>>     I understand, but as mentioned in the previous thread I believe 
>> there is a strong demand for headless (server) peers. I don't think 
>> it is realistic (or beneficial) for the specification to ask every 
>> server vendor to start parsing the signaling layer. By exposing 
>> Object APIs for these use-cases we enable future specifications to 
>> modify implementation details without breaking applications out in 
>> the wild.
>>     We need to differentiate between Implementers and Application 
>> Developers. The latter should never have to interact with 
>> implementation details because then future changes will break their 
>> applications.
> I think we agree on where we want to be, but it feels like we're 
> talking past each other.
> Are you talking about a headless entity that implements the WebRTC 
> Javascript API (and presumably enough of other HTML specifications to 
> run applications served by webservers), as if it were a browser?
> That's what I was calling a "browser-based recorder" up above.

     Ideally, I shouldn't have to run a browser at all. The 
specification should publish two APIs, JS and C/C++ (aka the Native 
API), against the same use-cases. 
http://www.webrtc.org/webrtc-native-code-package mentions the latter, 
but the specification completely ignores its existence.

     From a practical point of view, I haven't heard of a way to embed a 
headless browser into a web server. As you can see from 
http://stackoverflow.com/q/16429862/14731, I am not the only one.

     Even if that existed, I suspect it would scale a lot worse than 
integrating against the Native API. Scalability was a nightmare in the 
Flash/H.264 world. If we do a better job in this space, we could capture 
a lot of hearts and minds.

> Non-browser-based devices have to parse signalling on their own. They 
> don't have Javascript APIs.

     I understand that is your position, but we're not playing the role 
of Integrators here. We are not bridging WebRTC with other technologies. 
We are doing the exact same thing as a peer running inside a browser, 
except that we're using C/C++. This is true both for headless servers 
(which do not act as gateways) and for mobile devices (which favor the 
use of native applications).

     In the space of Browser Vendors, Technology Integrators, and 
Application Developers, I argue that this falls into the last category 
and, as such, should be covered by the WebRTC specification. Even if you
forget about headless servers for now, there is a *huge* demand for 
native mobile clients.

     Are you reluctant to agree because of the amount of work involved? 
Or do you disagree that this is a kind of Application Development?

Received on Thursday, 18 July 2013 19:20:03 UTC