Defining the split on WebRTC deliverables

We've briefly touched on this before [1], but I'd like us to consider 
pursuing a clearer split between the Local Media API and the WebRTC API, 
given the intended use cases of each deliverable.

As a developer, if my intention is to obtain and work with the user's 
webcam and microphone on the client side without any streaming, I should 
be able to find that documentation separately from anything concerning 
P2P sharing of that data.

It happens that we're in the process of building a web 'toolchain' 
composed of a number of discrete interfaces that can act independently 
of each other. This is similar to some web toolchain concepts in use 
today: the same way that <video> data can be piped to <canvas> and piped 
back out for display in an <img> element, while each of those elements 
can also act independently of the others.
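To make the analogy concrete, here is a rough sketch of that kind of 
pipeline. The element ids are invented purely for illustration, and the 
exact getUserMedia signature is still in flux in the draft, so treat 
this as a shape, not a reference:

```javascript
// Sketch only: element ids ('cam', 'snapshot') are hypothetical, and the
// getUserMedia callback signature follows the current draft, which may change.
function pipeVideoToCanvas() {
  var video = document.getElementById('cam');
  var canvas = document.getElementById('snapshot');
  var ctx = canvas.getContext('2d');

  // Stage 1: local capture alone -- a MediaStream feeds a <video>
  // element with no PeerConnection involved at any point.
  navigator.getUserMedia({ video: true }, function (stream) {
    video.src = URL.createObjectURL(stream);
    video.play();

    // Stage 2: pipe a frame on to <canvas>; from there the pixels could
    // be exported (e.g. via canvas.toDataURL()) for display in an <img>.
    video.addEventListener('play', function () {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    });
  }, function (err) {
    // Capture failed or permission was denied; nothing else to tear down.
  });
}
```

Each stage works without the others: the <video> element, the <canvas>, 
and the capture source are all usable independently, which is the point 
of keeping the deliverables independent too.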

It also happens that both of these subjects require a significant 
amount of specification work, and that implementation of these features 
may proceed on different timescales. They should therefore be treated 
as separate deliverables that share a common timeline, each with its 
own test suite, so that an implementation of one can claim conformance 
without failing the >50% of tests that are non-applicable to it.

In short: MediaStream implementation, testing, and usage != 
PeerConnection implementation, testing, and usage.

The proposal is to move the 'Stream API' section from the WebRTC spec 
[2] to the current getUserMedia specification [3].

The WebRTC specification should define 'Peer-to-peer connections' in all 
their complexity without having to also account for generic MediaStream 
definitions. If there are any additional Peer-to-peer related operations 
required on MediaStream objects, then the WebRTC specification should 
extend the MediaStream object as required and include that functionality 
specifically in the WebRTC spec.
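As a script-level analogy for that kind of supplemental definition 
(all names below are invented to show the shape, not taken from either 
draft): the generic object and the P2P-specific additions can live in 
entirely separate places.

```javascript
// Illustrative only: names are hypothetical, not from either draft.

// What the Media Capture document would define: a generic, capture-only
// stream object with no networking-related members at all.
function MediaStreamSketch(label) {
  this.label = label;   // hypothetical generic member
  this.tracks = [];     // hypothetical generic member
}

// What the WebRTC document would add, alongside PeerConnection: extra
// members bolted onto the same object, specified only in the WebRTC spec.
MediaStreamSketch.prototype.markRemote = function () {
  this.remote = true;   // hypothetical P2P-specific member
  return this;
};
```

The base definition never needs to know the extension exists, which is 
exactly the dependency direction we want between the two specs.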

As a future task, the Media Capture Task Force may also evaluate how we 
could integrate the Streams API proposal from Microsoft [4] to enable 
additional local-media capture use cases. Streaming of all this 
captured data is an additional tool in the web toolchain, and we should 
try to split our development work accordingly.

Would there be any issues with moving 'Section 3: Stream API' from the 
WebRTC API spec to the Media Capture API spec? *

FWIW, if the DataStream interface [5] has any client-side usage without 
PeerConnection then it too should be treated as a separate deliverable IMO.

We should also try early on to converge [2], [4] and [5]. Anyone want to 
get the ball rolling on that in a separate thread?

br/ Rich

* I'm proposing that we rename the 'getusermedia' document to 'Media 
Capture API' unless there are any objections.

[1] http://lists.w3.org/Archives/Public/public-media-capture/2011Nov/0006.html

[2] http://dev.w3.org/2011/webrtc/editor/webrtc.html#stream-api

[3] http://dev.w3.org/2011/webrtc/editor/getusermedia.html

[4] http://dvcs.w3.org/hg/webapps/raw-file/tip/StreamAPI/Overview.htm

[5] https://docs.google.com/document/pub?id=16csYCaHxIYP83DzCZJL7relQm2QNxT-qkay4-jLxoKA&pli=1

Received on Wednesday, 7 December 2011 12:38:54 UTC