W3C home > Mailing lists > Public > public-webrtc@w3.org > November 2012

Re: [rtcweb] Text communication in RTCWEB sessions -job overview

From: Randell Jesup <randell-ietf@jesup.org>
Date: Mon, 19 Nov 2012 08:10:30 -0500
Message-ID: <50AA2FC6.6090409@jesup.org>
To: rtcweb@ietf.org
CC: "public-webrtc@w3.org" <public-webrtc@w3.org>
On 11/18/2012 2:55 PM, Gunnar Hellström wrote:
> After a rapid browse, this is my view of what needs to be done to 
> specify real-time text in RTCWEB and WebRTC.
>

> 9. w3c.webrtc
>     add rtt API, e.g. to Network Stream API
>     important, urgent
>
> 10. w3c getusermedia
>     add rtt to GetUserMedia API
>     important, urgent

W3C-ish stuff:
This one would be complex, and probably shouldn't be in getUserMedia 
(though it could be).  A closer equivalent API would be mediastream = 
video_element.captureStreamUntilEnded() - create a MediaStream from a 
video (or audio) element; you could do something like that for text.

To implement this, there would need to be a DOM element to capture it via.  
The main options would be an <input> element or a media element.  But 
there are more issues, especially with a media element as a source, and 
serious UI issues if this is specified in them.

A better question would be "What's the minimal (and most generic) API 
that covers the use cases, and do those abilities already exist?"

I'll make an assertion: existing facilities for access to keystroke data 
(and certainly for text display) are sufficient, and the only API 
*needed* (one which also covers all sorts of other use cases) would be 
the ability to insert/receive text data in a MediaStream from JS.  E.g., 
textstream = <whatever>; textstream.insert(key) (or keys).  And the same 
in the other direction: textstream.ondata(function(keys, time) { do 
whatever with the keys });  For text, I'd assert that's a far more 
natural JS API for a MediaStream - and it also happens to be very close 
to the API of a simple keystream over WebSocket or DataChannel.

Another thing to note is that WebSockets and DataChannels are virtually 
identical (on purpose), so any work to standardize text chat over 
DataChannels would apply equally well to chat over WebSockets, which 
would have lots of real-world applications.
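Since both transports expose the same send()/onmessage surface, chat 
code written against that shape doesn't care which one it's handed.  A 
sketch, with a loopback object standing in for either a WebSocket or a 
DataChannel (the loopback and the attachChat helper are illustrative 
assumptions, not existing API):

```javascript
// A chat helper written against the surface shared by WebSocket and
// RTCDataChannel: a send(data) method plus an onmessage handler.
// Either transport object could be passed in unchanged.
function attachChat(channel, onLine) {
  channel.onmessage = function (event) {
    onLine(event.data);            // deliver each received chat line
  };
  return function sendLine(line) {
    channel.send(line);            // same call on either transport
  };
}

// Loopback stand-in for the transport, for demonstration only.
const loopback = {
  onmessage: null,
  send(data) { this.onmessage({ data }); },
};

const lines = [];
const sendLine = attachChat(loopback, (line) => lines.push(line));
sendLine("hello");
sendLine("world");
console.log(lines);  // ["hello", "world"]
```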

This assumes that a MediaStream is the correct transfer mechanism (and 
the correct API at the W3C level), which is definitely not something I'd 
agree to at this point.  I also wouldn't rule it out (I see some 
arguments in favor, as well as against) - but that discussion needs to 
occur in public-webrtc@w3, and should do so before going too far down 
the path here assuming a MediaStream is the mechanism.

For this part of the conversation, I strongly suggest it occur on 
public-webrtc, so reply there please.

-- 
Randell Jesup
randell-ietf@jesup.org
Received on Monday, 19 November 2012 13:11:13 GMT
