Re: RTT implementation = Real-Time Text implementation

Gunnar Hellström  Omnitor gunnar.hellstrom@omnitor.se +46 708 20 42 88 


-------- Original message --------
From: Michael Tuexen <Michael.Tuexen@lurchi.franken.de> 
Date: 2018-06-19 19:44 (GMT+01:00) 
To: Lennart Grahl <lennart.grahl@gmail.com> 
Cc: public-webrtc@w3.org, Gunnar Hellström <gunnar.hellstrom@omnitor.se> 
Subject: Re: RTT implementation = Real-Time Text implementation 

> On 19. Jun 2018, at 18:36, Lennart Grahl <lennart.grahl@gmail.com> wrote:
> 
> This may benefit from having more knobs to control SCTP-specific timers
> such as the retransmission interval. It might actually be the first use
> case I've encountered that could make some use out of `maxRetransmits:
> <some carefully chosen value above 0>` along with a specific
> retransmission interval.
Are there timing/performance requirements for RTT?
[GH] Yes. The ones best tied to human requirements are found in ITU-T F.700/F.703. They require any character to be presented to the receiving party no more than one second after it was entered by the sending party. Also, no more than 0.5 percent of characters may be lost or distorted in network conditions where voice transmission is just barely usable.
Various technical implementation standards and regulations have since made variants of that base. As you can see, the stated requirements are not very demanding, and it would be advisable to aim higher. One good addition in ETSI EN 301 549 is that if any character is lost, its place in the text stream must be marked.
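As a rough illustration of that loss-marking requirement, here is a sketch (the helper name is mine, not from any standard) of how a receiver might mark a detected gap. ITU-T T.140, the presentation protocol RFC 4103 carries, conventionally uses U+FFFD (REPLACEMENT CHARACTER) as the missing-text marker:

```javascript
// Marker for lost text, per the T.140 convention.
const MISSING_TEXT_MARKER = '\uFFFD';

// Append a received chunk to the displayed text. `lostBefore` is true when
// the receiver has detected (e.g. via a gap in sequence numbers) that one
// or more earlier chunks never arrived.
function appendChunk(displayed, chunk, lostBefore) {
  return lostBefore
    ? displayed + MISSING_TEXT_MARKER + chunk
    : displayed + chunk;
}
```

How loss is detected in the first place depends on the transport; over an unreliable channel the application would need its own sequence numbering.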
Users do not like jerky presentation of RTT. Therefore the RTP transport for RTT, RFC 4103, has a default packet transmission interval of 300 ms, which also makes it possible to meet the 1 s latency requirement under most network conditions.
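A minimal sketch of that 300 ms time-sampling, assuming a data channel transport (the class and its names are mine, not from any specification): keystrokes are buffered and flushed as one chunk per interval, whether or not the user has finished a "message".

```javascript
// Buffers entered text and sends it in chunks every `intervalMs`.
class TextSampler {
  constructor(send, intervalMs = 300) {
    this.send = send;           // e.g. (chunk) => dataChannel.send(chunk)
    this.intervalMs = intervalMs;
    this.buffer = '';
  }
  add(text) {                   // called on each keystroke
    this.buffer += text;
  }
  flush() {                     // called once per interval
    if (this.buffer === '') return;
    this.send(this.buffer);
    this.buffer = '';
  }
  start() { this.timer = setInterval(() => this.flush(), this.intervalMs); }
  stop() { clearInterval(this.timer); }
}
```

Shortening `intervalMs` would trade a little extra packet overhead for smoother presentation, which matches the suggestion later in this thread to decrease the sample time now that bandwidth is cheaper.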
Yes, it has been discussed whether a max-retransmission setting would be best for the WebRTC implementation of RTT. But users might instead appreciate the relative reliability of the data channel's "reliable" transport. That carries a risk of stalled transmission and broken connections in bad conditions, so it needs to be studied and decided what to do in such cases, and whether it can be determined that anything was lost.
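For the partially reliable alternative Lennart mentions, a hedged sketch of the channel options (the helper function and the value 2 are my assumptions, not a recommendation from any spec): `maxRetransmits` above 0 gives a lost chunk a few chances without letting one loss stall all later text, while `ordered` preserves character order as T.140 text requires.

```javascript
// Hypothetical helper: RTCDataChannel init options for real-time text
// with partial reliability.
function rttChannelInit(maxRetransmits = 2) {
  return {
    ordered: true,     // deliver chunks in the order they were sent
    maxRetransmits,    // give up on a chunk after this many retransmissions
  };
}

// Usage, in a browser with an RTCPeerConnection `pc`:
//   const channel = pc.createDataChannel('rtt', rttChannelInit());
```

Note that with `maxRetransmits` set, a chunk can silently vanish, so the application would still need its own sequence numbering to notice and mark the loss.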
/Gunnar.

Best regards
Michael
> 
> Cheers
> Lennart
> 
> 
> On 19.06.2018 11:27, Gunnar Hellström wrote:
>> This discussion started with RTT meaning "Real-Time Text" which is
>> time-sampled text used in conversational sessions, often together with
>> audio and video.
>> 
>> The time sampling is traditionally done in roughly 300 ms samples in
>> order not to cause much load. Any new text entered within 300 ms is
>> transmitted as a chunk, regardless of whether the user has indicated an
>> end of message. This way of sending text gives a much stronger sense of
>> being connected between users in intensive conversations than
>> transmission of completed messages does.
>> 
>> Today, when bandwidth and processing are less expensive, it could be
>> worthwhile to decrease the sample time, so that latencies close to those
>> used for audio and video in conversational sessions are achieved.
>> 
>> It should be possible to use WebRTC data channel for Real-Time Text.
>> 
>> The synchronism requirements versus video and audio are mild. Users
>> barely notice an asynchronism of 500 ms or 1 second. Some applications,
>> like speech-to-text, transmit complete words, and that is also allowed.
>> 
>> So, I do not know if the implementation needs to build on the
>> synchronized media use case. It may just be sufficient to use regular
>> WebRTC data channels with suitable characteristics. I like the
>> simplicity of the "reliable" transfer in data channels, but not the risk
>> of long delays in case of transmission problems.
>> 
>> Since the topic is mentioned in the initial functionality goals for Data
>> Channels, but not mentioned in RFC 7478, I suggest that it is included
>> in the NV discussions.
>> 
>> /Gunnar
>> 
>> 
>> 
>> Den 2018-06-19 kl. 10:07, skrev Harald Alvestrand:
>>> Existing RTT measurements:
>>> 
>>> https://w3c.github.io/webrtc-stats/webrtc-stats.html#dom-rtcremoteinboundrtpstreamstats-roundtriptime
>>> 
>>> 
>>> https://w3c.github.io/webrtc-stats/webrtc-stats.html#dom-rtcicecandidatepairstats-totalroundtriptime
>>> 
>>> 
>>> 
>>> On 06/19/2018 09:06 AM, Gunnar Hellström wrote:
>>>> Den 2018-06-19 kl. 08:46, skrev Bernard Aboba:
>>>> 
>>>>> In practice, the requirement for "synchronized data" can be supported
>>>>> by allowing applications to fill in the payload format defined in RFC
>>>>> 4103.
>>>>> 
>>>>> This enables RTT to be implemented in Javascript on top of an "RTP
>>>>> data channel" transport, utilizing the existing RTCDataChannel
>>>>> interface.
>>>>> 
>>>>> So in practice the need for RTT support can be included in a
>>>>> "synchronized data" requirement, if properly implemented.
>>>> Yes, it can be specified with current mechanisms, it is just a matter
>>>> of selecting some properties and values and getting it specified. A
>>>> standard is needed so that gateways and bridges can be developed
>>>> separately from user agents, and so that, as you say, it all gets
>>>> "properly implemented". So far, the latency requirements have been
>>>> slightly less strict than for audio and video in conversational
>>>> sessions, while the user is typing the text, but now, with automatic
>>>> speech-to-text becoming useful, the requirement for short delays is
>>>> becoming stricter.
>>>> 
>>>> /Gunnar
>>>>> 
>>>>> ________________________________________
>>>>> From: Peter Thatcher [pthatcher@google.com]
>>>>> Sent: Monday, June 18, 2018 10:49 PM
>>>>> To: Gunnar Hellström
>>>>> Cc: public-webrtc@w3.org
>>>>> Subject: Re: WebRTC NV Use Cases
>>>>> 
>>>>> Thanks, I added that as a new requirement to the conferencing use case.
>>>>> 
>>>>> On Mon, Jun 18, 2018 at 11:18 PM Gunnar Hellström
>>>>> <gunnar.hellstrom@omnitor.se<mailto:gunnar.hellstrom@omnitor.se>>
>>>>> wrote:
>>>>> I suggest including real-time text (= text transmitted at the same
>>>>> rate as it is created, so that it can be used for real conversational
>>>>> purposes) in the NV work.
>>>>> 
>>>>> It is not included in RFC 7478, but it is included as U-C 5 in section
>>>>> 3.2 of https://tools.ietf.org/html/draft-ietf-rtcweb-data-channel-13
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> It could possibly be done by continuing the work started in
>>>>> 
>>>>> https://datatracker.ietf.org/doc/draft-schwarz-mmusic-t140-usage-data-channel/
>>>>> 
>>>>> 
>>>>> 
>>>>> Use cases are e.g.
>>>>> 
>>>>> 1. conversational two-party sessions with video, audio and real-time
>>>>> text.
>>>>> 
>>>>> 2. conversational multi-party sessions with video, audio and
>>>>> real-time text.
>>>>> 
>>>>> 3. sessions with automatic speech-to-real-time-text conversion in
>>>>> one or both directions.
>>>>> 
>>>>> 4. interworking WebRTC with audio, video and real-time text and legacy
>>>>> SIP with audio, video and real-time text.
>>>>> 
>>>>> /Gunnar
>>>>> 
>>>>> 
>>>>> Den 2018-05-09 kl. 21:29, skrev Bernard Aboba:
>>>>>> On June 19-20 the WebRTC WG will be holding a face-to-face meeting
>>>>>> in Stockholm, which will focus largely on WebRTC NV.
>>>>>> 
>>>>>> Early on in the discussion, we would like to have a discussion of
>>>>>> the use cases that WebRTC NV will address.
>>>>>> 
>>>>>> Since the IETF has already published RFC 7478, we are largely
>>>>>> interested in use cases that are either beyond those articulated in
>>>>>> RFC 7478, or use cases in the document that somehow can be done
>>>>>> better with WebRTC NV than they could with WebRTC 1.0.
>>>>>> 
>>>>>> As with any successful effort, we are looking for volunteers to
>>>>>> develop a presentation for the F2F, and perhaps even a document.
>>>>>> 
>>>> 
>> 
>> 
> 

Received on Wednesday, 20 June 2018 20:49:27 UTC