Re: What would you like to see in WebRTC next? A low-level API?

On 2/1/2018 2:22 AM, Sergio Garcia Murillo wrote:
> If you can send a metadata payload within RTP, you can sync with
> external sources. Jus' sayin' ;)
>
> For example, if you create a JSON object with a UUID and send it via
> datachannel, you could send that UUID in the RTP metadata.
>
> Then on the receiving side you could get an onmetadata event on the
> MediaStreamTrack, retrieve the UUID sent via RTP, and sync it with the
> data sent via datachannel.
>
> If the JSON is small, you could even send the full JSON in the RTP
> metadata and not send it via datachannels at all.

If you're trying to add more channels of sensory information (to be more 
explicit than Martin was), you should define an RTP payload for the data 
- then it will be delivered in the same manner, and synced 
appropriately.  RTP's purpose in life is (mostly) delivery of media like 
this.  It will also create a standard way to stream the data in realtime 
that can be used in other contexts.
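
To make that concrete, here's a rough TypeScript sketch of what packetizing 
such a payload might look like. Browsers don't expose raw RTP to JS, so this 
would live in a native or server-side RTP stack, and the payload type, SSRC, 
clock and one-reading framing are all invented for illustration:

  // Illustrative only: browsers don't expose raw RTP to JS, so a custom
  // payload like this would live in a native or server-side RTP stack.
  // The PT, SSRC and one-reading framing are made up, not a registered format.

  interface HapticSample {
    pressure: number;       // whatever your sensor reports, 0..65535 here
    captureTimeMs: number;  // capture time on the sender's media clock
  }

  const SSRC = 0x1234abcd;   // example SSRC for the "haptics" stream
  const PAYLOAD_TYPE = 100;  // example dynamic PT, negotiated in SDP
  const CLOCK_RATE = 90000;  // reuse the video clock rate

  let seq = 0;

  function packetize(sample: HapticSample): Uint8Array {
    const rtpTimestamp = Math.round((sample.captureTimeMs / 1000) * CLOCK_RATE) >>> 0;
    const buf = new Uint8Array(12 + 2);  // 12-byte RTP header + 2-byte payload
    const view = new DataView(buf.buffer);

    buf[0] = 0x80;                        // V=2, no padding/extension/CSRCs
    buf[1] = PAYLOAD_TYPE & 0x7f;         // marker bit clear
    view.setUint16(2, seq++ & 0xffff);    // sequence number
    view.setUint32(4, rtpTimestamp);      // same timestamp clock as the video
    view.setUint32(8, SSRC);
    view.setUint16(12, sample.pressure);  // the whole "payload format": one reading
    return buf;
  }

The receiver then gets sync essentially for free, because RTCP sender reports 
map each stream's RTP timestamps onto the same NTP wallclock.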

If this is (mostly) research and the bandwidth required is <<64Kbps, you 
could feed it in as carefully-crafted "audio", let G.711 mangle it, and 
then demangle it on the far side.  Note that loss concealment in 
NetEq/etc would mess with your data, so some type of checksum/etc would 
be needed.  You could even use a G.711 audio track as a timing track, 
and encode timestamps in the audio (as pulses, whathaveyou) that 
correspond to the timing data you put in datachannels.
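
For what it's worth, a rough sketch of the carefully-crafted-audio idea (all 
values made up): pack two bits per PCM sample at four widely separated 
amplitude levels, so mu-law quantization error can't push a level across a 
decision boundary, and add a trailing XOR checksum so frames that NetEq's 
concealment has rewritten get dropped rather than decoded as garbage:

  // Rough sketch, all values made up: four amplitude levels spaced 16000
  // apart, so mu-law quantization error (a few hundred LSBs at worst on a
  // 16-bit sample) can't push a level across the halfway decision points.
  const LEVELS = [-24000, -8000, 8000, 24000];  // one level per 2-bit symbol

  function encodeFrame(data: Uint8Array): Int16Array {
    const out = new Int16Array((data.length + 1) * 4);  // 4 symbols/byte, +1 checksum byte
    let checksum = 0;
    let i = 0;
    const putByte = (b: number) => {
      for (let shift = 6; shift >= 0; shift -= 2) {
        out[i++] = LEVELS[(b >> shift) & 0x3];
      }
    };
    for (const b of data) {
      checksum ^= b;
      putByte(b);
    }
    putByte(checksum);
    return out;  // feed these samples in as the "audio"
  }

  function decodeFrame(samples: Int16Array): Uint8Array | null {
    const bytes = new Uint8Array(samples.length / 4);
    for (let i = 0; i < bytes.length; i++) {
      let b = 0;
      for (let j = 0; j < 4; j++) {
        const s = samples[i * 4 + j];
        // nearest-level decision; quantization noise is far smaller than the spacing
        const symbol = LEVELS.reduce(
          (best, lvl, idx) => (Math.abs(s - lvl) < Math.abs(s - LEVELS[best]) ? idx : best), 0);
        b = (b << 2) | symbol;
      }
      bytes[i] = b;
    }
    const payload = bytes.subarray(0, bytes.length - 1);
    const ok = payload.reduce((acc, b) => acc ^ b, 0) === bytes[bytes.length - 1];
    return ok ? payload : null;  // null => frame was concealed/corrupted, drop it
  }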

You could include NTP timestamps in the datachannel data, and you could 
ask for a way to read out the remote NTP time of data on a Track, and 
sync by hand.  If you don't need perfect sync when the data hiccups, 
that's fine.  If you need perfect sync when RTP stalls, or when the 
jitter buffer decides to grow or shorten its length/delay, then you 
really need to be relying on RTP itself for sync, or you'd need some way 
to create locally-synced tracks from the datachannel data somehow -- you 
can't realistically get events notifying you of every delay tweak the 
jitter buffer makes (which may well be every 10ms) and reliably react to 
them in JS in time.
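
A sketch of that hand-sync approach, assuming the browser exposes something 
like the webrtc-stats estimatedPlayoutTimestamp (the NTP time of a frame that 
was recently played out) -- which is exactly the readout you'd have to ask 
for, and isn't something you can count on today. The message shape and the 
20ms poll are arbitrary:

  // Hand-sync sketch: each datachannel message carries the sender-side NTP
  // time of the media it belongs to; the receiver holds messages back until
  // the track's playout has caught up. Assumes the browser exposes the
  // webrtc-stats estimatedPlayoutTimestamp; the message shape is invented.

  interface TimedMessage {
    ntpMs: number;     // sender NTP time the sample corresponds to
    payload: unknown;  // the actual tactile/sensor data
  }

  function startHandSync(
    receiver: RTCRtpReceiver,
    channel: RTCDataChannel,
    deliver: (payload: unknown) => void,
  ) {
    const pending: TimedMessage[] = [];

    channel.onmessage = (ev) => {
      pending.push(JSON.parse(ev.data) as TimedMessage);
      pending.sort((a, b) => a.ntpMs - b.ntpMs);
    };

    const timer = setInterval(async () => {
      const stats = await receiver.getStats();
      let playoutNtpMs: number | undefined;
      stats.forEach((report: any) => {
        if (report.estimatedPlayoutTimestamp !== undefined) {
          playoutNtpMs = report.estimatedPlayoutTimestamp;
        }
      });
      if (playoutNtpMs === undefined) return;  // stat not available: can't sync

      // Release everything whose media has (roughly) reached playout.
      while (pending.length && pending[0].ntpMs <= playoutNtpMs) {
        deliver(pending.shift()!.payload);
      }
    }, 20);

    return () => clearInterval(timer);
  }

As noted above, this holds up only as long as the jitter buffer isn't 
actively changing its delay under you.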

Note that datachannels are generally best-effort, though they'll 
generally fare better than video does if there's a bandwidth restriction 
(which is probably good from your point of view).  Also: no one has 
implemented DataChannel priorities (or use of ndata) yet.
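
For the sensor data itself you'd probably want the channel configured as 
unordered with no retransmits, so stale readings aren't queued up behind a 
lost one. A minimal sketch (the "haptics" label is made up; the priority 
member is spec'd but, per the above, not implemented anywhere):

  // Minimal sketch: an unordered, zero-retransmit channel so stale readings
  // aren't queued up behind a lost one. The "haptics" label is made up.
  const pc = new RTCPeerConnection();

  const haptics = pc.createDataChannel("haptics", {
    ordered: false,      // don't stall newer readings behind a lost one
    maxRetransmits: 0,   // a lost reading is simply dropped
    // priority: "high", // in the spec, but (as noted above) not implemented anywhere yet
  });

  haptics.onopen = () => haptics.send(JSON.stringify({ pressure: 42 }));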

    Randell Jesup

>
> Best regards
> Sergio
>
>
> On 01/02/2018 0:04, Martin Thomson wrote:
>> RTP synchronizes well with RTP.  Jus' sayin'.
>>
>> On Thu, Feb 1, 2018 at 9:25 AM, Sergio Garcia Murillo
>> <sergio.garcia.murillo@gmail.com> wrote:
>>> +1
>>>
>>> Although I would prefer an option to attach small metadata to a video
>>> frame at the RTP level, which would trigger an event on the
>>> corresponding MediaStreamTrack.
>>>
>>> That metadata could either be the end data needed by the app, or a key
>>> for synchronizing the data sent/received via datachannel.
>>>
>>> Best regards
>>> Sergio
>>>
>>>
>>> On 31/01/2018 1:12, Toshiya Nakakura wrote:
>>>> Hi All,
>>>>
>>>> I'm sorry, I might be making a new thread because I haven't received
>>>> the original mail.
>>>> I'm a Ph.D. student researching immersive-style telepresence.
>>>> I think you can get a rough idea of my research from this video:
>>>> <https://www.youtube.com/watch?v=KoC1iTOmYTg&t=38s>
>>>>
>>>> Recently, I have been trying to implement this system with Web
>>>> technology. I started with WebRTC and WebVR, but I faced several
>>>> problems. I think that sharing this edge case and its issues will
>>>> contribute to the discussion of WebRTC Next.
>>>> This mail also includes parts which should be discussed in the IETF.
>>>>
>>>> The major one is synchronization between the five senses' information.
>>>> The current WebRTC MediaStream supports only video and audio,
>>>> so users have to send tactile information over DataChannel.
>>>> MediaStream synchronizes its tracks,
>>>> but there isn't any option to synchronize between MediaStream and
>>>> DataChannel.
>>>> So I cannot implement the synchronization between vision and touch,
>>>> and it produces a very weird sensation when the robot grabs something.
>>>>
>>>> I hope for an API to support this edge use-case, along the lines of
>>>> the following plans:
>>>>
>>>> Plan A. MediaStream supporting other kinds of five-sense information
>>>> Plan B. An API to synchronize data and media
>>>> Plan C. An API for observing MediaStream events, such as retrieving a
>>>> video image from a ring buffer, so that the user can obtain the
>>>> information needed to implement the synchronization engine themselves
>>>>
>>>> I would appreciate it if you have an interest in my use-case.
>>>>
>>>> Regards,
>>>> Toshiya Nakakura
>>>>
>>>>
>>>>
>>>
>
>

Received on Thursday, 1 February 2018 19:07:29 UTC