Re: What would you like to see in WebRTC next? A low-level API?

I'm sorry, but I'm not sure about the timecodes.

Best Regards,
Toshiya Nakakura

On 2018-02-02 17:01, Harald Alvestrand wrote:
> Every time I talk to real video people about the need for 
> synchronization, they say "SMPTE timecodes".
> We've largely ignored those in WebRTC, but do they have a place?
>
> On 02/02/2018 02:00 AM, Toshiya Nakakura wrote:
>> Thank you for this fruitful discussion.
>>
>> Yes, JavaScript programs cannot handle raw RTP binary currently.
>>
>> In the case where the user handles data generated in JavaScript in the 
>> browser, the metadata plan would work perfectly.
>> In my case, we need to consider the getUserMedia API or another API 
>> that offers tactile data to JavaScript programs.
>>
>> I am sure you already know this, but video follows the flow below.
>>
>> (A-1) get raw data -> (A-2) encode -> (A-3) generate RTP packets -> 
>> (A-4) transmit (on MediaStream) -> (A-5) extract payload -> (A-6) decode 
>> -> (A-7) buffering -> (A-8) play
>>
>> Also, tactile data follows this flow.
>>
>> (B-1) get raw data -> (B-2) encode -> (B-3) packetize -> 
>> (B-4) transmit (on DataChannel) -> (B-5) de-packetize -> (B-6) decode -> 
>> (B-7) buffering -> (B-8) play
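>>
>> A minimal sketch of the sending half of that flow, (B-1) through (B-4), 
>> over a DataChannel could look like the code below; readTactileSample() 
>> and encodeTactile() are hypothetical helpers standing in for (B-1) and 
>> (B-2)/(B-3):
>>
>>   // Assumes `pc` is an already-negotiated RTCPeerConnection.
>>   const channel = pc.createDataChannel('tactile');
>>   channel.binaryType = 'arraybuffer';
>>   setInterval(() => {
>>     const raw = readTactileSample();   // (B-1) get raw data (hypothetical)
>>     const packet = encodeTactile(raw); // (B-2)/(B-3) (hypothetical)
>>     if (channel.readyState === 'open') {
>>       channel.send(packet);            // (B-4) transmit on DataChannel
>>     }
>>   }, 1);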
>>
>> In the metadata plan, the (B-1) process, running in JavaScript, needs to 
>> set metadata on the (A-1) process in the browser, and the (B-7) process 
>> likewise needs to get metadata back from the (A-7) process.
>> Browsers therefore also need to offer methods and event triggers for 
>> (B-1) and (B-7).
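>>
>> As a purely hypothetical illustration of what such methods and event 
>> triggers could look like (setMetadata() and onmetadata do not exist in 
>> any browser today):
>>
>>   // Hypothetical API sketch -- none of these members exist yet.
>>   // Sender: (B-1) tags the frame captured by (A-1).
>>   videoTrack.setMetadata({ tactileSeq: seq });
>>   // Receiver: (B-7) reads the tag back out at the playout buffer (A-7).
>>   receivedTrack.onmetadata = (event) => {
>>     alignTactileBuffer(event.metadata.tactileSeq); // hypothetical helper
>>   };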
>> Ideally, browsers would eventually have tactile encoders and decoders, 
>> just as they do for video.
>> Currently, my program gets tactile data through the serial device API 
>> for Chrome Apps.
>>
>> As for getUserMedia, I have one more problem.
>> In my understanding, one getUserMedia API call grabs only one camera.
>> To produce 3D vision, the videos for the two eyes must be perfectly 
>> synchronized, but synchronization between media from separate 
>> getUserMedia calls isn't supported.
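>>
>> For example, grabbing two cameras today takes two separate calls, and 
>> nothing ties their capture clocks together (LEFT_CAMERA_ID and 
>> RIGHT_CAMERA_ID are placeholder deviceIds):
>>
>>   // Two independent calls, one per camera; the resulting tracks share
>>   // no common capture clock, so left/right frames can drift apart.
>>   const left = await navigator.mediaDevices.getUserMedia(
>>     { video: { deviceId: { exact: LEFT_CAMERA_ID } } });
>>   const right = await navigator.mediaDevices.getUserMedia(
>>     { video: { deviceId: { exact: RIGHT_CAMERA_ID } } });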
>>
>> Best Regards,
>> Toshiya Nakakura
>>
>> On 2018-02-01 20:41, Sergio Garcia Murillo wrote:
>>> AFAIK the problem is that the RTP timestamp is not known until after 
>>> the video frame is encoded, very deep in the video processing 
>>> pipeline, so it is not possible to retrieve it in JS at sending time.
>>>
>>> Best regards
>>> Sergio
>>>
>>> On 01/02/2018 11:50, Lennart Grahl wrote:
>>>> I really am not an expert when it comes to A/V, but there's a timestamp
>>>> field in RTP... if the API allowed retrieving that value
>>>> (.currentTimestamp) when sending and waiting for it when receiving
>>>> (.waitForTimestamp(timestamp))... shouldn't that work?
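>>>>
>>>> In code, the idea might read like this; .currentTimestamp and
>>>> .waitForTimestamp() are the proposed additions and exist nowhere
>>>> today, and applyTactile() is a hypothetical helper:
>>>>
>>>>   // On the sending peer: read the RTP timestamp of the outgoing frame
>>>>   // (`tactile` is the encoded tactile payload).
>>>>   const ts = sender.track.currentTimestamp;        // proposed
>>>>   dataChannel.send(JSON.stringify({ ts, tactile }));
>>>>
>>>>   // On the receiving peer: hold the data until that frame plays out.
>>>>   dataChannel.onmessage = async ({ data }) => {
>>>>     const msg = JSON.parse(data);
>>>>     await receiver.track.waitForTimestamp(msg.ts); // proposed
>>>>     applyTactile(msg.tactile);
>>>>   };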
>>>>
>>>> Cheers
>>>> Lennart
>>>>
>>>>
>>>> On 01.02.2018 08:22, Sergio Garcia Murillo wrote:
>>>>> If you can send a metadata payload within RTP, you can sync with
>>>>> external sources. Jus' sayin' ;)
>>>>>
>>>>> For example, if you create a JSON object with a UUID and send it via
>>>>> datachannel, you could send that UUID in the RTP metadata.
>>>>>
>>>>> Then on the receiving side you could get an onmetadata event on the
>>>>> media stream track, retrieve the UUID sent via RTP, and sync it with
>>>>> the data sent via datachannel.
>>>>>
>>>>> If the JSON is small, you could even send the full JSON in the RTP
>>>>> metadata and skip the datachannel entirely.
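>>>>>
>>>>> A rough sketch of that flow; sendMetadata() and onmetadata are
>>>>> hypothetical, and only the datachannel half exists today:
>>>>>
>>>>>   // Sender: the same UUID goes down both paths
>>>>>   // (`tactile` is the payload to be synced).
>>>>>   const id = crypto.randomUUID();
>>>>>   videoSender.sendMetadata(id);  // hypothetical RTP metadata API
>>>>>   dataChannel.send(JSON.stringify({ id, tactile }));
>>>>>
>>>>>   // Receiver: match the UUID from RTP against the queued datachannel
>>>>>   // messages (`pending` is a Map of received messages keyed by UUID).
>>>>>   receivedTrack.onmetadata = ({ metadata }) => { // hypothetical event
>>>>>     const msg = pending.get(metadata.id);
>>>>>     if (msg) playInSync(msg.tactile);            // hypothetical helper
>>>>>   };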
>>>>>
>>>>> Best regards
>>>>> Sergio
>>
>
> -- 
> Surveillance is pervasive. Go Dark.

Received on Monday, 5 February 2018 01:34:23 UTC