One requirement for many use cases is that it support inserting processing into the media pipeline. This leads to requirements for pretty hard realtime processing (20 ms being a long time), and thus to having to make APIs that make sense in workers or separate threads (like AudioProcessingWorklet).

Some use cases don't require that - they're perfectly workable for stuff that only needs to listen in on the media. For instance:

- Echo detectors used to create a "Fix your configuration" message rather than trying to remove the echo
- Speech recognizers that associate certain voices with certain persons - may be used to show the name of the person speaking; this doesn't need to be realtime, since having the name show up 0.5 seconds after the person starts speaking isn't a big deal
- Video quality monitors that look at encoded frames to figure out what QP values were used, and display a "low quality because sender thinks you have low bandwidth" type of message

I think these use cases can be significant, and can be unlocked with APIs that don't demand as much new spec & implementation work as the more ambitious "in-pipeline" use cases.

Harald, typing until the "aha" moment goes away

Received on Tuesday, 22 May 2018 13:39:12 UTC
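As a rough illustration of why the listen-only echo-detector case doesn't need in-pipeline access: the core could be a cross-correlation of the far-end (playout) samples against the near-end (microphone) samples, run well after the fact. This is only a sketch - `detectEcho`, its parameters, and the threshold are hypothetical, not part of any spec or proposed API:

```javascript
// Hypothetical listen-only echo check (not a spec'd API): cross-correlate
// the far-end signal with the near-end signal over a range of lags. A
// strong normalized correlation peak suggests the far-end audio is leaking
// back in, so the app can show a "Fix your configuration" message.
function detectEcho(farEnd, nearEnd, maxLag, threshold = 0.8) {
  let bestLag = -1;
  let bestCorr = 0;
  for (let lag = 0; lag <= maxLag; lag++) {
    let dot = 0, farNorm = 0, nearNorm = 0;
    for (let i = 0; i + lag < nearEnd.length && i < farEnd.length; i++) {
      dot += farEnd[i] * nearEnd[i + lag];
      farNorm += farEnd[i] * farEnd[i];
      nearNorm += nearEnd[i + lag] * nearEnd[i + lag];
    }
    // Normalized correlation is at most 1 (Cauchy-Schwarz); guard against
    // all-zero windows to avoid dividing by zero.
    const corr = dot / (Math.sqrt(farNorm * nearNorm) || 1);
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return { echoDetected: bestCorr >= threshold, lag: bestLag, correlation: bestCorr };
}
```

Since this only reads samples, it tolerates being half a second behind the live audio - exactly the property that separates these use cases from the hard-realtime, in-pipeline ones.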