- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Wed, 28 Aug 2013 13:37:33 -0700
- To: "Mandyam, Giridhar" <mandyam@quicinc.com>
- Cc: Harald Alvestrand <harald@alvestrand.no>, "public-media-capture@w3.org" <public-media-capture@w3.org>
On 28 August 2013 13:28, Mandyam, Giridhar <mandyam@quicinc.com> wrote:

> I think it depends on what you consider “lossless”, or at least
> sufficiently high quality for RT speech recognition (which was the
> main use case for time-slicing on the record method).

Harald's point was that a real-time application will observe the delays resulting from packet loss and adjust what it sends accordingly. A straight media recorder might just let the buffers back up a little more. One optimizes for latency, the other doesn't.

It gets more interesting when you get a routing flap or some other less-time-constrained problem. That's when you might see a real-time implementation crank the send rate to zero. Again, the media recorder is going to buffer until it runs out of space.

Both scenarios are probably completely workable for an application that doesn't regard latency as paramount, but they will have different characteristics that - in some cases - will surface. Not all abstractions are equal, unfortunately.
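For context, the "time-slicing on the record method" mentioned above refers to MediaRecorder.start(timeslice). A minimal sketch of that usage pattern, assuming a browser environment; the 250 ms timeslice and the /recognize endpoint are illustrative choices, not from this thread:

    // Minimal sketch: time-sliced recording feeding a recognizer.
    // Note that the recorder keeps emitting chunks at a fixed cadence;
    // unlike a real-time transport, nothing here adapts the send rate
    // when the network degrades -- chunks simply accumulate.
    async function recordForRecognition(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);

      recorder.ondataavailable = (event: BlobEvent) => {
        // One Blob per timeslice; if uploads stall, they queue up here.
        void uploadChunk(event.data);
      };

      recorder.start(250); // request a dataavailable event roughly every 250 ms
    }

    // Hypothetical consumer; a real app might POST to a speech service.
    async function uploadChunk(chunk: Blob): Promise<void> {
      await fetch("/recognize", { method: "POST", body: chunk });
    }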
Received on Wednesday, 28 August 2013 20:38:00 UTC