- From: youennf via GitHub <sysbot+gh@w3.org>
- Date: Thu, 21 Jan 2021 09:12:10 +0000
- To: public-webrtc-logs@w3.org
A TransformStream can be implemented natively or in JS. TransformStreams have been used to implement native transforms like compression or text encoding/decoding. If it is done in JS, what is exposed to JS is each stream chunk, along with a TransformStreamDefaultController (https://streams.spec.whatwg.org/#ts-default-controller-class) used to continue filling the pipe. If you expose a read and a write stream, it is easy to use a transform, with something like read.pipeThrough(transform).pipeTo(write).

The issue I am investigating here is whether we want to expose ReadableStream/WritableStream as a sort of replacement for MediaStreamTrack. For instance, with Web Audio you can get the audio as a MediaStreamTrack, implement the transform as an AudioContext, and continue manipulating the transformed data as a separate MediaStreamTrack object. Web Audio also allows exporting the audio data, for instance to compress it and send it to the network, without creating a transformed MediaStreamTrack object.

I would derive some use cases/requirements for video:
1. JS access to MediaStreamTrack video frames (ideally off the main thread)
2. Ability to create a MediaStreamTrack from video frames, transformed or created by JS
3. Ability to efficiently transform a MediaStreamTrack into another MediaStreamTrack, implemented by 1 and 2 or by its own construct.

-- 
GitHub Notification of comment by youennf
Please view or discuss this issue at https://github.com/w3c/mediacapture-insertable-streams/issues/4#issuecomment-764493148 using your GitHub account

-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
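To make the read.pipeThrough(transform).pipeTo(write) composition concrete, here is a minimal sketch of a JS-implemented TransformStream, assuming only the WHATWG Streams API (global in browsers and recent Node). The doubling transform, the numeric chunks, and the `results` sink are illustrative choices, not part of the original message:

```javascript
// A JS transform: the transform() callback receives each chunk plus a
// TransformStreamDefaultController used to continue filling the pipe.
const doubler = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk * 2);
  },
});

// An illustrative readable source producing three numeric chunks.
const readable = new ReadableStream({
  start(controller) {
    controller.enqueue(1);
    controller.enqueue(2);
    controller.enqueue(3);
    controller.close();
  },
});

// A writable sink that collects whatever comes out of the transform.
const results = [];
const writable = new WritableStream({
  write(chunk) {
    results.push(chunk);
  },
});

// Compose exactly as in the message: read.pipeThrough(transform).pipeTo(write).
const pipeDone = readable.pipeThrough(doubler).pipeTo(writable);
pipeDone.then(() => console.log(results)); // [ 2, 4, 6 ]
```

A native MediaStreamTrack-backed pair would slot into the same shape, with video frames as the chunks flowing through the pipe.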
Received on Thursday, 21 January 2021 09:12:13 UTC