- From: Alexandre GOUAILLARD <agouaillard@gmail.com>
- Date: Thu, 16 Jan 2014 11:54:13 +0800
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Cc: Rob Manson <robman@mob-labs.com>, public-webrtc <public-webrtc@w3.org>
- Message-ID: <CAHgZEq4n3s-BFu2GdFjCAo+fxQH=Q=8xLw9zLV5AKGcK1B7y8A@mail.gmail.com>
Thanks Justin for the hard work and the tentative deadlines.

Something that I don't see in the list but would love to see happening is
the capacity to read files and send them through a PeerConnection:

  video file => MediaStream => PeerConnection
  audio file (or Web Audio) => MediaStream => PeerConnection

I think Silvia did some experiments with Web Audio and it was not
completely successful for received (remote) stream processing. Would it be
possible to put it on the list as well?

Regards,
Alex.

On Thu, Jan 16, 2014 at 10:20 AM, Silvia Pfeiffer <silviapfeiffer1@gmail.com> wrote:
> Text streams are really a subclass of data streams. They could be sent
> as WebVTT, but I think in live captioning situations you will not find
> much time to do any text formatting. The use of WebVTT in live
> situations is more likely to be useful for streaming where WebVTT is
> provided either in-band or in chunks via MSE (or HTTP adaptive
> streaming approaches).
>
> In the case of video conferencing, captions would likely be
> transmitted just via a data channel, with individual letters or words
> sent as soon as they are typed. WebVTT could come in for recording
> here. I doubt it would be useful for transmission.
>
> HTH,
> Silvia.
>
>
> On Thu, Jan 16, 2014 at 11:19 AM, Rob Manson <robman@mob-labs.com> wrote:
> > I definitely think there's a role for metadata tracks - and perhaps that
> > relates more closely to VTT too (Silvia may like to comment on that).
> >
> > But I think that's a separate discussion - happy to have it added to the
> > list though.
> >
> > roBman
> >
> >
> > On 16/01/14 11:06 AM, piranna@gmail.com wrote:
> >>> At the moment the only streams that can be used are the ones generated
> >>> by gUM. It would also be useful to be able to add post-processed
> >>> streams (or probably more accurately MediaStreamTracks), e.g. use face
> >>> tracking to mask a person's face, or use object detection to highlight
> >>> a specific object, etc.
> >>>
> >>> At the moment the only way to send this data to the remote client is
> >>> via another channel (e.g. DC or WS)... and sync'ing is definitely an
> >>> issue there.
> >>>
> >> Now that you say this: TextStreams and (uni-directional) DataStreams,
> >> for example on-wire video subtitles or metadata (identifiers,
> >> coordinates...) of the detected objects, so that info can be sent
> >> synced to the other end. Theoretically, they should just be subclasses
> >> of MediaStreamTracks, and we have already had problems at my job
> >> trying to sync that info...

-- 
Alex. Gouaillard, PhD, PhD, MBA
------------------------------------------------------------------------------------
CTO - Temasys Communications, S'pore / Mountain View
President - CoSMo Software, Cambridge, MA
------------------------------------------------------------------------------------
sg.linkedin.com/agouaillard
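[Editor's note: the "captions over a data channel" approach Silvia describes, and the synchronization problem piranna raises, can be sketched as follows. This is a minimal illustration, not anything from the thread: the framing helpers (`encodeCaption`, `decodeCaption`, `captionsUpTo`) are hypothetical names invented here; only RTCDataChannel itself is a real API. The idea is to tag each caption fragment with a capture timestamp so the receiver can align it with media playout.]

```javascript
// Hypothetical framing helpers for sending live captions over an
// RTCDataChannel, one word (or letter) at a time, as discussed above.

// Encode one caption fragment with its capture timestamp (milliseconds).
function encodeCaption(text, captureTimeMs) {
  return JSON.stringify({ t: captureTimeMs, text: text });
}

// Decode a received data-channel message back into { t, text }.
function decodeCaption(message) {
  var parsed = JSON.parse(message);
  return { t: parsed.t, text: parsed.text };
}

// Render all caption text captured at or before a given playout time,
// which is one simple way to keep text roughly in sync with the media.
function captionsUpTo(fragments, playoutTimeMs) {
  return fragments
    .filter(function (f) { return f.t <= playoutTimeMs; })
    .map(function (f) { return f.text; })
    .join(' ');
}
```

On the sender, this might be wired up as `channel.send(encodeCaption(word, performance.now()))`, where `channel` comes from `pc.createDataChannel('captions')`; the receiver accumulates decoded fragments and calls `captionsUpTo` with its current playout time. The timestamp-based alignment is only approximate — it is exactly the cross-channel sync gap piranna describes.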
Received on Thursday, 16 January 2014 03:54:41 UTC