RE: new draft of recording



From: rocallahan@gmail.com [mailto:rocallahan@gmail.com] On Behalf Of Robert O'Callahan
Sent: Monday, April 01, 2013 10:13 PM
To: Travis Leithead
Cc: Jim Barnett; public-media-capture@w3.org
Subject: Re: new draft of recording

On Tue, Apr 2, 2013 at 10:35 AM, Travis Leithead <travis.leithead@microsoft.com<mailto:travis.leithead@microsoft.com>> wrote:
OK. What does 'imageHeight' mean? I thought it was the image height the UA had selected after capturing has begun. Are you saying it's the height the author requested via setOptions? How is it supposed to be used by apps? Maybe we should just remove it?

>Ditto for the MIME type. An app can get that from the first Blob delivered, can't it?
These properties report what the recorder is currently configured to use, which might differ from the options you provided; in particular, since the constraints you provide might only specify a range of values, you'll eventually want to know exactly what the dimensions are. For the MIME type, it seems common enough to want to know, before starting recording, whether the format you want is supported.

It seems to me quite constraining for the implementation to have to figure out the imageWidth/height and MIME type synchronously, if there are external frameworks involved or the automatically-selected MIME type depends on track contents. If we had canRecordType as I suggested then it would be easy for the app to query whether a format is supported. So I suggest removing those attributes.
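The canRecordType query Rob proposes could be used roughly as follows. This is a minimal sketch of the idea only: canRecordType is the API proposed in this thread, not a shipping one, and the mock recorder object and its supported-format set below are hypothetical stand-ins so the logic is runnable.

```javascript
// Mock stand-in for a UA's MediaRecorder statics (hypothetical; the
// real proposal would put canRecordType on the MediaRecorder interface).
const mockRecorderStatics = {
  // Formats this hypothetical UA claims to be able to record.
  supported: new Set(['video/webm', 'audio/ogg']),
  canRecordType(mimeType) {
    return this.supported.has(mimeType);
  },
};

// App-side helper: pick the first format the UA can record, instead of
// starting a recording and inspecting the first delivered Blob's type.
function pickSupportedType(recorder, preferredTypes) {
  return preferredTypes.find((t) => recorder.canRecordType(t)) || null;
}
```

With the mock above, `pickSupportedType(mockRecorderStatics, ['video/mp4', 'video/webm'])` returns `'video/webm'`, letting the app decide on a format before recording starts.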


The utility of stop/start/pause/resume events seems questionable. These happen in response to method calls on the MediaRecorder, so authors already know when these things are happening. Are there use-cases where these events are useful?
>> The idea is that these are asynchronous commands, and the app needs to know when they actually go into effect.  This is particularly the case for start(), which can be called before any media is available.  I’m less sure of the case for pause/resume, but the events were added for consistency.  In general, stop/pause/resume should take effect quite quickly, but there can be a long delay before a call to start() produces any actual recording.
>It's not clear to me why an app would care when recording started (especially since the event fires async and can therefore be delayed a while after the actual start). Can you give an example where these events are needed?
Imagine a UI that wants to provide feedback based on when a stream actually starts recording, vs. when the user presses the record button. I’ve seen at least a few UIs that use an “armed” state to represent this interim period.
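The “armed” pattern Travis describes can be sketched as a small UI state machine, assuming start() is asynchronous and a start event fires only once recording has genuinely begun. All names here are illustrative; this models the event flow, not the spec API.

```javascript
// Sketch of an "armed" record-button UI: pressing record calls start()
// and enters an interim state; only the UA's asynchronous start event
// moves the UI to "recording". (Illustrative names, not spec API.)
class RecordButtonUI {
  constructor() {
    this.state = 'idle'; // 'idle' | 'armed' | 'recording'
  }
  pressRecord() {
    // start() has been called, but media may not be flowing yet.
    this.state = 'armed';
  }
  onStartEvent() {
    // The start event fired: recording has actually begun.
    this.state = 'recording';
  }
  onStopEvent() {
    this.state = 'idle';
  }
}
```

The stop event plays the symmetric role: the UI shows "stopping" feedback until the recorder confirms it has actually stopped.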

OK I guess. What about stop, pause and resume?
>> They’re part of the same use case that Travis described – letting the UI provide feedback about when recording is actually happening.
At one of our prior F2F meetings, one of the use cases for warnings was to alert the app when the inbound stream configuration changes in a way that the recorder can still support, but that the code might want to respond to. For example, new tracks being added/removed while recording.

That's covered by track add/remove events already specced on MediaStreams.
>> It’s a more complicated case than that. Whether a track can be added to a recording that’s underway depends on the track type and the MIME type. So the add/remove track events are not sufficient by themselves.
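Jim's point can be sketched as a compatibility check that depends on both the track kind and the configured container. The table below is a hypothetical example of what a UA might support, not real behavior from any implementation.

```javascript
// Sketch of why addtrack/removetrack events alone aren't enough: the
// app also needs to know whether the recorder's configured MIME type
// can absorb the new track. Hypothetical compatibility table.
const containerAccepts = {
  'video/webm': new Set(['audio', 'video']),
  'audio/ogg': new Set(['audio']),
};

function canAddTrackWhileRecording(mimeType, trackKind) {
  const kinds = containerAccepts[mimeType];
  return Boolean(kinds && kinds.has(trackKind));
}
```

For instance, adding a video track to an audio-only container would fail even though the MediaStream itself fired a perfectly ordinary addtrack event.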

Rob
--
"If you love those who love you, what credit is that to you? Even sinners love those who love them. And if you do good to those who are good to you, what credit is that to you? Even sinners do that."

Received on Tuesday, 2 April 2013 11:44:20 UTC