RE: new draft of recording

The text for imageHeight/imageWidth/mimeType says "the initial value will be a platform-supplied default". It seems better for the initial value to be defined, e.g. 0 and the empty string. When and how often do they change to a different value? The spec needs to say.
>> Don’t we want the initial value to be something usable?  The idea is that a developer can just call record and something useful will happen.  That means that height/width and mimeType have to be set to something practical, and that’s best left up to the UA (particularly since we don’t have an MTI mimetype).

>OK. What does 'imageHeight' mean? I thought it was the image height the UA had selected after capturing has begun. Are you saying it's the height the author requested via setOptions? How is it supposed to be used by apps? Maybe we should just remove it?

>Ditto for the MIME type. An app can get that from the first Blob delivered, can't it?

These properties report what the recorder is currently configured to, which might differ from the options you provided, especially since the constraints you supply might only specify a range of values; you’ll eventually want to know exactly what the dimensions are. For mimeType, it seems common enough to want to know, in advance of starting recording, whether the format you want is supported.
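As a sketch of the "know in advance" case: an app might probe a list of preferred formats before calling record. The capability-check predicate is an assumption here (something like a `MediaRecorder.isTypeSupported` static method); the draft does not yet define one, so it is injected as a parameter.

```javascript
// Pick the first container/codec combination the UA claims to support.
// `isSupported` stands in for an assumed capability check such as a
// MediaRecorder.isTypeSupported(type) static method; the exact name and
// shape are not settled in the current draft.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) {
      return type;
    }
  }
  return ''; // empty string: fall back to the UA's platform default
}

const preferred = [
  'video/webm;codecs=vp8,opus',
  'video/webm',
  'video/mp4',
];
// In a browser this might be:
//   pickMimeType(preferred, t => MediaRecorder.isTypeSupported(t));
```

If no candidate is supported, returning the empty string lets the app fall through to whatever practical default the UA selects, matching the "just call record and something useful happens" behavior discussed above.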

The utility of stop/start/pause/resume events seems questionable. These happen in response to method calls on the MediaRecorder, so authors already know when these things are happening. Are there use-cases where these events are useful?
>> The idea is that these are asynchronous commands, and the app needs to know when they actually go into effect.  This is particularly the case for start(), which can be called before any media is available.  I’m less sure of the case for pause/resume, but the events were added for consistency.  In general, stop/pause/resume should take effect quite quickly, but there can be a long delay before a call to start() produces any actual recording.
>It's not clear to me why an app would care when recording started (especially since the event fires async and can therefore be delayed a while after the actual start). Can you give an example where these events are needed?
Imagine a UI that wants to provide feedback based on when a stream actually starts recording, vs. when the user presses the record button. I’ve seen at least a few UIs that use an “armed” state to represent this interim period.
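The “armed” state above can be sketched as a tiny state machine: the UI enters 'armed' when the user presses record, and only shows 'recording' once the recorder’s asynchronous start event actually fires. The event wiring at the end is an assumption about handler names (`onstart`/`onstop`), since the draft is still in flux.

```javascript
// Minimal state machine for the "armed" interim period between the
// user pressing record and the recorder's start event firing.
function createRecordingUiState() {
  let state = 'idle';
  return {
    get state() { return state; },
    pressRecord() {   // user clicked the record button; start() was called
      if (state === 'idle') state = 'armed';
    },
    onStart() {       // recorder's asynchronous 'start' event fired
      if (state === 'armed') state = 'recording';
    },
    onStop() {        // recorder's 'stop' event fired
      state = 'idle';
    },
  };
}

// In a browser one would wire this to the recorder (handler names assumed):
//   const ui = createRecordingUiState();
//   recordButton.onclick = () => { ui.pressRecord(); recorder.start(); };
//   recorder.onstart = () => ui.onStart();
//   recorder.onstop  = () => ui.onStop();
```

Without the start event, the UI has no way to distinguish 'armed' from 'recording', which is exactly the gap these events are meant to fill.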
Separate error and warning events seems unnecessary. Other Web APIs don't use warning events. Web developers generally receive warnings via some UA-specific developer tool interface. If an actual Web app can't use the information, the API shouldn't be there. And generally, if you're not sure an API is needed, it shouldn't be there.
>> Yes, it’s not clear whether we need warning events.  One possible use would be in out-of-memory cases.  Rather than waiting until it runs out of memory, the UA might signal to the app that it was about to run out of memory.  That way the app could be sure not to lose any current data.  But we haven’t worked this out in detail yet.

>I don't think we should try to solve OOM here. I don't think this should be in the spec unless someone has a detailed explanation of how it works and why it's needed.
At one of our prior F2F meetings, one of the use cases for warnings was to alert the app when the inbound stream configuration changes in a way the recorder can still support, but that the code might want to respond to, for example, new tracks being added or removed while recording.
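Purely as illustration of the use cases discussed in this thread: here is how an app might react to such a warning. Nothing here is specified; the event shape (`{ name }`) and the warning names are invented for the sketch, and the recorder actions are passed in rather than taken from any real API.

```javascript
// Hypothetical warning handler. The event shape and warning names are
// invented for illustration; warning events are not in the current draft.
function handleRecorderWarning(event, actions) {
  switch (event.name) {
    case 'TrackAdded':
    case 'TrackRemoved':
      // Stream was reconfigured mid-recording: flush the data captured
      // so far so nothing already recorded is lost, then keep going.
      actions.requestData();
      return 'flushed';
    case 'LowMemory':
      // Approaching out-of-memory: stop cleanly now rather than losing
      // the whole recording later.
      actions.stop();
      return 'stopped';
    default:
      return 'ignored';
  }
}
```

This is the kind of detail the objection above asks for: unless a concrete handler like this can be written against a specified event, the warning mechanism arguably shouldn’t be in the spec.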

Received on Monday, 1 April 2013 21:37:24 UTC