W3C home > Mailing lists > Public > public-html@w3.org > March 2012

Re: video.getContext

From: Charles Pritchard <chuck@jumis.com>
Date: Fri, 09 Mar 2012 08:25:21 -0800
Message-ID: <4F5A2EF1.4080207@jumis.com>
To: robert@ocallahan.org
CC: Sean Hayes <Sean.Hayes@microsoft.com>, Kornel Lesiński <kornel@geekhood.net>, "public-html@w3.org" <public-html@w3.org>
On 3/9/12 7:22 AM, Robert O'Callahan wrote:
>     I don't think the image filters WD handles time varying content
>     (i.e. access to past frames), which might be required for some
>     video processing. So having a filter script context along the same
>     lines as the audio context would probably make some sense; CSS
>     application notwithstanding.
> I have a proposal (and prototype implementation) for an API for 
> processing MediaStreams:
> https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProcessing.html
> It's targeted at audio right now (as an alternative to Chris Rogers' 
> Web Audio API), but designed to extend to video as well. (It's 
> probably not worth adding video processing until we have clear use 
> cases and WebGL support in Workers.)
> In general I don't think APIs like canvas's getContext() are a good 
> pattern to follow. It's simpler just to add a named method or 
> attribute for each kind of object that can be returned.

I'd like to see your stream processing proposal used with Canvas 
ImageData at some point.
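To make the idea concrete, here is a minimal sketch of the kind of per-frame pixel processing such a bridge could hand to a worker. It operates directly on an RGBA buffer of the same layout as ImageData.data, so no DOM or stream API is assumed; the function name is illustrative, not part of any proposal.

```javascript
// Minimal sketch: a grayscale filter over an RGBA pixel buffer, the kind
// of per-frame processing a MediaStream-to-ImageData bridge could feed to
// a worker. A real ImageData.data is a Uint8ClampedArray laid out as
// [r, g, b, a, r, g, b, a, ...].
function grayscale(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    // Luma weights from ITU-R BT.601.
    const y = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = Math.round(y);
    // pixels[i + 3] (alpha) is left unchanged.
  }
  return pixels;
}

// One red pixel (255, 0, 0, 255) becomes roughly (76, 76, 76, 255).
const out = grayscale(new Uint8ClampedArray([255, 0, 0, 255]));
console.log(Array.from(out)); // [76, 76, 76, 255]
```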

I've suggested that CSS shaders may eventually be able to use JS workers in 
addition to their primary target, fragment shaders.

Your processing API sets up a foundation for it.

getContext has worked well for Canvas over its lifetime, though I wouldn't 
want to apply the pattern to many other elements. Canvas is an area where 
we've been able to replace the <object> tag in some sense, and the 
getContext method has helped with that.
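For anyone following along, the design difference is easy to show with plain objects standing in for elements (no DOM needed). canvas.getContext() is the real Canvas API shape; the per-capability names on the second object are purely illustrative of the alternative Robert describes, not shipped APIs.

```javascript
// Pattern 1: the Canvas model -- one factory method keyed by a string.
const canvasLike = {
  getContext(kind) {
    const contexts = { '2d': { kind: '2d' }, 'webgl': { kind: 'webgl' } };
    return contexts[kind] || null; // unknown kinds return null
  }
};

// Pattern 2: the shape Robert prefers -- a named method or attribute for
// each kind of object that can be returned (names here are made up).
const videoLike = {
  get audioTracks() { return [{ kind: 'audio' }]; },
  captureFrame() { return { kind: 'frame' }; }
};

console.log(canvasLike.getContext('2d').kind); // "2d"
console.log(canvasLike.getContext('bogus'));   // null
console.log(videoLike.captureFrame().kind);    // "frame"
```

The string-keyed factory is extensible without new IDL, but callers must feature-test for null; named members make each capability discoverable and typed.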

Received on Friday, 9 March 2012 16:25:45 UTC
