Re: generators in filters (or maybe even "anywhere there are images")

On Apr 22, 2011, at 11:07 AM, David Singer wrote:

> On Apr 19, 2011, at 18:14 , Dean Jackson wrote:
> 
>> This is something that came up while I was editing the filters specification.
>> 
>> There are some existing filter primitives (e.g. feFlood, feTurbulence, feImage) and well-known effects (lenticular halos are a commonly overused example; also [1]) which generate images from scratch rather than manipulate inputs. These don't really fit the model of the 'filter' property, which is a linear chain of effects, since the generated image would completely replace whatever input it was given. They do work in the model of the 'filter' element, since that can declare a graph that composites multiple inputs (see the sketch after the quoted text below).
>> 
>> How important is it that the property support these effects?
> 
[snip]
> I have long thought it strange, if not insane, that we generate textures on the authoring side and then try to compress them for transmission (when, in fact, good textures are often 'noisy' and hard to compress), instead of sending the parameters to (for example) a reaction-diffusion texture generator.
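
For concreteness, here is a minimal sketch of the graph model Dean describes (the attribute values are arbitrary, picked purely for illustration): feTurbulence synthesizes an image from scratch, and the graph hands it a second input to composite against. It also happens to illustrate David's point, since a few attributes stand in for what would otherwise be kilobytes of hard-to-compress noise pixels.

  <filter id="noiseOverlay">
    <!-- Generator primitive: produces an image from scratch;
         any input it is given is irrelevant to it. -->
    <feTurbulence type="fractalNoise" baseFrequency="0.1"
                  numOctaves="2" result="noise"/>
    <!-- The graph supplies a second input, so the generated image
         is composited with the filtered element instead of replacing it. -->
    <feComposite in="noise" in2="SourceGraphic" operator="over"/>
  </filter>

The 'filter' property's linear chain has no equivalent of in2="SourceGraphic", which is exactly why a generator there would wipe out its input.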

I think canvas/WebGL plus -moz-element() (or -webkit-canvas()) are one approach to this issue, and should keep us from making generators too crazy. Simple things should be easy; hard things should be possible (via canvas, in this case).
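
As a hedged illustration of that escape hatch (the canvas name, size, and drawing code here are invented for the example, not taken from the thread): WebKit's document.getCSSCanvasContext() binds a script-drawn canvas to a name that CSS can then reference anywhere an image is accepted, e.g. background-image: -webkit-canvas(noiseTex).

  // Draw a procedural texture into a CSS-addressable canvas.
  // "noiseTex" is an arbitrary name chosen for this sketch.
  var ctx = document.getCSSCanvasContext("2d", "noiseTex", 256, 256);
  var img = ctx.createImageData(256, 256);
  for (var i = 0; i < img.data.length; i += 4) {
    var v = Math.floor(Math.random() * 256); // cheap stand-in for real noise
    img.data[i] = img.data[i + 1] = img.data[i + 2] = v;
    img.data[i + 3] = 255;                   // fully opaque
  }
  ctx.putImageData(img, 0, 0);

The same pixels could of course come from WebGL; the point is that script, not the filter chain, does the generating.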

Simon

Received on Friday, 22 April 2011 18:13:47 UTC