
Re: generators in filters (or maybe even "anywhere there are images")

From: Simon Fraser <smfr@me.com>
Date: Fri, 22 Apr 2011 11:12:53 -0700
Cc: Dean Jackson <dino@apple.com>, public-fx@w3.org
Message-id: <0811E07C-8DEA-4433-93BF-DC7412CFA04C@me.com>
To: David Singer <singer@apple.com>
On Apr 22, 2011, at 11:07 AM, David Singer wrote:

> On Apr 19, 2011, at 18:14, Dean Jackson wrote:
> 
>> This is something that came up while I was editing the filters specification.
>> 
>> There are some existing filters (e.g. feFlood, feTurbulence, feImage) and well-known effects (lenticular halos are a commonly overused example, also [1]) which generate images from scratch rather than manipulate inputs. These don't really fit into the model of the 'filter' property, which is a linear chain of effects, since any input would be completely wiped out by the new image. They do work in the model of the 'filter' element, since that can be declared as a graph that composites multiple inputs.
>> 
>> How important do we consider these effects as part of the property?
> 
[snip]
> I have long thought it strange, if not insane, that we generate textures at the authoring side and then try to compress them for transmission (when, in fact, good textures are often 'noisy' and hard to compress), instead of sending the parameters to (for example) a reaction-diffusion texture generator.

I think canvas/WebGL and -moz-image (or -webkit-canvas) are one approach to this issue, and should constrain us from making generators too crazy. Simple things should be easy, hard things should be possible (via canvas, in this case).
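To illustrate the distinction Dean raised (this sketch is not from the original message): in the SVG 'filter' element's graph model, a generator primitive such as feTurbulence produces an image from scratch and can then be composited with the source, whereas in the 'filter' property's linear chain it would simply replace whatever came before it. A minimal sketch, assuming standard SVG 1.1 filter primitives:

```xml
<!-- A filter graph, not a linear chain: feTurbulence generates noise
     from scratch, and feComposite blends it with the filtered element
     instead of wiping it out. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <filter id="noisy">
    <!-- Generator primitive: takes no input image -->
    <feTurbulence type="fractalNoise" baseFrequency="0.05"
                  numOctaves="3" result="noise"/>
    <!-- Graph node with two inputs: the generated noise
         and the original source graphic -->
    <feComposite in="noise" in2="SourceGraphic" operator="in"/>
  </filter>
  <rect width="200" height="100" fill="green" filter="url(#noisy)"/>
</svg>
```

The feComposite step is what the linear 'filter' property cannot express: it needs two inputs, one of which is generated rather than filtered.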

Simon
Received on Friday, 22 April 2011 18:13:47 GMT
