RE: Properties of Custom Paint

I think you have just described WebGL.

> -----Original Message-----
> From: François REMY [mailto:francois.remy.dev@outlook.com]
> Sent: Wednesday, May 20, 2015 17:34
> To: 'Ian Kilpatrick'; public-houdini@w3.org
> Subject: RE: Properties of Custom Paint
> 
> > We wanted to start kicking off more discussions on the mailing list.
> > As a starting point Shane & I sat down and wrote up what we think are
> > the desirable properties of custom paint.
> >
> > See https://gist.github.com/bfgeek/758b54a1cab65dff9d69
> 
> Hi,
> 
> I was wondering whether similar problems had already been faced in other
> areas of the web platform and what we could learn from them, and Web
> Audio came to mind. Audio processing is largely offloaded to dedicated
> audio hardware nowadays, and the Web Audio editors had to make sure the
> API would not prevent implementations from delegating that work. We face
> a similar problem: we have to make sure the GPU does most of the work,
> so we don't want to give custom paint too much imperative expressiveness
> at the wrong location in the pipeline; what we want is a more
> declarative form whose building blocks may be imperative but are
> executed at some other level of the pipeline.
> 
> Let's see how the comparison works. Some "Web Audio elements" are
> native elements which the browser "paints" (vocalizes) by itself, like
> MP3 files. However, you can "custom-paint" other sounds by using tone
> generators (which we could liken to a DrawRectangle operation in our 2D
> case) or by streaming samples you generate on the fly (which amounts to
> reusing the browser's native logic while providing the input to
> interpret yourself, much as we do with an HTML canvas).
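> 
> To make that half of the analogy concrete, here is a minimal sketch
> using the standard Web Audio API (the node types are real; the 440 Hz
> tone and one-second buffer are just arbitrary examples):
> 
>   const ctx = new AudioContext();
> 
>   // "Custom sound" route 1: a tone generator, roughly the
>   // DrawRectangle of audio.
>   const osc = ctx.createOscillator();
>   osc.frequency.value = 440;
> 
>   // "Custom sound" route 2: samples we compute ourselves and hand
>   // back to the engine, much like drawing into a canvas.
>   const buf = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
>   const data = buf.getChannelData(0);
>   for (let i = 0; i < data.length; i++) {
>     data[i] = Math.sin(2 * Math.PI * 440 * i / ctx.sampleRate);
>   }
>   const src = ctx.createBufferSource();
>   src.buffer = buf;
>   src.connect(ctx.destination);
>   src.start();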
> 
> The magic of Web Audio is that the whole thing can be described as a
> graph of operations which can run in parallel (per-channel processing
> such as gain or frequency balancing), joined where appropriate by
> "mixing" nodes (which we could interpret as "compositing" in our 2D
> world). In short, the graph describes the work to be done, and it is
> still up to the browser to orchestrate it.
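> 
> As another sketch with the real Web Audio API (the two sources and the
> gain values are arbitrary), the graph shape looks like this:
> 
>   const ctx = new AudioContext();
>   const left = ctx.createOscillator();
>   const right = ctx.createOscillator();
>   left.frequency.value = 440;
>   right.frequency.value = 660;
> 
>   // Per-channel processing that may run in parallel.
>   const gainL = ctx.createGain();
>   const gainR = ctx.createGain();
>   gainL.gain.value = 0.8;
>   gainR.gain.value = 0.3;
> 
>   // The "mixing" node -- the compositing step of our 2D analogy.
>   const mix = ctx.createChannelMerger(2);
>   left.connect(gainL);
>   gainL.connect(mix, 0, 0);
>   right.connect(gainR);
>   gainR.connect(mix, 0, 1);
>   mix.connect(ctx.destination);
>   left.start();
>   right.start();
> 
> The script only declares the graph; when and where each node runs is
> entirely up to the browser.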
> 
> I was wondering whether it would be feasible to have the paint phase
> generate such a "Web Paint" graph instead of going directly to the GPU
> (of course, only when an element is in Custom-Paint mode), with
> reflective info about which part of the element (content, background,
> border, outline, ...) is the source of each painting node, and let some
> background worker modify these instructions to its liking (for instance
> by adding GPU shaders at some points, by adding new paints at
> interesting locations, etc.). When this is done, we would convert the
> modified "Web Paint" graph back into a set of GPU instructions.
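> 
> Purely as an illustration of the data structure I have in mind (none of
> this API exists; every name below is hypothetical), such a graph and
> the worker-side rewriting could look like:
> 
>   // Hypothetical node shape for a serialized "Web Paint" graph.
>   interface PaintNode {
>     kind: 'background' | 'border' | 'content' | 'outline'
>         | 'shader' | 'composite';
>     inputs: PaintNode[];
>     params?: Record<string, unknown>;
>   }
> 
>   // A worker could rewrite the graph the browser handed to it, e.g.
>   // wrapping every background node in a (hypothetical) shader node
>   // before the graph is turned back into GPU instructions.
>   function wrapBackgrounds(node: PaintNode): PaintNode {
>     const out = { ...node, inputs: node.inputs.map(wrapBackgrounds) };
>     return node.kind === 'background'
>       ? { kind: 'shader', inputs: [out], params: { src: 'blur.glsl' } }
>       : out;
>   }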
> 
> In short, I propose to serialize the painting step of the browser into
> a graph, let a script change the resulting graph, and then deserialize
> the modified graph and send it to the GPU.
> 
> One issue is that browsers probably don't all paint the same things the
> same way, but if we stay at a high enough level we may succeed in
> standardizing the basic building blocks and move forward.
> 
> What do you think? Does the Web Audio analogy hold? Do you have a better
> one?
> Best regards,
> François

Received on Wednesday, 20 May 2015 21:50:47 UTC