Re: Properties of Custom Paint

Hi François,

A few of us have gone down this thought path as well. We agree that it'd be
nice to have something that could eventually just run on the GPU,
something like a shader, as Domenic suggested. You could imagine, in the
future, being able to write a shader in script (or a subset of JavaScript
like asm.js? :) as a custom paint callback.

This would require a lot of spec work, though, and wouldn't build on
existing primitives like the 2D canvas API.

We think that the API should be open to this type of solution, but not
require it at the moment; i.e., it should be future-proof.
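
To make that concrete, here is a rough sketch of the kind of registration
point we have in mind (every name below is hypothetical; nothing here is
specified anywhere yet):

    // Hypothetical API shape only; all names are placeholders:
    registerPaintCallback('fancy-background', function (ctx, geom) {
      // Today the callback could receive a 2d-canvas-like context...
      ctx.fillRect(0, 0, geom.width, geom.height);
    });
    // ...and in the future the same registration point could accept a
    // shader (or an asm.js-style subset) compiled to run on the GPU.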

Thanks,
Ian

On Wed, May 20, 2015 at 5:34 PM, François REMY <
francois.remy.dev@outlook.com> wrote:

> > We wanted to start kicking off more discussions on the mailing list.
> > As a starting point Shane & I sat down and wrote up what we think
> > are the desirable properties of custom paint.
> >
> > See https://gist.github.com/bfgeek/758b54a1cab65dff9d69
>
> Hi,
>
> I was wondering whether similar problems had already been faced in other
> areas of the web platform and what we could learn from them, and I
> thought of the following thing: Web Audio. Indeed, audio processing is
> largely done on sound coprocessors nowadays, and the Web Audio editors
> wanted to make sure the audio API would not prevent implementations from
> delegating that work to those coprocessors. This is similar to our
> problem: we have to make sure the GPU does most of the work, so we don't
> want to give the custom paint hook too much imperative expressiveness at
> the wrong location in the pipeline, but rather something more
> declarative, whose building blocks can be imperative but are handled at
> some other level.
>
> Let's see how the comparison works: some "Web Audio elements" are native
> elements which are "painted/vocalized" by the browser (like mp3 files).
> However, you can "custom-paint/vocalize" other sounds by using tone
> generators (which we could liken to the DrawRectangle operation in our
> 2D case) or by streaming a file you generate on the fly (which amounts
> to reusing the browser's native logic while providing the input to
> interpret yourself, much like we do on an HTML canvas).
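>
> To make the analogy concrete, here is a minimal sketch using the
> standard AudioContext API (the generated noise buffer stands in for
> content produced on the fly):
>
>     var actx = new AudioContext();
>     // Native tone generator: the analogue of a built-in DrawRectangle.
>     var osc = actx.createOscillator();
>     // Samples generated on the fly: the analogue of painting a canvas.
>     var buf = actx.createBuffer(1, actx.sampleRate, actx.sampleRate);
>     var data = buf.getChannelData(0);
>     for (var i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
>     var src = actx.createBufferSource();
>     src.buffer = buf;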
>
> The magic of Web Audio is that the whole thing can be described as a
> graph of operations which can be done in parallel (per-channel
> processing like gain or frequency balancing) and which then join where
> appropriate in "mixing" nodes (which we could interpret as "compositing"
> in our 2D world). In short, the graph represents what has to be
> orchestrated by the browser, and it's still up to the browser to
> orchestrate that work.
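>
> Continuing the sketch above, the graph-with-joins aspect looks like this
> (the destination node mixes whatever connects to it, which maps to our
> "compositing"):
>
>     // Parallel per-branch processing...
>     var g1 = actx.createGain(), g2 = actx.createGain();
>     osc.connect(g1);
>     src.connect(g2);
>     // ...joined at a mixing point. Note we only declare the graph;
>     // where and how the work actually runs is still up to the browser.
>     g1.connect(actx.destination);
>     g2.connect(actx.destination);
>     osc.start(); src.start();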
>
> I was wondering whether or not it would be feasible to have the paint
> phase generate such a "Web Paint" graph instead of going directly to the
> GPU (of course, only when an element is in custom-paint mode). The graph
> would carry reflective info about which section of an element
> (content/background/border/outline) is the source of each painting node,
> and some background worker could then modify these instructions to its
> liking (for instance by adding some GPU shaders at some points, by
> adding some new paints at interesting locations, etc.). When this is
> done, we would convert this modified "Web Paint" graph back into a set
> of GPU instructions.
>
> In short, I propose to serialize the painting step of the browser into a
> graph, let a script change the resulting graph, and then deserialize the
> modified graph and send it to the GPU.
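>
> To illustrate the shape of the idea (every name below is purely
> hypothetical; nothing like this exists today), such a script could look
> like:
>
>     // Hypothetical "Web Paint" graph API, for illustration only:
>     paintWorker.onpaint = function (e) {
>       var graph = e.paintGraph;            // the serialized paint graph
>       var border = graph.find('border');   // reflective node lookup
>       border.insertAfter(new ShaderNode(myShaderSource)); // splice in
>       e.commit(graph);                     // converted back to GPU work
>     };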
>
> An issue is that browsers probably don't all paint the same things the
> same way, but if we stay at a high-enough level we may have success in
> standardizing the basic building blocks needed to move forward.
>
> What do you think? Does the Web Audio analogy hold? Do you have a better
> one?
> Best regards,
> François
>

Received on Thursday, 21 May 2015 18:25:58 UTC