beforescroll as the "intention" contract for scrolling

I just read your explainer
<>, and
although I don't have much context on editing in particular, the high level
concepts really resonated with me.  We (blink input team) have been
struggling with related problems around input for a while, and we now have
a proposal we're pretty excited about.

Our 'beforescroll' event corresponds exactly to your definition of an
'intention' for scrolling.  However, our motivation is pretty different from
what you describe as: "... several issues including difficulty
understanding what a user intends, complexity in building Accessible sites,
and complex localization".  These are probably problems for scrolling too,
but the bigger problems in our mind are around composition and
extensibility.  Without a contract for the intention, components that
translate actions into behavior can't compose properly with each other.
In the context of scrolling, this means it's impossible for JavaScript to
customize scrolling today: script would have to re-implement everything
itself, and some browser behavior has no exposed API, so there's no way
to compose with it.  By defining a contract for mediating composition
(the event that expresses the intention), we enable customization of
actions (custom generators of the event) and customization of behavior
(custom consumers of the event) independently, in a way that composes
properly with other actions and behaviors for the same intention.  We see
this as an important step towards a more extensible web
<> (cleanly dividing the platform into a
kernel of core primitives and a framework layer of optional components
built on top of those primitives).  If the abstraction also enables
better accessibility, understandability and localization, that's a
welcome bonus.
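To make the composition model concrete, here's a rough sketch of how a
cancelable 'beforescroll' event could mediate between generators and
consumers.  The event name, the deltaY payload, and the dispatch helper
are hypothetical shapes for illustration, not shipped API; a plain
EventTarget stands in for a scrollable element:

```javascript
// Any EventTarget can serve as the mediation point for the intention.
const scroller = new EventTarget();

let customOffset = 0, nativeOffset = 0;
function applyCustomScroll(d) { customOffset += d; }   // stand-in behaviors
function applyNativeScroll(d) { nativeOffset += d; }

// Custom consumer: a component claims the scroll intention for this
// target and supplies its own behavior, instead of the default.
scroller.addEventListener('beforescroll', (e) => {
  e.preventDefault();               // claim the intention...
  applyCustomScroll(e.deltaY);      // ...and handle it ourselves
});

// Custom generator: any input source (wheel, touch gesture, script,
// accessibility tool) expresses the same intention via the same event.
// Default behavior runs only when no consumer claimed the intention.
function dispatchScrollIntention(deltaY) {
  const e = new Event('beforescroll', { cancelable: true, bubbles: true });
  e.deltaY = deltaY;                // intention payload (hypothetical shape)
  const useDefault = scroller.dispatchEvent(e); // false if preventDefault()
  if (useDefault) applyNativeScroll(deltaY);
  return useDefault;
}

dispatchScrollIntention(100);       // consumer above intercepts this
```

Because the generator and the consumer only agree on the event's shape,
either side can be replaced independently without breaking the other.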

Although it's not really my area of expertise, I think we have similar
composition/extensibility problems with text editing in blink.  E.g.
how is a rich custom editor like Google Docs (which manages text layout and
selection itself in JS) supposed to properly integrate with the text
editing facilities of a touch-centric browser?  What if you want to build
an editor which does all its rendering using canvas or WebGL instead of
DOM?  As you work on the appropriate "intention" APIs for text editing, I'd
urge you to consider how they can make the platform more easily extensible -
breaking down what today are monolithic "magic" browser features into
components that can be used or replaced individually according to the needs
of the app.  If this is a direction you're interested in and would like
involvement from the blink team, I can try to connect you with the right
people - this is directly relevant to blink's high-level mission of
making the web more competitive with native mobile platforms.


Received on Tuesday, 23 September 2014 21:01:09 UTC