Re: [css-snappoints] Blink team position on snap points

From: Rick Byers <rbyers@chromium.org>
Date: Tue, 23 Sep 2014 10:16:30 -0400
Message-ID: <CAFUtAY8f82jySKXE7jSdFGFHodAJGqR=Sh6D9iP2XTQ-J2Bbgw@mail.gmail.com>
To: Rafal Pietrak <rafal@ztk-rp.eu>
Cc: "www-style@w3.org" <www-style@w3.org>
There's definitely tension between low-level input events and high-level
semantic actions.  I think progress on the latter is mainly being done in
the IndieUI WG <http://www.w3.org/WAI/IndieUI/> (with public-indie-ui being
the best place for discussion).  I'd also consider some existing DOM3 events
<http://www.w3.org/TR/DOM-Level-3-Events/> like "click" and "contextmenu"
to fall into that camp (with www-dom probably being the right place for
that discussion).
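
To illustrate why "click" counts as a high-level semantic action: the same
handler fires regardless of which input device produced the activation. A
minimal sketch, simulated here with the generic EventTarget API rather than
a live DOM element (the `item` object is hypothetical):

```javascript
// "click" is a semantic, device-independent event: the platform synthesizes
// it from whatever low-level input occurred (mouse button, touch tap,
// keyboard activation). Simulated with a plain EventTarget.
const item = new EventTarget();

let activations = 0;
item.addEventListener('click', () => {
  activations += 1; // same handler, whichever device "clicked"
});

// The platform would dispatch this for a mouse click, a touch tap,
// or Enter pressed on a focused element:
item.dispatchEvent(new Event('click'));
item.dispatchEvent(new Event('click'));

console.log(activations); // 2
```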

But no matter how well we do standardizing such high-level actions, I don't
think we'll ever want to stop supporting low-level events as well in order
to enable the full richness of possible user interactions.  Expanding on
your analogy - just because we mostly program keyboard input in terms of
character codes, there are scenarios (like gaming) where it's appropriate
to drop down to the lower level of abstraction of scan codes.
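
To make the analogy concrete: DOM keyboard events already expose both
layers, via `KeyboardEvent.code` (the physical key, scan-code-like) and
`KeyboardEvent.key` (the layout-translated character). A minimal sketch,
simulated with a plain object rather than a live keyboard event:

```javascript
// event.code identifies the physical key ("KeyW" on every layout), while
// event.key is the translated character ("w" on QWERTY, "z" on AZERTY).
// A game binds to the physical layer; a text field consumes the character.
function handleKeydown(event, mode) {
  if (mode === 'game') {
    // Scan-code level: same physical key regardless of keyboard layout.
    return event.code; // e.g. "KeyW" might mean "move forward"
  }
  // Character level: what the user meant to type.
  return event.key;
}

// On an AZERTY keyboard, the key in the QWERTY "W" position produces "z":
const press = { code: 'KeyW', key: 'z' };

console.log(handleKeydown(press, 'game')); // "KeyW"
console.log(handleKeydown(press, 'text')); // "z"
```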


On Tue, Sep 23, 2014 at 9:57 AM, Rafal Pietrak <rafal@ztk-rp.eu> wrote:

> Hello the list,
> If the following has already been discussed, please point me there.
> The problem that I think needs attention shows up (for example) in the
> Google team's specs indicated earlier on the list....
> On 18.09.2014 at 20:35, Rick Byers wrote:
> [--------------------]
>> Note that one motivation for this approach (but certainly not our only
>> one) is Google's material design (http://www.google.com/design/
>> spec/material-design/introduction.html). There are a number of
>> scroll-linked effects that will be shipping in mobile apps which are
> [-----------------------]
> ... where in "Layout / Principles / Dimensionality --> conceptual model"
> we read: "swiping to dismiss".
> This is the problem I'd like to point out. It is not related to Rick
> Byers' original subject, but I find it useful to illustrate my case with
> that example.
> I find the problem *extremely* similar to the "ancient pre-X3.64 era",
> when terminal manufacturers implemented "cursor left" with every
> imaginable control/key sequence.
> This may sound trivial, but the problem/resolution relates to the fact
> that today's applications don't expect "scan codes" from the keyboard;
> they expect "character codes" instead.
> Yet this is not exactly the case for touch-pad or touch-display input.
> People (like JavaScript application designers) seem to expect "swipe"
> events (from the browser), not a "dismiss" event (as in that Google
> document on design, "swipe to dismiss"). This is much like expecting to
> get a scan code from a keyboard instead of a UTF-8 character.
> Admittedly, the indicated Google document has a section "Patterns /
> Gestures", which elaborates on "styling" visual responsiveness to HID
> input. And in that section, one finds a "sort of" resolution, which I
> will try to present here. There is a notion of "touch mechanics", which
> looks like a "technical implementation" (scan code) of the "intended
> action" (UTF-8 char) in question.
> I hope this forum will recognize (and possibly standardize) the
> distinction between:
> 1. action events (like: dismiss, pan, scale-up/down, select/focus-object,
> select-range, etc.), which are to be delivered through a developer API
> (like a JavaScript function/event/callback) to interested applications.
> And ....
> 2. HID actions (like: swipe, tap, fling, etc.), to be available only at
> the system level, and mapped to "action events" (above) by system
> configuration (much like one uses a keyboard-mapping application to
> assign "keys/positions" to the "character codes" they emit to
> applications).
> I would think that "action events" should be standardized and listed
> within the JavaScript API, while HID actions should be removed from
> there. This may not be as straightforward as keyboard mapping (due to
> contextual modality), but it could possibly be worked out.
> In consequence, IMHO, there should be no distinction for an application
> developer between events (like "swipe to dismiss") initiated by mouse,
> by touch pad, or by touch screen, etc. An application should receive a
> "dismiss" event no matter where it came from. I would think that, for
> example, a TV application should receive a "dismiss" event either from a
> remote control or from a "camera-captured-recognised-hand-gesture".
> I would think that what the Google document in "Patterns / Gestures"
> describes as a "Zoom-in" action implemented by three different HID
> actions ("double-touch, double-touch-drag, pinch-open") should not
> happen "naturally". One shouldn't expect that, just as one does not
> expect the letter "A" to be emitted by three different keys on a
> keyboard (but, just as with a keyboard, it should be possible to
> configure that if desired).
> I hope this list is an appropriate forum for this sort of proposal (I
> vaguely remember seeing JavaScript API discussions here). My apologies
> if not.
> Naturally, if this problem has already been tackled by the list, I'd
> appreciate pointers so I could get acquainted with the current consensus.
> -R
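
For what it's worth, the mapping layer proposed above could be prototyped
today in script. A minimal sketch; the gesture names, the GESTURE_MAP table,
and the "dismiss" event below are all hypothetical, not any existing API:

```javascript
// Low-level HID input ("swipe") is translated, by configuration, into a
// semantic "action event" ("dismiss") that applications listen for -- the
// application never sees the gesture, only the action.
const GESTURE_MAP = { 'swipe-right': 'dismiss', 'pinch-open': 'zoom-in' };

function emitAction(target, gesture) {
  // System-level mapping step: gesture name -> configured action event.
  const action = GESTURE_MAP[gesture];
  if (action) target.dispatchEvent(new Event(action));
}

const card = new EventTarget();
let dismissed = false;
card.addEventListener('dismiss', () => { dismissed = true; });

// The system layer recognizes a right swipe (or a remote-control button,
// or a camera-recognized hand gesture) and maps it to the same action:
emitAction(card, 'swipe-right');

console.log(dismissed); // true
```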
Received on Tuesday, 23 September 2014 14:17:26 UTC
