Re: [css-snappoints] Blink team position on snap points

Hello list,

If the following has already been discussed, please point me there.

The problem that I think needs attention shows up (for example) in 
the Google team's specs mentioned earlier on the list....

On 18.09.2014 at 20:35, Rick Byers wrote:
[--------------------]
>
> Note that one motivation for this approach (but certainly not our only 
> one) is Google's material design 
> (http://www.google.com/design/spec/material-design/introduction.html). 
> There are a number of scroll-linked effects that will be shipping in 
> mobile apps which are
[-----------------------]


... where, under "Layout / Principles / Dimensionality --> conceptual 
model", we read: "swiping to dismiss".

This is the problem I'd like to point out. It is not related to Rick 
Byers's original subject, but I find it useful to make my case on that 
example.

I find the problem *extremely* similar to the "ancient pre-X3.64 era", 
when terminal manufacturers implemented "cursor left" with every 
imaginable control/key sequence.

This may sound trivial, but the problem (and its resolution) relates to 
the fact that today's applications don't expect "scan codes" from the 
keyboard; they expect "character codes" instead.

Yet this is not quite the case for touch-pad or touch-display input. 
People (like JavaScript application designers) seem to expect "swipe" 
events (from the browser), not a "dismiss" event (as in the "swipe to 
dismiss" of that Google design document). This is much like expecting a 
scan code from a keyboard instead of a UTF-8 character.

Admittedly, the indicated Google document has a section "Patterns / 
Gestures", which elaborates on "styling" the visual responsiveness to 
HID input. In that section one finds a "sort of" resolution, which I 
will try to present here. There is a notion of "touch mechanics", which 
looks like a "technical implementation" (the scan code) of the 
"intended action" (the UTF-8 character) in question.

I hope this forum will recognize (and possibly standardize) the 
distinction between:
1. action events (like: dismiss, pan, scale-up/down, 
select/focus-object, select-range, etc.), which are delivered through 
the developer's API (like a JavaScript function/event/callback) to 
interested applications. And ...
2. HID actions (like: swipe, tap, fling, etc.), which are available 
only at the system level and mapped to the "action events" above by 
system configuration (much as one uses a keyboard-mapping application 
to assign "key positions" to the "character codes" they emit to 
applications).

I would think that "action events" should be standardized and listed in 
the JavaScript API, while HID actions should be removed from it. This 
may not be as straightforward as keyboard mapping (due to contextual 
modality), but it could probably be worked out.
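To make the idea concrete, here is a minimal JavaScript sketch of what 
such an action-event API might look like to an application developer. 
All the names here (ActionEventTarget, addActionListener, 
dispatchAction) are hypothetical, invented purely for illustration -- 
no such API exists today:

```javascript
// Hypothetical action-event dispatcher: applications subscribe to
// semantic actions ("dismiss", "zoom-in"), never to raw HID gestures.
class ActionEventTarget {
  constructor() {
    this.listeners = new Map(); // action name -> array of callbacks
  }
  addActionListener(action, callback) {
    if (!this.listeners.has(action)) this.listeners.set(action, []);
    this.listeners.get(action).push(callback);
  }
  // Only the system layer would call this, after it has mapped a raw
  // HID gesture (swipe, remote-control key, hand gesture) to an action.
  dispatchAction(action, detail) {
    for (const cb of this.listeners.get(action) || []) cb(detail);
  }
}

const card = new ActionEventTarget();
card.addActionListener('dismiss', (detail) => {
  // The application reacts to the intent, not to how it was produced.
  console.log('card dismissed (source: ' + detail.source + ')');
});

// The same application code handles all of these identically:
card.dispatchAction('dismiss', { source: 'swipe' });
card.dispatchAction('dismiss', { source: 'remote-control' });
```

The point of the sketch is only the shape of the contract: the 
application never learns which HID produced the action.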

Consequently, IMHO, there should be no distinction, for an application 
developer, between events (like "swipe to dismiss") initiated by mouse, 
by touch pad, by touch screen, etc. An application should receive a 
"dismiss" event no matter where it came from. I would think that, for 
example, a TV application should receive a "dismiss" event either from 
a remote control or from a camera-captured, recognized hand gesture.

I would think that what the Google document describes in "Patterns / 
Gestures" -- the "Zoom-in" action implemented by three different HID 
actions: "double-touch, double-touch-drag, pinch-open" -- should not 
happen "naturally". One shouldn't expect that, just as one does not 
expect the letter "A" to be emitted by three different keys on a 
keyboard (but, just as with a keyboard, it should be possible to 
configure that if desired).
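A minimal sketch, again with hypothetical names, of the system-level 
mapping table I have in mind: several HID gestures bound to one 
semantic action, much as a keyboard layout binds key positions to 
character codes, and configurable without touching the application:

```javascript
// Hypothetical system-level gesture map. Applications never see this
// table; they only receive the resulting action events.
const gestureMap = new Map([
  ['double-touch',      'zoom-in'],
  ['double-touch-drag', 'zoom-in'],
  ['pinch-open',        'zoom-in'],
  ['swipe-left',        'dismiss'],
]);

// The system resolves a raw HID gesture to the action the application
// will receive; an unmapped gesture produces no action event at all.
function resolveGesture(gesture) {
  return gestureMap.get(gesture) ?? null;
}

console.log(resolveGesture('pinch-open')); // → "zoom-in"
console.log(resolveGesture('double-touch-drag')); // → "zoom-in"
```

Reassigning a gesture would then be a matter of editing this one table 
(system configuration), not of changing every application.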

I hope this list is an appropriate forum for this sort of proposal (I 
vaguely remember seeing JavaScript API discussions here). My apologies 
if not.

Naturally, if this problem has already been tackled by the list, I'd 
appreciate pointers so I could get acquainted with the current 
consensus.

-R

Received on Tuesday, 23 September 2014 13:58:09 UTC