
Re: [DOM3Events] InputDevice API sketch

From: Rick Byers <rbyers@chromium.org>
Date: Wed, 8 Apr 2015 22:13:55 -0400
Message-ID: <CAFUtAY-ttxpHX6wA2BrooabheSWKi5XiatgRK2g4gcid0bH3rQ@mail.gmail.com>
To: François REMY <francois.remy.dev@outlook.com>
Cc: DOM WG <www-dom@w3.org>, Pointer Events WG <public-pointer-events@w3.org>, Gary Kacmarcik (Кошмарчик) <garykac@chromium.org>, Domenic Denicola <d@domenic.me>, Mustaq Ahmed <mustaq@chromium.org>
Thanks for your detailed response, François, and sorry it took me so long to
reply.

I like the idea of providing more abstract input device information, and
there are a number of nice properties in your design.  However, it's hard
for me to imagine a big, highly abstract API like this really being
successfully deployed to the web platform.  It's not enough for a browser
to just implement this API (as you say, mostly hard-coding it based on the
less-abstract APIs provided by the host OS), but to be really valuable each
browser would have to also do work per input device.  In practice I'm not
sure we'd really get browser vendors excited enough to implement more than
the most common devices (basic mice, touchscreens, styluses, keyboards,
etc.), in which case all the abstraction and complexity wouldn't buy us
much.

I'm personally most interested in incremental approaches where we can add
some small surface area but have the ability to efficiently iterate adding
small feature after feature as the need arises (without being afraid to
even approach the standards group due to the expected effort required).
I.e. after months of debate, let's see if we can get even my simple "does
this device fire touch events" issue addressed by a W3C spec before we
attempt something with orders of magnitude more abstraction and
bikeshedding opportunities ;-).  I have low confidence that we could design
an API in advance which will automatically work great for new devices
without API changes (eg. even your highly generalized API is probably no
better at supporting the new Mac force touch touchpad than pointer events
would be).  So to me, being able to iterate efficiently is more important
than predicting all possible use cases in advance with a highly abstract
API.

However, I think this just highlights the problem of the specialist
scenarios being blocked on the standards consensus process and browser
prioritization funnel (which - when it works at all - really works only for
the most common use cases).  As long as the browsers need to add code to
support each new type of input device (especially when new API surface is
involved), we're always going to be playing catch-up (eg. imagine if Leap
Motion couldn't successfully ship a device without convincing the Windows,
Darwin, and Linux kernel teams to implement some feature, then blocking on
adequate deployment of updated OS versions).  Personally I believe the only
practical path to the sort of fully generic device support utopia you're
advocating for here is via simpler standard primitives.  Eg. perhaps with
low-level web bluetooth and USB APIs you could implement the abstract API
you're looking for here in a JS library?  Once a particular device/scenario
gains popularity we can then justify investments to make that particular
scenario better (eg. perhaps with a higher level API that can be
implemented without bothering the user with a permissions prompt).
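To illustrate what I mean by a JS library over low-level primitives, here's a minimal sketch.  All the names and the 4-byte report layout are invented purely for illustration; the assumption is just that some low-level transport (web USB, web bluetooth) hands raw device reports to page script:

```javascript
// Hypothetical sketch of a userland "driver" library built on some
// low-level transport that delivers raw input reports to page script.
// The report layout ([x, y, pressure, contactFlags]) is made up for
// this example.
class GenericPointerAdapter {
  constructor() {
    this.listeners = [];
  }
  onSample(fn) {
    this.listeners.push(fn);
  }
  // Normalize one raw device report into a generic pointer sample.
  feedReport(bytes) {
    const sample = {
      x: bytes[0],
      y: bytes[1],
      pressure: bytes[2] / 255,          // normalize to 0..1
      contact: (bytes[3] & 0x01) !== 0,  // bit 0 = contact down
    };
    for (const fn of this.listeners) fn(sample);
  }
}
```

The point is that once the raw bytes are in page script, the abstract per-device model lives in a library that can iterate at web speed rather than browser-release speed.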

Still, I think there are other ways to make pragmatic incremental progress
towards some of your goals.  Eg. we've talked about raw touchpad access in
PEWG a few times, and it's something I'm moderately interested in seeing
happen (eg. for handwriting recognition scenarios).  I believe Jacob (PE
editor) and I agree that a small incremental extension to PointerEvents /
pointer lock could address that use case - but we'd really need a champion
for the use case to drive it in the WG (Microsoft doesn't generally have
adequate touchpad APIs / hardware, Google cares most about phones, and
Apple doesn't participate in PE).  If anyone is interested in helping push
this (especially someone that would actually consume the API in a shipping
product), public-pointer-events is the place to do so (I promise to help
facilitate).
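To make the touchpad idea concrete, here's a rough sketch of the consumer side for the handwriting case.  The recorder itself is plain JS; the idea that relative touchpad deltas could arrive via a pointer events / pointer lock extension is my assumption, not anything the WG has agreed to:

```javascript
// Rough sketch of a handwriting-recognition consumer for raw touchpad
// input.  Assumes (hypothetically) that a future PointerEvents /
// pointer lock extension delivers per-contact relative movements,
// e.g. via movementX/movementY-style deltas.  The recorder just
// accumulates those deltas into strokes a recognizer could consume.
class StrokeRecorder {
  constructor() {
    this.strokes = [];
    this.current = null;
  }
  // Contact down: start a new stroke at a nominal origin.
  down() {
    this.current = [{ x: 0, y: 0 }];
  }
  // Relative movement while the contact is down.
  move(dx, dy) {
    if (!this.current) return;
    const last = this.current[this.current.length - 1];
    this.current.push({ x: last.x + dx, y: last.y + dy });
  }
  // Contact up: commit the stroke.
  up() {
    if (this.current) this.strokes.push(this.current);
    this.current = null;
  }
}
```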

So I think it's most important that we establish some minimally abstract
framework that is simple while still flexible enough to cover the variety
of most likely use cases.  I think PointerEvents is part of this, but we
always knew we were missing some place to hang device query APIs off.
Hopefully a simple InputDevice API can give us the other half.  This still
doesn't do much to help non-pointer devices but perhaps that's not critical
enough to worry too much about - I don't know.
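For the pointer-device half, usage could look something like this.  The property names follow my 'sourceDevice' / 'firesTouchEvents' sketch; nothing here is implemented anywhere yet:

```javascript
// Sketch against the proposed API: a page that already handles touch
// events can use the generating device to skip the derived
// "compatibility" mouse events.  'sourceDevice' and 'firesTouchEvents'
// are from the proposal sketch, not a shipping API.
function shouldHandleMouseEvent(event) {
  // If the device that generated this mouse event also fires touch
  // events, assume the touch handlers already covered the interaction.
  if (event.sourceDevice && event.sourceDevice.firesTouchEvents) {
    return false;
  }
  return true;
}
```

This is exactly the "does this device fire touch events" bit: one boolean hung off the event solves the duplicate-handling problem without any of the larger abstraction.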

Sorry not to be more positive - just trying to be realistic about how the
web evolves in practice...

Rick





On Sun, Mar 29, 2015 at 6:56 AM, François REMY <
francois.remy.dev@outlook.com> wrote:

>  Hi Rick (and all the others, too ^_^),
> *Sorry for taking a week to reply, I really needed that time to put my
> thoughts in order and write them down :D *
>
> *I was very glad to see some work being done in order to define the Input
> Device drivers of the web*, and I wanted to share my vision with you on
> the matter. Like you, I believe that Pointer Events require some more
> metadata (coarseness, drag inertia, …) on the devices generating
> the events to be able to handle new types of inputs properly and I'm a huge
> fan of your proposal. My free time is overbooked but if some working group
> discussed this further I would be happy to participate in refining your
> proposal.
>
> *Yet, I believe there's some important challenge it doesn't tackle*.
> Something I don't like in general about Input Devices at this time is how
> "customized" any good device handling has to be, and how native apps often
> have to rely on device-specific userland drivers to harness the true device
> capabilities (those are not available to the web). I argue the reason for
> this issue is the lack of a sufficiently generic class of devices which
> could encompass multiple sensors and ways to interpret them, providing each
> app with the pieces of info it wants, at the granularity it needs; to sum
> up, I think we lack a metamodel unifying even more input devices than ever
> before.
>
> *Let's be honest*, I think the proposal I'm going to make is probably far
> away from how most input devices are handled at the OS level at the time I
> write it. As a result, I'm aware an initial implementation of the
> proposal will have to mock the features using the native input events of
> the existing platforms (a clear con). I would argue (though) that the same
> is true of currently implemented APIs like the Gamepad API and the
> PointerLock specification, from which this proposal draws ideas.
>
> *However, I think the API I propose is promising *and has some
> interesting features; though I'll of course let you be your own judge on
> that matter ;-) I've put together my ideas in a draft document [1] and an
> API surface [2].
>
> [1]
> https://docs.google.com/document/d/1Ubgn0uZ73DGkBnEPKJ5CfSk-YPu5MDjq_We1gy1kFEg/edit?usp=sharing
> [2] https://gist.github.com/FremyCompany/a70ee6a02d54c3d9521d
>
> *I would be interested to hear about what you guys think *about this
> proposal, because I can't make this happen without you! Please feel free to
> comment on the Google Doc, on the Gist or by email. Please make me happy:
> my main intent is to initiate a discussion :-)
>
> Best regards,
> François
>
> _________________
> *PS:*
> Please note that for the purpose of this exercise, I included devices
> ranging from Webcams to Touchscreens (via Depth-cams and Smart pens with
> gyros; but *not* next gen devices like the Leap Motion or the Kinect
> Skeleton Tracking, as I don't have enough experience with such devices).
> However, I have worked on cutting-edge and amazing input device scenarios
> like using the Mac's touchpad as a multitouch input [1], a Microsoft Touch
> Mouse [2], or a calibrated webcam/depthcam [3].
>
> [1]
> http://notebooks.com/2011/08/10/touchgrind-brings-multitouch-gaming-to-the-mac/
> [2]
> http://www.hanselman.com/blog/AbusingTheMicrosoftResearchsTouchMouseSensorAPISDKWithAConsolebasedHeatmap.aspx
> [3] https://youtu.be/YXJ9dmxsc6w
>
>
> *From:* Rick Byers <rbyers@chromium.org>
> *Sent:* Monday, 9 March 2015 16:07
> *To:* DOM WG <www-dom@w3.org>, Gary Kacmarcik (Кошмарчик)
> <garykac@chromium.org>, Domenic Denicola <d@domenic.me>, Mustaq Ahmed
> <mustaq@chromium.org>
>
> In our latest discussion of how best to identify mouse events derived from
> touch events, I proposed a 'sourceDevice' property
> <https://lists.w3.org/Archives/Public/www-dom/2015JanMar/0052.html>.  Domenic
> asked <https://lists.w3.org/Archives/Public/www-dom/2015JanMar/0052.html>
> for a rough sketch of what such an InputDevice API might grow to become.
> Here
> <https://docs.google.com/a/chromium.org/document/d/1WLadG2dn4vlCewOmUtUEoRsThiptC7Ox28CRmYUn8Uw/edit#>
> is my first attempt at such a sketch, including some detailed references to
> similar APIs in other platforms.  Any thoughts?
>
> Note that at the moment I'm primarily interested in standardizing and
> implementing the 'firesTouchEvents' bit.  However, if it makes more sense
> for coherency, I'd also support adding (and implementing in chromium) a few
> of the other non-controversial pieces.
>
> Rick
>
>
Received on Thursday, 9 April 2015 02:14:44 UTC

This archive was generated by hypermail 2.3.1 : Saturday, 16 May 2015 00:31:59 UTC