
Re: Pointer lock spec

From: Florian Bösch <pyalot@gmail.com>
Date: Wed, 1 Apr 2015 09:03:07 +0200
Message-ID: <CAOK8ODh=mAPfQHwjxG+2vrPe7Mw1Lv+wuwotsPvdZjdWBRXJVQ@mail.gmail.com>
To: Vincent Scheib <scheib@google.com>
Cc: Philip Jägenstedt <philipj@opera.com>, Webapps WG <public-webapps@w3.org>
On Wed, Apr 1, 2015 at 1:49 AM, Vincent Scheib <scheib@google.com> wrote:
>
> You raised this point in 2011, resulting in my adding this spec section
> you reference. The relevant bit being:
> """
> ... a concern of specifying what units mouse movement data are provided
> in. This specification defines .movementX/Y precisely as the same values
> that could be recorded when the mouse is not under lock by changes in
> .screenX/Y. Implementations across multiple user agents and operating
> systems will easily be able to meet that requirement and provide
> application developers and users with a consistent experience. Further,
> users are expected to have already configured the full system of hardware
> input and operating system options resulting in a comfortable control the
> system mouse cursor. By specifying .movementX/Y in the same units mouse
> lock API applications will be instantly usable to all users because they
> have already settled their preferences.
> """
>
As of yet, nobody has provided higher-resolution values, though.


> As an application developer I agree the unprocessed data would be nice to
> have, but I don't see it as essential. The benefits of system calibrated
> movement are high. Not requiring users to configure every application is
> good. And, as the Chrome implementation maintainer who has been in
> conversation with several application developers (Unity, Unreal,
> PlayCanvas, GooTechnologies, Verold, come to mind easily) this has not been
> raised yet as an issue.
>
I distinctly remember playing games (and reading articles about games) that
clamped mouse coordinates to pixel ranges. This became quite an issue when I
bought my first high-resolution mouse: I now had a very high-precision
pointing instrument, but viewpoint changes were pixel-clamped. To get better
resolution, I had to go into the OS settings and ratchet mouse sensitivity up
to the maximum, then go into the game settings and counteract that
sensitivity so the game became operable again, thereby extracting more ticks
out of a flawed system. Of course, once I closed the game, I had to undo the
OS mouse sensitivity change again to make the desktop usable.
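To make the workaround concrete, here is a hypothetical sketch (the function, model, and numbers are mine, not from any real driver stack) of why pixel clamping loses ticks, and why cranking OS sensitivity up and dividing it back out in the game recovers them:

```typescript
// Hypothetical model: the OS scales raw mouse ticks into whole cursor
// pixels (the floor is the pixel clamp), and the game then rescales
// those pixels by its own sensitivity setting.
function effectiveMotion(rawTicks: number, osSensitivity: number,
                         gameSensitivity: number): number {
  const cursorPixels = Math.floor(rawTicks * osSensitivity); // pixel clamp
  return cursorPixels * gameSensitivity;
}

// Low OS sensitivity: 3 ticks round down to 0 pixels -- motion is lost.
effectiveMotion(3, 0.25, 1);   // 0
// Max OS sensitivity, counteracted in-game: the 3 ticks survive.
effectiveMotion(3, 4, 1 / 16); // 0.75
```

The point is that the floor happens before the game ever sees the data, so no in-game setting alone can recover the lost fraction.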

It might be less of an issue than back when 1024x768 was the state of the
art. But 1920x1080 is less than twice the horizontal/vertical resolution of
back then, so I'm pretty sure this is still a significant issue; pointers
haven't gotten any less precise since then (and OSes haven't gotten any
smarter about pointers either).


> I'm not certain how to address this in the specification. I agree that
> poor rendering latency will impact the use of an application drawn cursor
> or other use, and that some applications and devices may have poor
> experiences. That said, what should we change in the specification, or what
> notes would you suggest are added to FAQ on this topic?
>
This is essentially a whole-system integration issue. In order to fix it,
the whole stack (drivers, I/O systems, kernels, shells, browsers, hardware)
needs to get its act together:

   - I/O RTT latencies (input -> screen) of more than 10ms are not an
   appropriate state of affairs for the year 2015. The benchmark number would
   be OS cursors, which are around 30ms. Even native games struggle to get
   below 60ms, and for browsers it's usually worse.
   - In *one* millisecond, a modern computer (PCIe, Sandy Bridge, SSD)
   runs 6000 floating point operations, 16'000 transfers on the bus, 160'000
   cycles, ~40'000 RAM loads/stores
   - L1 latency ~0.000025ms, ram latency ~0.00008ms, SSD random access
   ~0.1ms

So to put that into perspective: the I/O latencies we have today (let's go
with ~100ms for browsers) are about 1.25 million times bigger than RAM
latency and 1000x bigger than permanent-storage latency. It's about 8x longer
than it takes you to ping Google across 6 hops. I/O latencies in today's
systems are insanely high, and the numbers haven't gotten much better in the
last 20 years (in fact you could argue they've gotten a lot worse, but that's
a topic for another day).
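The ratios are easy to check (figures taken from the bullet list above; the ~100ms browser figure is the rough assumption, not a measurement):

```typescript
// Figures from the discussion above; the browser number is an assumption.
const browserIoLatencyMs = 100;  // assumed browser input -> screen RTT
const ramLatencyMs = 0.00008;    // ~80ns RAM latency
const ssdLatencyMs = 0.1;        // SSD random access

console.log(browserIoLatencyMs / ramLatencyMs); // ~1.25 million
console.log(browserIoLatencyMs / ssdLatencyMs); // ~1000
```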

So I think any specification dealing with I/O should contain very strong
language addressing latency. It can specify whole-system latency requirements
as "level of support" queries: if the whole system is able to achieve < 10ms
latency, the API should be able to indicate that fact (let's say support
level gold); if it reaches < 60ms, that's, say, silver; and > 60ms, that's
support level turd. What's simply not sustainable is letting the frankly
insane situation in regards to I/O latencies go unmentioned, unfixed, and
opaque. We tried that for the last, oh, 20 years. It. Doesn't. Work.
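A minimal sketch of what such a tiered query could look like (the function name, the tier names, and the idea of a UA-measured round trip are all hypothetical; nothing like this exists in the Pointer Lock spec):

```typescript
// Hypothetical tiering of whole-system input -> screen latency, using
// the thresholds proposed above. A UA would measure the round trip and
// expose the resulting tier to applications.
type SupportLevel = "gold" | "silver" | "turd";

function supportLevel(measuredLatencyMs: number): SupportLevel {
  if (measuredLatencyMs < 10) return "gold";   // < 10ms: appropriate for 2015
  if (measuredLatencyMs < 60) return "silver"; // < 60ms: good native games
  return "turd";                               // everything else
}

supportLevel(30);  // "silver" -- roughly where OS cursors sit today
supportLevel(100); // "turd" -- typical browser latency
```

Even a coarse signal like this would at least make the latency situation transparent to applications instead of leaving them to guess.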
Received on Wednesday, 1 April 2015 07:03:35 UTC
