
RE: Some comments about Pointer Events

From: Jacob Rossi <Jacob.Rossi@microsoft.com>
Date: Tue, 20 Nov 2012 09:17:01 +0000
To: 'François REMY' <francois.remy.dev@outlook.com>, Pointer Events WG <public-pointer-events@w3.org>
Message-ID: <D0BC8E77E79D9846B61A2432D1BA4EAE13841C2B@TK5EX14MBXC288.redmond.corp.microsoft.com>
From: François REMY [mailto:francois.remy.dev@outlook.com] 
To: Jacob Rossi; Pointer Events WG

>> From: Jacob.Rossi@microsoft.com
>> To: francois.remy.dev@outlook.com; public-pointer-events@w3.org
>> 
>> I think you misunderstand "height." Height (along with
>> width) refers to the contact geometry of the touch. It's not the 
>> distance from the touchscreen, but the vertical length of the physical 
>> contact your finger makes with the screen.
>>
> Indeed, I misinterpreted "height" as a kind of depth. Never mind. If there's no specification for depth, we should just postpone this discussion until someone feels the need to implement it, as the final form will indeed depend on the use we intend to make of it.
>
>
>> Since it's normalized, I usually think of pressure as a modifier. For example, in the line drawing scenario, I would do something like: lineWidth = pressure * MAX_LINE_WIDTH. 
>
>I think of it that way, too. However, if you do so, you'll get no line at all using a mouse (pressure=0), and as far as I can tell, there's no way to know whether the input device supports pressure or not.

I forgot to address this bit.  I agree with your feedback that the current behavior for mouse (or any device that doesn't support pressure) isn't ideal. Perhaps a mouse should report 1 when any button is down and 0 when not?  
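A minimal sketch in JavaScript of the line-drawing scenario above, including the mouse fallback suggested here (report 1 while a button is down). The function name and the maximum width are illustrative, not part of the spec:

```javascript
const MAX_LINE_WIDTH = 20; // assumed maximum stroke width, in CSS pixels

// Scale line width by normalized pressure. Devices without pressure
// support report 0, so substitute 1 (full pressure) while any button
// is held, per the suggestion above.
function effectiveLineWidth(pressure, buttonDown) {
  const p = (pressure === 0 && buttonDown) ? 1 : pressure;
  return p * MAX_LINE_WIDTH;
}

console.log(effectiveLineWidth(0.5, true));  // pressure-enabled stylus
console.log(effectiveLineWidth(0, true));    // mouse, button down
console.log(effectiveLineWidth(0, false));   // hover, no contact
```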

In general, I think this is something worth discussing further in the context of all the device-specific properties (width, height, pressure, tiltX, tiltY). Should pointer events provide an indication that hardware doesn't support the property? Or should user agents instead supply a value that is "typical" of the device being used? For example, on touchscreens that do not report contact geometry (width/height) it might be more appropriate to provide a value of 40px (average index finger width). I need to check, but I'm not sure whether devices' support for these properties is readily available to user agents (it may be that they just provide "0" for these values).
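The "typical value" alternative could be sketched like this, assuming a user agent substitutes a default for touchscreens that report no contact geometry (the function name is hypothetical; the 40px figure is the one from the discussion):

```javascript
const DEFAULT_FINGER_WIDTH = 40; // px, average index finger width (assumed default)

// If the hardware reports 0 for width/height (no contact-geometry
// support), substitute a typical finger-sized contact instead.
function contactGeometry(reportedWidth, reportedHeight) {
  return {
    width: reportedWidth || DEFAULT_FINGER_WIDTH,
    height: reportedHeight || DEFAULT_FINGER_WIDTH,
  };
}

console.log(contactGeometry(0, 0));   // device without geometry support
console.log(contactGeometry(12, 16)); // device reporting real geometry
```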

>> So while, yes, it might require different amounts of pressure to produce the same effect across devices, at least every device is interoperable in the sense that it can produce the same *range* of values. In the line drawing scenario, I'm not sure that I want one device to not be able to draw as thick of a line as another simply because it cannot register that much pressure.

>So, basically, you propose a model where a user with a better stylus has to press harder to get the same effect as someone with a very simple one that only handles small pressure levels? To me, this seems a deal breaker, and it would make it impossible for a stylus vendor to publicly expose the higher range if they want to preserve the habits of users of a normal stylus.

>By the way, you're already limited in the set of actions you can do with your input device because of its other limitations. If your stylus has no erase button, you can't erase. If it has no secondary button, you can't use one either. On a mouse, you can't support pressure at all. I don't see why it's a problem to say that on device A you can achieve a [0-3] range while on device B you can achieve a [0-5] range.

Let me check more on this. I think that the USB HID spec may well specify physical units of pressure, rather than normalized ones. However, I do know that pressure values are already normalized by the OS (at least on Windows) before they reach the user agent.

>Also, some mice allow you to "adjust" sensitivity. Similarly, my model would allow pressure-enabled sensors to "shift" the normal value they report (or their internal scale) to return higher or lower values for the same physical pressure, as a user accommodation measure, and without any range clamping. If you keep using a [0-1] scale, you can only decrease precision, not increase it (or else you reduce the pressure range in which you can work without clamping).

For the use case of user accommodation, the most important ability is to alter the min/max values per user preference (as you indicate). I would have no problem if we wanted to change the spec to give UAs more flexibility in choosing the physical pressures that map to 0 (min) and 1 (max); per user setting or calibration, for example.
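A sketch of that calibration idea: a UA could map physical pressure onto [0, 1] using user-chosen min/max points rather than the device's full range. The function name and units are illustrative assumptions:

```javascript
// Map a physical pressure reading onto the spec's normalized [0, 1]
// scale using per-user calibration points, clamping out-of-range input.
function normalizePressure(physical, userMin, userMax) {
  const p = (physical - userMin) / (userMax - userMin);
  return Math.min(1, Math.max(0, p)); // clamp to [0, 1]
}

console.log(normalizePressure(5, 0, 10));  // mid-range press
console.log(normalizePressure(15, 0, 10)); // harder than the user's max
console.log(normalizePressure(-1, 0, 10)); // below the user's min
```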

>> I'm also not confident that devices report consistent pressure values (e.g. if you place a 10kg weight on a stylus then it produces the exact same pressure value on any device you put it on). So I don't think you could standardize around "standard pressure" (meaning a fixed physical amount of pressure).

>It's up to each device to specify its "normal" value, not to this specification. In the absence of any information, it's possible to guess the normal value from the available data (a reasonable default), but that's just an error-recovery situation.

I really only see 2 approaches:

1) Normalization (as specified currently)
2) Physical units

Even in #2, devices don't specify what's "normal." Normal would simply be whatever physical amount of pressure is typical of users. I'd be open to #2 if we could demonstrate that hardware broadly supports it (e.g. we could get interoperable implementations on various devices).

Can you give me a "for instance" for your proposed approach? Alternatively, this might be easier to discuss on the second call (I will not be on the first call next week).

-Jacob
Received on Tuesday, 20 November 2012 09:18:22 UTC
