Re: convertKeyIdentifier

Hi, Olli-

Olli Pettay wrote (on 9/23/09 12:50 PM):
> On 9/23/09 4:53 AM, Maciej Stachowiak wrote:
>> I agree with Anne. I think we should remove the U+XXXX format entirely.
>> If you have a string like Q, you can convert it to a unicode numeric
>> value for range checking like this:
>> var codePoint = evt.keyIdentifier.charCodeAt(0);
> If I'm not mistaken,
> charCodeAt(0) isn't quite enough. It returns values between
> 0 and 65535. One needs to also check charCodeAt(1).
> A helper method to get the codepoint easily in all cases could be useful.

Yes, I'm of the same opinion, based on this MDC article:

There are some other aspects of it that I'm looking into, but in general 
I like Maciej's suggestion of using the format "\u...." instead of 
"U+....".  I asked PLH about the matter, and he doesn't recall a 
specific reason they originally used "U+....", so there's no historical 
necessity to keep it that way.
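To make Olli's surrogate-pair point concrete, a helper along these lines could recover the full code point from a key identifier string (this is just a sketch; the function name is made up, not anything from the spec):

```javascript
// Hypothetical helper: return the full Unicode code point for the first
// character of a string. charCodeAt() only yields 16-bit units, so for
// characters outside the BMP we must combine the surrogate pair manually.
function getCodePoint(s) {
  var hi = s.charCodeAt(0);
  // High surrogate range: U+D800..U+DBFF
  if (hi >= 0xD800 && hi <= 0xDBFF && s.length > 1) {
    var lo = s.charCodeAt(1);
    // Low surrogate range: U+DC00..U+DFFF
    if (lo >= 0xDC00 && lo <= 0xDFFF) {
      return (hi - 0xD800) * 0x400 + (lo - 0xDC00) + 0x10000;
    }
  }
  return hi;
}
```

For BMP characters like "S" this just returns charCodeAt(0); for a supplementary-plane character it folds the two surrogate units back into one code point.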

Also, because key identifiers comprise key names as well as Unicode code 
points, an author would have to do another check in some circumstances.  
Given the possible key identifiers "\u0053" ("S") and "Shift", an author 
might want to check the length of the string to determine which key it 
is before using charCodeAt:

  "\u0053".charCodeAt(0) == 83 ("S"), "\u0053".length == 1
  "Shift".charCodeAt(0) == 83 ("S"), "Shift".length == 5

If we do make a helper method, maybe it could distinguish between code 
points and key names... just a vague thought right now.
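Spelling out that vague thought, such a helper might look something like this (the name and return values are purely illustrative, not a proposal for spec text):

```javascript
// Hypothetical sketch: distinguish a single-character key identifier
// (a Unicode code point, possibly a surrogate pair) from a named key
// identifier like "Shift".
function classifyKeyIdentifier(id) {
  var first = id.charCodeAt(0);
  // A lone high surrogate plus its pair is still one character.
  var isSurrogatePair = id.length === 2 &&
      first >= 0xD800 && first <= 0xDBFF;
  if (id.length === 1 || isSurrogatePair) {
    return "character";  // e.g. "S"
  }
  return "named";        // e.g. "Shift"
}
```

An author could then branch on the classification before doing any range checks on the code point.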

-Doug Schepers
W3C Team Contact, SVG and WebApps WGs

Received on Wednesday, 23 September 2009 19:47:56 UTC