- From: Tab Atkins Jr. <jackalmage@gmail.com>
- Date: Thu, 19 Feb 2009 11:26:19 -0600
- To: Philip TAYLOR <P.Taylor@rhul.ac.uk>
- Cc: Daniel Glazman <daniel.glazman@disruptive-innovations.com>, Adam Twardoch <list.adam@twardoch.com>, Philip TAYLOR <Philip-and-LeKhanh@royal-tunbridge-wells.org>, www-style@w3.org
On Thu, Feb 19, 2009 at 10:53 AM, Philip TAYLOR <P.Taylor@rhul.ac.uk> wrote:
> With respect, Daniel, "reality" requires recognising
> the fact that most of the world does not speak English
> (in any of its variants) as a first language, and that
> we who are in the privileged position of being able
> to discuss standards for future software specifications
> have a duty not only to recognise this fact but to
> build in this recognition into specifications currently
> under discussion.

You don't need to know English to learn a limited set of language tokens written more-or-less in English. It does help, of course.

On the other hand, localizing your tokens automatically cuts you off from the vast majority of code in the wild. If you're a French speaker who knows no English and you learn French-token CSS, you can't use *any* of the vast, vast quantities of CSS help on the web. You can't copy-paste code (unless the mapping preserves the original English tokens as well - and then hope there are no conflicts, especially if you have people from multiple languages working together!). You can't even *read* code written by the majority of the world (and they can't read yours).

As much as humanly/technically possible, we of course want to support the diversity of languages on our planet. But in some cases it is advantageous *to the speakers of non-English languages* to purposely ignore their language and use the dominant one (English, currently). Programming language tokens are one such area.

~TJ
Received on Thursday, 19 February 2009 17:26:56 UTC