Might abstraction resolve the issue of cross-coding? To enable this, a data set of 'code equivalents' could be set up and made available at W3C. Such a data set might allow highly efficient language translation, even if it did not immediately allow the most efficient coding in any specific language. It could enable cross-platform programming and integration to happen far more easily and quickly than is otherwise possible.
With the adoption of an abstraction method whereby W3C specifications would be translated into a very specific and detailed meta-language, code could be generated automatically according to translation specifications for particular programming languages. What appeals to me personally is that this could be made possible via browser/editor technology. Coding could become highly efficient, with, for example, expert organisations providing products and services for further abstraction-based translation, tailoring performance for particular functions or platforms, even optimising for specific hardware.
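To make the idea concrete, here is a minimal sketch, with invented names (`Call`, `TRANSLATIONS`, `translate` and the `print_line` operation are all hypothetical, not any existing W3C format): a tiny "meta-language" of abstract operations, plus per-language translation specifications that render the same abstraction as code in different target languages.

```python
from dataclasses import dataclass

@dataclass
class Call:
    """One abstract operation: a named function applied to arguments."""
    name: str
    args: list

# Translation specifications: how each abstract operation is spelled in
# each target language. A real 'code equivalents' data set would be far
# richer than this two-entry illustration.
TRANSLATIONS = {
    "python": {"print_line": "print({args})"},
    "c":      {"print_line": 'printf("%s\\n", {args});'},
}

def translate(node: Call, language: str) -> str:
    """Render one abstract operation as concrete code for one language."""
    template = TRANSLATIONS[language][node.name]
    return template.format(args=", ".join(node.args))

# The same abstraction, rendered for two different languages.
node = Call("print_line", ['"hello"'])
print(translate(node, "python"))  # print("hello")
print(translate(node, "c"))       # printf("%s\n", "hello");
```

The point of the sketch is that a single master abstraction is authored once, and each language is reached by its own translation specification rather than by translating one language's code into another's.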
My understanding is that coding languages differ in their flexibility, relevance and applicability, possibly in a way similar to differences between human languages. Perhaps people find one language more expressive than another, according to their linguistic needs, preferences and experience. But I am sure that many would agree that as people become able to express themselves in more languages, they become capable in more activities (not least diverse self-expression and communication).
The method could enable cross-platform programming and cross-platform software integration to happen far more easily and quickly than is otherwise possible. This might, for example, be achieved using Amaya plus a mailing list or web forms. For efficiency, conformity-enforcing forms might be best, with submissions written to an online (inbound) database, to be checked by staff who would test them and, at their discretion, update the translation set (outbound) server database and/or the program abstraction itself. Abuse of the web forms could be deterred by requiring a username and password, so that abusers of the system could be blocked.
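The review workflow described above can be sketched roughly as follows; all names here (`submit`, `review`, the record fields) are invented for illustration. Form submissions land in an inbound store, and only entries approved by staff are promoted to the published (outbound) translation set.

```python
inbound = []   # submissions awaiting staff review
outbound = {}  # the published translation set: (language, operation) -> code

def submit(user: str, language: str, operation: str, code: str) -> None:
    """Record a web-form submission. A conformity-enforcing form would
    validate the fields before anything reached this point."""
    inbound.append({"user": user, "language": language,
                    "operation": operation, "code": code})

def review(entry_index: int, approved: bool) -> None:
    """Staff test a submission and, at their discretion, publish it to
    the outbound translation set; rejected entries are simply dropped."""
    entry = inbound.pop(entry_index)
    if approved:
        outbound[(entry["language"], entry["operation"])] = entry["code"]

# One submission passing through the workflow.
submit("a_contributor", "python", "print_line", "print({args})")
review(0, approved=True)
```

Keeping the inbound and outbound stores separate is what lets staff act as the quality gate: nothing a contributor submits affects the served translation set until it has been tested and approved.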
It is acknowledged that where coding languages do not overlap in functionality (i.e. do not translate well), finding coding equivalences for the data set might become more difficult. However, I am sure that such a method as outlined here, for cross-platform programming and cross-coding program integration, is worth developing. It is further noted that a master abstraction would be required since, as anyone who has tried it knows, when text is translated from one language to another, and then another, the original meaning is easily lost, and the result can be problems and misinterpretations.
There may be further possible applications besides, such as a common human language data set (not equivalent to HumanML). It would not be as prescriptive as dictionaries, though these would be used to support the effort. The nuances of grammar, context and language could be codified into a workable tool. Such a human language data set would be huge, and forever changing. However, unlike the thoroughly commendable efforts of those who contribute to dmoz.org, with its human-researched search engine, the human language data set would draw more upon dictionaries, encyclopedias and specialist publications, rather than solely upon the content of the whole World Wide Web, which is awash with noise of varying cultural and factual relevance.
W3C promotes enabling technologies and approaches in a way that has inspired and enthused me for a number of years. I wish to contribute further, but find that my coding skills are not currently up to the task. The code for software such as Amaya could be made available with each release as an abstraction, deliverable online by code translation scripts (compiled client-side) or as server-side packaged data. I also believe that diversifying into different languages in this way could open avenues of possibility faster, allowing new ideas to emerge.
Perhaps W3C already does something like this, but in a way unseen by the public? It would be interesting to learn more.
Note: Any errors, incorrect definitions or assumptions are my own and I apologise.