
Re: Transliteration

From: Albert Lunde <Albert-Lunde@nwu.edu>
Date: Tue, 20 Oct 1998 07:38:24 -0500
Message-Id: <v03110700b2522f1f9609@[129.105.186.114]>
To: www-international@w3.org
>In the original discussion, there was a tendency to have the second
>language. Personally, I prefer *not* to have the second language.
>
>It would be:
>  - Simpler syntactically
>  - *Less* confusing (it is clear that it is Greek and nothing else)
>  - The information would be carried by the scheme
>
>>> 3) The reason for proposing the extension
>>>     of RFC-1766 is because:
>>>
>>>      3.1) It does *not*  break RFC-1766.
>
>>If you regard the data as a "dialect" of the original language,
>
>This is a good way to view it.

I still think it's a bad idea to use "language" to describe what seems to
me to be a general transformation of various things, including both script
and language.

It's _not_ a dialect either. I have a book about a dialect of Japanese; it
uses pretty much the same romanization schemes as a dozen other books on
"standard" Tokyo Japanese.

I'm also wondering where the names for, say, scripts or transformation
schemes will come from, and how they will be registered. If there's some
existing ISO work to reference, that would be nice.

Does IANA want to get involved in registering terminology that has grown up
ad hoc in linguistics? I'm thinking of things like romanization schemes:
Hepburn, Kunrei, Nippon (for Japanese), McCune-Reischauer (for Korean).
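To see why the scheme matters and not just the language, here is a minimal
sketch (mine, not from any registry) of how the Japanese romanization
schemes named above diverge on the same text. The mapping table covers only
a handful of kana whose standard transcriptions differ; a real romanizer
would also have to handle long vowels, particles, geminates, and so on.

```python
# Illustrative only: a few kana whose romanization differs across schemes.
# The per-syllable values are the standard ones for each scheme, but the
# table and the romanize() helper are hypothetical, for demonstration.
ROMANIZATION = {
    "hepburn": {"\u3057": "shi", "\u3061": "chi", "\u3064": "tsu",
                "\u3075": "fu", "\u3062": "ji", "\u3065": "zu"},
    "kunrei":  {"\u3057": "si",  "\u3061": "ti",  "\u3064": "tu",
                "\u3075": "hu", "\u3062": "zi", "\u3065": "zu"},
    "nippon":  {"\u3057": "si",  "\u3061": "ti",  "\u3064": "tu",
                "\u3075": "hu", "\u3062": "di", "\u3065": "du"},
}

def romanize(kana: str, scheme: str) -> str:
    """Romanize a string kana-by-kana using the named scheme's table."""
    table = ROMANIZATION[scheme]
    return "".join(table.get(ch, ch) for ch in kana)

for scheme in ROMANIZATION:
    print(scheme, romanize("\u3057\u3064", scheme))  # しつ
```

The same input yields "shitsu" under Hepburn but "situ" under Kunrei and
Nippon, so a tag that names only the language cannot identify the
transformation that was applied.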

As I said before, I think there's some prior art in the work of the Text
Encoding Initiative, but I don't know if they have a complete
classification scheme for this sort of thing.

---
    Albert Lunde                      Albert-Lunde@nwu.edu
Received on Tuesday, 20 October 1998 08:37:47 GMT
