Re: ISO 639 URIs

Hi Christian, hi all,

Wouldn’t it be nice if the domain were managed by a group of people from the LLOD community, providing linked data on languages as an aggregation of all the datasets you mentioned, along with all the “sameAs” relations?

I am thinking of a semi-automatic process (à la DBnary) that would update its data every month or so from CSVs and other already-available linked datasets, providing an always up-to-date registry.
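To make the idea concrete, here is a minimal sketch of the kind of aggregation step I have in mind: turning tabular registry records into Turtle with “sameAs” links across registries. The URI patterns below are illustrative assumptions, not the registries’ official linked-data endpoints, and a real pipeline would of course read the source CSV/TSV files rather than in-memory tuples.

```python
# Illustrative only: URI patterns are assumptions for the sketch.
LOC_BASE = "http://id.loc.gov/vocabulary/iso639-2/"   # LoC (ISO 639-2)
SIL_BASE = "https://iso639-3.sil.org/code/"           # SIL (ISO 639-3)
LEXVO_BASE = "http://lexvo.org/id/iso639-3/"          # lexvo-style (ISO 639-3)

def to_turtle(records):
    """records: iterable of (iso639_3, iso639_2, english_name) tuples;
    iso639_2 is None when no Part 2 code exists for the language."""
    lines = [
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for code3, code2, name in records:
        subj = f"<{LEXVO_BASE}{code3}>"
        lines.append(f'{subj} rdfs:label "{name}"@en .')
        # Link to the SIL page for the same ISO 639-3 code.
        lines.append(f"{subj} owl:sameAs <{SIL_BASE}{code3}> .")
        if code2:  # a Part 2 code exists, so a LoC URI exists too
            lines.append(f"{subj} owl:sameAs <{LOC_BASE}{code2}> .")
    return "\n".join(lines)

print(to_turtle([("eng", "eng", "English"),
                 ("csp", None, "Southern Ping Chinese")]))
```

A monthly cron job regenerating such files from the upstream CSVs would already give us the “always up to date” property.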

Moreover, the LoC linked data is quite poor compared to what lexvo had (for instance, the “variant” names of the English language are only available in English, French and German).

This solution would involve a dedicated team of maintainers (in the long run) and a rather small infrastructure to provide the data (which could simply be served from static files plus content negotiation). It assumes that the generation of URIs and the accompanying data can be fully automated (which may not be the case if there are name clashes among the codes). It also assumes that the licences of the different datasets allow for it (which I am unsure about regarding SIL…).
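As a sketch of the “static files + content negotiation” setup: assuming the registry is pre-generated as one `.ttl`/`.rdf`/`.html` file per code, the dispatch logic is tiny. This is a deliberately naive first-match negotiation that ignores q-value ordering, just to show how little server logic is needed.

```python
# Map a request's Accept header to a pre-generated static file.
# File naming scheme (<code>.<ext>) is an assumption of this sketch.
EXTENSIONS = {
    "text/turtle": "ttl",
    "application/rdf+xml": "rdf",
    "text/html": "html",
}

def negotiate(code, accept_header):
    """Return the static file to serve for a language code,
    falling back to the HTML view."""
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()  # drop q-values
        if media_type in EXTENSIONS:
            return f"{code}.{EXTENSIONS[media_type]}"
    return f"{code}.html"

print(negotiate("csp", "application/rdf+xml;q=0.9, text/html;q=0.8"))  # → csp.rdf
```

In practice even this could be replaced by a few rewrite rules in the web server configuration, so the infrastructure cost really is minimal.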

I also think that such an alternative dataset may be necessary for others who need more information attached to the languages they deal with (e.g. date annotations for historical languages, geographical (space/time) annotations for all languages, etc.).



> On 7 Jul 2020, at 18:40, Christian Chiarcos <> wrote:
> Dear all,
> for almost a decade, the Linguistic Linked Open Data community has largely relied on for providing LOD-compliant language identifier URIs, esp. with respect to ISO 639-3. Unfortunately, this got out of sync with the official standard over the years (and when I tried to confirm this impression by checking one of the more recently created language tags, csp [Southern Ping Chinese], I found that lexvo was down).
> However, even if this is fixed, the synchronization issue will arise again, and as ISO 639 keeps developing (at a slow pace), I was wondering whether we should not consider a general shift from lexvo URIs to those provided by the official registration authorities.
> For ISO 639-1 and ISO 639-2, this is the Library of Congress, and they provide
> - a human-readable view:, resp. (this is actually machine-readable, too: XHTML+RDFa!),
> - a machine-readable view (e.g.,,, and
> - content negotiation (,, working at least for application/rdf+xml)
> The problem here is ISO 639-3. The registration authority is SIL and they provide resolvable URIs, indeed, e.g., However, this is plain XHTML only, nothing machine-readable (in particular not the mapping to the other ISO 639 standards). On the positive side, their URIs seem to be stable, and also to preserve deprecated/retired codes (
> I'm wondering what people think. Basically, I see four alternatives to Lexvo URIs:
> - Work with current SIL URIs, even though these do not provide Linked Data.
> - Approach SIL to provide an RDF dump (if not anything more advanced) in addition to the HTML and TSV editions they currently provide.
> - Approach IANA about an RDF edition of the BCP47 subtag registry ( This contains a curated subset of ISO language tags and is supposed to be used in RDF anyway. [This has been suggested before:]
> - Approach the Datahub team to provide an RDF view on their CSV collection of language codes (, harvested from LoC and the IANA subtag registry, but regularly updated)
> What would be your preferences? Any other ideas? In any case, if we're going to reach out to SIL, IANA or Datahub, we should be able to demonstrate that this is a request from a broader community, because it would come with some effort for them.
> Best,
> Christian
> NB: Apologies for sending this to multiple mailing lists, but I think we should work towards a broader consensus for language resources in general here.

Received on Wednesday, 8 July 2020 09:46:58 UTC