Re: Workflows for localizing RDF (Fwd: Fwd: "Organization Ontology" Japanese translation available)

Hi Dave & Elena, all,

sorry for the late follow-up. Thanks a lot to Elena for the helpful 
comments. With regard to the "external lexicon" approach I am not sure 
whether a lexicon format is appropriate: as I understand it, in 
localization workflows the string externalization is temporary and 
specific to the original content. That would potentially mean one 
lexicon per file. Also, a lexicon may not carry the workflow 
information you would have in a localization format, e.g.: has the 
translation been reviewed and by whom, etc. Did you take such 
(provenance / quality / other translation-process-related) information 
into account in Monnet?
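For illustration, a localization format can record that workflow state 
per translation unit. A minimal XLIFF 1.2 sketch (the unit id and label 
text are hypothetical, not from any actual file):

```xml
<!-- Hypothetical XLIFF 1.2 fragment: the state attribute on <target>
     carries review/workflow status that a plain lexicon entry would
     not capture -->
<trans-unit id="label-1">
  <source xml:lang="en">Organization</source>
  <target xml:lang="ja" state="needs-review-translation">組織</target>
</trans-unit>
```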

I agree with Elena that in both cases the context would be helpful. The 
XLIFF skeleton file can provide that, with the drawback of depending on 
one specific serialization: XLIFF.
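As a sketch of that dependency (file names are hypothetical), the 
skeleton is referenced from the XLIFF header and ties the extracted 
strings back to their original context:

```xml
<!-- Hypothetical XLIFF 1.2 header: the skeleton file preserves the
     original document context for the extracted strings -->
<file original="org-ontology.ttl" source-language="en"
      target-language="ja" datatype="plaintext">
  <header>
    <skl>
      <external-file href="org-ontology.skl"/>
    </skl>
  </header>
  <body/> <!-- trans-units would go here -->
</file>
```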

About the "rdfs:dummy_for_its_1" namespace: this is needed to allow 
round-tripping with XSLT - it is tool specific and you can ignore it.

About "But do we get issues when using this data type (or any non 
datatype) when also using language tags on the literal?": Dave is 
right, the HTML data type would not allow using a language tag. You 
could only express the language inside the HTML content itself, which 
means it is not accessible to SPARQL queries.
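A minimal Turtle sketch of the two alternatives (the example IRI and 
labels are illustrative):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# Plain literal with a language tag - visible to SPARQL's lang()
ex:org rdfs:label "Organization"@en .

# rdf:HTML literal - no language tag allowed; the language can only be
# stated inside the markup, where SPARQL's lang() cannot see it
ex:org rdfs:label "<span lang=\"ja\">組織</span>"^^rdf:HTML .
```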

Your feedback was quite useful - my main point is: do we want to write 
all this down as easy-to-understand best practices? Dave had asked a 
similar question, I think.


On 10.02.14 01:21, wrote:
> Hi Felix,
> Couple of comments inline:
> On 07/02/2014 11:39, Felix Sasaki wrote:
>>> that makes sense - but do we need to have a special literal type to 
>>> indicate that it should be parsed for 'inline' tags? 
>> See above - the HTML literal
>> should do the job.
> But do we get issues when using this data type  (or any non 
> | when 
> also using language tags on the literal? :
> |
>>> Also in some cases, for example if the span had its-term-info-ref 
>>> pointing to a term definition elsewhere in the linked data cloud, 
>>> best practice might be to reform (i.e. extract) the literal into a 
>>> NIF subgraph, with the annotated sub-strings as separate nif:string 
>>> objects.
>> Not sure if for generating an XLIFF file (see above) you would need 
>> a NIF subgraph. The main motivation for my BP proposal was: allow 
>> people working with localization tools (= processing XLIFF files) to 
>> translate labels in linked data.
>> So all the below makes sense IMO for textual content extracted from 
>> HTML / XML etc. But processing the labels in linked data with NIF? 
>> Not sure if that is needed, and it might even hinder XLIFF-based 
>> localization workflows.
> Agreed, getting the annotation to work with XLIFF/ITS in a way that 
> can be used in existing tools should be the primary aim here.
> The use of NIF is more relevant if you wanted to make the content 
> available to NLP tools that could understand NIF - which is a 
> different use case.
> cheers,
> Dave
>> Disclaimer: really nothing against NIF ;) My point is only about the 
>> right approach for label translation.
>> Best,
>> Felix

Received on Tuesday, 11 February 2014 15:56:53 UTC