Hi Felix, Pedro, Tadej, all,

>> 4) Would 1-2 be consumed by an MT tool, or by other tools?
>>
> These can basically be consumed by language-processing tools
> such as MT and other language technology that needs content or
> semantic information, for instance text analytics, semantic
> search, etc. In localization chains, this information can also
> be used by automatic or semi-automatic processes (such as the
> selection of dictionaries for translation, or the selection of
> translators/revisers by subject area).

FWIW: I've listed an Okapi 'step' as a consumer for namedEntity and
textAnalysisAnnotation. It would send the data to process, in HTML
format, to the producer (e.g. Enrycher), get back the annotated HTML,
read the annotations, and store them in its internal format. A next
step could then use the annotations to produce some type of output
along with a translation kit: notes in an XLIFF document, a look-up
file, etc.; something that translators could use for reference.

Cheers,
-ys
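A minimal sketch of what such a consumer step might look like, in Java
(Okapi's implementation language). The SERVICE endpoint, the class and
method names, and the span-based markup pattern are illustrative
assumptions, not Enrycher's or Okapi's actual API; the its-ta-*
attributes are ITS 2.0's HTML serialization of the Text Analysis data
category.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch of a text-analysis consumer step: send HTML to an annotation
 * producer, get annotated HTML back, and extract the ITS 2.0 Text
 * Analysis annotations for later steps to use.
 */
public class TextAnalysisStep {

    /** One extracted annotation: the annotated text, its entity
     *  identifier reference, and a confidence score. */
    public record Annotation(String text, String identRef, double confidence) {}

    // Hypothetical endpoint; a real producer's URL and protocol may differ.
    private static final URI SERVICE = URI.create("http://example.org/enrycher/annotate");

    private final HttpClient client = HttpClient.newHttpClient();

    /** Post the source HTML to the producer and return its annotated HTML. */
    public String annotate(String html) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(SERVICE)
                .header("Content-Type", "text/html; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(html))
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    /**
     * Read ITS 2.0 its-ta-ident-ref / its-ta-confidence attributes back
     * out of the annotated HTML. A real step would use an HTML parser;
     * a regex over simple span markup keeps this sketch short.
     */
    public List<Annotation> readAnnotations(String annotatedHtml) {
        Pattern p = Pattern.compile(
            "<span[^>]*its-ta-ident-ref=\"([^\"]+)\"[^>]*"
            + "(?:its-ta-confidence=\"([^\"]+)\"[^>]*)?>([^<]*)</span>");
        List<Annotation> out = new ArrayList<>();
        Matcher m = p.matcher(annotatedHtml);
        while (m.find()) {
            double conf = (m.group(2) == null) ? 1.0 : Double.parseDouble(m.group(2));
            out.add(new Annotation(m.group(3), m.group(1), conf));
        }
        return out;
    }
}

A subsequent step could then serialize each Annotation as, say, a note
in an XLIFF document next to the text unit it applies to, or into a
separate look-up file, which is the translator-reference output the
message describes.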