- From: Yves Savourel <ysavourel@enlaso.com>
- Date: Tue, 5 Jun 2012 05:53:15 +0200
- To: <public-multilingualweb-lt@w3.org>
Hi Felix, Pedro, Tadej, all,

>> 4) Would 1-2 be consumed by an MT tool, or by other tools?
>
> These can basically be consumed by language processing tools,
> like MT, and other language technology that needs content or
> semantic info: for instance text analytics, semantic search, etc.
> In localization chains, this information can also be used
> by automatic or semi-automatic processes (like the selection of
> dictionaries for translation, or the selection of translators/revisers
> by subject area).

FWIW: I've listed an Okapi 'step' as a consumer for namedEntity and textAnalysisAnnotation. It would send the data to process, in HTML format, to the producer (e.g. Enrycher), get back the annotated HTML, read the annotations and store them in its internal format. A next step could then use the annotations to produce some type of output along with a translation kit: notes in an XLIFF document, a look-up file, etc. -- something that translators could use for reference.

Cheers,
-ys
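As a rough illustration of the "read the annotations" part of that step, here is a minimal Python sketch that pulls ITS 2.0 text-analysis attributes out of annotated HTML. The `its-ta-*` attribute names follow the ITS 2.0 drafts; the exact markup an annotation producer such as Enrycher returns, and the sample `<span>` below, are assumptions for illustration only (the real Okapi step is written in Java against its own filter events).

```python
# Sketch: extract ITS 2.0 text-analysis annotations (its-ta-* attributes)
# from annotated HTML returned by a producer. Assumes well-formed markup;
# the attribute and element choices here are illustrative, not Okapi's
# actual internal representation.
from html.parser import HTMLParser

class TAAnnotationReader(HTMLParser):
    """Collects (text, attributes) pairs for elements carrying its-ta-*."""

    def __init__(self):
        super().__init__()
        self._stack = []       # one entry per open element: (ta_attrs, chunks)
        self.annotations = []  # collected (text, {attr: value}) results

    def handle_starttag(self, tag, attrs):
        # Keep only the text-analysis attributes, if any.
        ta = {k: v for k, v in attrs if k.startswith("its-ta-")}
        self._stack.append((ta, []))

    def handle_data(self, data):
        # Text belongs to every currently open element.
        for _, chunks in self._stack:
            chunks.append(data)

    def handle_endtag(self, tag):
        if self._stack:
            ta, chunks = self._stack.pop()
            if ta:  # only record elements that carried its-ta-* attributes
                self.annotations.append(("".join(chunks), ta))

# Hypothetical annotated HTML as a producer might return it:
html = ('<p>Visit <span its-ta-ident-ref='
        '"http://dbpedia.org/resource/Dublin">Dublin</span>.</p>')
reader = TAAnnotationReader()
reader.feed(html)
print(reader.annotations)
# -> [('Dublin', {'its-ta-ident-ref': 'http://dbpedia.org/resource/Dublin'})]
```

The extracted pairs are what a later step would map into the tool's internal annotations before generating the translation-kit output (XLIFF notes, look-up file, etc.).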
Received on Tuesday, 5 June 2012 03:53:45 UTC