- From: Marco Fossati <fossati@fbk.eu>
- Date: Wed, 2 Sep 2015 20:29:41 +0200
- To: info@iaoa.org, semantic-web@w3.org, public-lod@w3.org, public-ontolex@w3.org, CHI-ANNOUNCEMENTS@listserv.acm.org, aisworld@lists.aisnet.org, planetkr@kr.org, Community@sti2.org, semanticweb@yahoogroups.com, linguist@linguistlist.org, dbpedia-discussion@lists.sourceforge.net, dbpedia-developers@lists.sourceforge.net, public-ldp@w3.org, semantic_web_doktorandennetzwerk@lists.spline.inf.fu-berlin.de, public-vocabs@w3.org, dl@dl.kr.org, spaghettiopendata@googlegroups.com
[Begging pardon if you read this multiple times]

The Italian DBpedia chapter, on behalf of the whole DBpedia Association, is thrilled to announce the release of new datasets extracted from Wikipedia text. This is the outcome of an outstanding Google Summer of Code 2015 project, which applies NLP techniques to acquire structured facts from a textual corpus. The approach has been tested on the soccer use case, with the Italian Wikipedia as input.

The datasets are publicly available at:
http://it.dbpedia.org/downloads/fact-extraction/
and loaded into the SPARQL endpoint at:
http://it.dbpedia.org/sparql

You can check out this article for more details:
http://it.dbpedia.org/2015/09/meno-chiacchiere-piu-fatti-una-marea-di-nuovi-dati-estratti-dal-testo-di-wikipedia/?lang=en

If you feel adventurous, you can fork the codebase at:
https://github.com/dbpedia/fact-extractor

Get in touch with Marco at fossati@fbk.eu for everything else.

Best regards,
Marco Fossati
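For readers who want to try the endpoint mentioned above, here is a minimal Python sketch of how a SPARQL request could be built against it. The query itself is a generic illustrative one (it simply lists ten triples); the actual predicates used by the fact-extraction datasets are documented at the download page, not assumed here. The `query` and `format` request parameters follow the standard SPARQL protocol convention used by DBpedia endpoints.

```python
import urllib.parse

ENDPOINT = "http://it.dbpedia.org/sparql"

# Illustrative query only: it returns any ten triples from the store.
# Replace it with predicates from the fact-extraction datasets as needed.
query = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

# Encode the query as GET parameters; `format` asks for JSON results,
# a media type DBpedia endpoints commonly support.
params = urllib.parse.urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
url = ENDPOINT + "?" + params
print(url)
```

The resulting URL can be fetched with any HTTP client (e.g. `urllib.request.urlopen(url)`) or pasted into a browser; the endpoint also offers an interactive query form at the same address.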
Received on Wednesday, 2 September 2015 18:30:13 UTC