- From: Richard Newman <r.newman@reading.ac.uk>
- Date: Fri, 1 Dec 2006 09:27:59 -0800
- To: "Chris Bizer" <chris@bizer.de>
- Cc: "'Karl Dubost'" <karl@w3.org>, "'Damian Steer'" <damian.steer@hp.com>, <semantic-web@w3.org>
Systemone have Wikipedia dumped monthly into RDF:

  http://labs.systemone.at/wikipedia3

A public SPARQL endpoint is on their roadmap, but it's only 47 million
triples, so you should be able to load it in a few minutes on your
machine and run queries locally.

-R

On 1 Dec 2006, at 4:30 AM, Chris Bizer wrote:

>> I wish that wikipedia had a fully exportable database
>> http://en.wikipedia.org/wiki/Lists_of_films
>>
>> For example, being able to export all data of this movie as RDF,
>> maybe a templating issue at least for the box on the right.
>> http://en.wikipedia.org/wiki/2046_%28film%29
>
> Should be an easy job for a SIMILE like screen scraper.
>
> If you start scraping down from the Wikipedia film list, you should
> get a fair amount of data.
>
> To all the Semantic Wiki guys: Has anybody already done something
> like this? Are there SPARQL end-points/repositories for
> Wikipedia-scraped data?
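
For anyone who wants to try the "load it and query locally" route, here is
a minimal sketch using Python and rdflib. The file name, the serialization
format (N-Triples), and the rdfs:label query are assumptions for
illustration only; they are not taken from the wikipedia3 vocabulary, so
check the dump's documentation before adapting this.

    # Minimal sketch: load a local RDF dump and run a SPARQL query with rdflib.
    # Assumes the dump has been downloaded as N-Triples to wikipedia3.nt
    # (hypothetical file name).
    from rdflib import Graph

    g = Graph()
    g.parse("wikipedia3.nt", format="nt")  # parse the local dump into memory

    # Hypothetical query: find resources whose rdfs:label mentions "2046".
    results = g.query("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?s ?label
        WHERE {
            ?s rdfs:label ?label .
            FILTER regex(str(?label), "2046")
        }
        LIMIT 10
    """)

    for row in results:
        print(row.s, row.label)

Note that an in-memory graph is only practical for a slice of the data; for
the full 47 million triples you would want a persistent triple store (Jena,
Sesame, or similar) behind the same SPARQL query.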
Received on Friday, 1 December 2006 17:28:12 UTC