Re: AW: ANN: RDF Book Mashup - Integrating Web 2.0 data sources like Amazon and Google into the Semantic Web

… I should note, though, that it doesn't have article text, but it  
would provide a very useful framework for controlling a scraper to  
augment the data. Full-text data is also on their roadmap.

-R

On 1 Dec 2006, at 9:27 AM, Richard Newman wrote:

>
> Systemone have Wikipedia dumped monthly into RDF:
>
> http://labs.systemone.at/wikipedia3
>
> A public SPARQL endpoint is on their roadmap, but it's only 47  
> million triples, so you should be able to load it in a few minutes  
> on your machine and run queries locally.
>
> -R
>
>
> On 1 Dec 2006, at 4:30 AM, Chris Bizer wrote:
>
>>> I wish that Wikipedia had a fully exportable database
>>> http://en.wikipedia.org/wiki/Lists_of_films
>>>
>>> For example, being able to export all the data about this movie as
>>> RDF; that may mostly be a templating issue, at least for the infobox
>>> on the right.
>>> http://en.wikipedia.org/wiki/2046_%28film%29
>>
>> Should be an easy job for a SIMILE-like screen scraper.
>>
>> If you start scraping down from the Wikipedia film list, you  
>> should get a
>> fair amount of data.
>>
>> To all the Semantic Wiki guys: Has anybody already done something  
>> like this?
>> Are there SPARQL endpoints/repositories for Wikipedia-scraped data?
>
>
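The infobox-scraping idea discussed above could start out something like this, using only the Python standard library; the HTML snippet is a hand-made stand-in for a real Wikipedia film page, not actual page markup:

```python
# Sketch: turn a film infobox's label/value rows into
# subject/predicate/object triples. The SNIPPET below is an invented
# simplification of Wikipedia's infobox HTML.
from html.parser import HTMLParser

SNIPPET = """
<table class="infobox">
<tr><th>Directed by</th><td>Wong Kar-wai</td></tr>
<tr><th>Released</th><td>2004</td></tr>
</table>
"""

class InfoboxParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows = []      # collected (predicate, object) pairs
        self._cell = None   # "th" or "td" while inside a cell
        self._pred = None   # pending predicate from the last <th>

    def handle_starttag(self, tag, attrs):
        if tag in ("th", "td"):
            self._cell = tag

    def handle_endtag(self, tag):
        if tag in ("th", "td"):
            self._cell = None

    def handle_data(self, data):
        text = data.strip()
        if not text or self._cell is None:
            return
        if self._cell == "th":
            self._pred = text
        elif self._pred:
            self.rows.append((self._pred, text))
            self._pred = None

parser = InfoboxParser()
parser.feed(SNIPPET)

# Attach the page title as the subject to form triples.
triples = [("2046 (film)", pred, obj) for pred, obj in parser.rows]
for t in triples:
    print(t)
```

A real scraper would fetch each film page linked from the film lists and would need to cope with Wikipedia's much messier markup, but the label-column-to-predicate mapping is the core of the idea.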

Received on Friday, 1 December 2006 17:45:58 UTC