
RE: Converting MDN Compat Data to JSON

From: David Kirstein <frozenice@frozenice.de>
Date: Sun, 12 Jan 2014 22:55:47 +0100
To: "'Renoir Boulanger'" <renoir@w3.org>, "'Pat Tressel'" <ptressel@myuw.net>
Cc: "'Doug Schepers'" <schepers@w3.org>, "'WebPlatform Community'" <public-webplatform@w3.org>
Message-ID: <006c01cf0fe1$0dd67350$298359f0$@frozenice.de>
Hi Renoir,

I'm not sure what this site/utility could do for us. Scrape data? We already read the HTML tables and get reasonable JSON (it surely needs improvement, but I doubt this site/utility would get better data out of it). Run scripts? We can do that on our own.
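For context, the table-reading step amounts to something like this minimal sketch (hypothetical code, not our actual script; it assumes a simple compat table whose header row names browsers and whose body rows name features):

```python
# Hypothetical sketch: turn a simple HTML compat table into JSON,
# using only the Python standard library.
import json
from html.parser import HTMLParser

class CompatTableParser(HTMLParser):
    """Collect the text of each table cell, grouped into rows."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

def table_to_json(html):
    parser = CompatTableParser()
    parser.feed(html)
    header, *body = parser.rows
    # Map each feature row onto the browser columns from the header row.
    return json.dumps(
        {row[0]: dict(zip(header[1:], row[1:])) for row in body},
        indent=2,
    )

# Toy input, standing in for a scraped MDN compat table.
html = """<table>
<tr><th>Feature</th><th>Chrome</th><th>Firefox</th></tr>
<tr><td>border-radius</td><td>4.0</td><td>4.0</td></tr>
</table>"""
print(table_to_json(html))
```

Real MDN tables have rowspans, footnotes, and version prefixes, so the actual script has to do considerably more cleanup than this.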

Let me know if I misunderstood something. :)

-fro

From: Renoir Boulanger [mailto:renoir@w3.org] 
Sent: Sunday, 12 January 2014 22:10
To: Pat Tressel; David Kirstein
Cc: Doug Schepers; WebPlatform Community
Subject: Re: Converting MDN Compat Data to JSON

Hi Pat, David,

I'd like to point out some tool that might be useful to us.

A site/utility called ScraperWiki [0] could be used to host and publicly run import scripts. We do not need ScraperWiki to run them (we will most likely keep a GitHub project for them anyway), but it could run the scripts for us on a regular basis.

Has anybody thought about this?

[0]: https://scraperwiki.com
--
Renoir Boulanger | Developer operations engineer
W3C | webplatform.org

http://w3.org/people/#renoirb ✪ https://renoirboulanger.com ✪ @renoirb

Received on Sunday, 12 January 2014 21:56:14 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:20:56 UTC