- From: Robin Berjon <robin@w3.org>
- Date: Thu, 02 Oct 2014 12:10:08 +0200
- To: Tobie Langel <tobie.langel@gmail.com>, Shane McCarron <shane@aptest.com>
- CC: "spec-prod@w3.org Prod" <spec-prod@w3.org>
On 02/10/2014 10:10, Tobie Langel wrote:
> My plan for this solution is to do daily crawling of relevant specs and
> extract the dfn and put them in a DB. Further refinements could include
> a search API, like I added for Specref and exposed within Respec.

Could you somehow reuse or modify what Shepherd does here? If it includes
enough information (or additional extraction can be easily added) and new
specs can be added to its crawling (which I suspect ought to be relatively
easy; I recall Peter's code being able to process quite a lot of different
documents), then we can all align, which I reckon is a win (even without
counting the saved cycles).

Shepherd exposes an API that allows you to simply dump the data it has. If
you look inside update.py in Bikeshed you can see how it works. What
Bikeshed does is, instead of querying services live, allow the user to
regularly call bikeshed update and get a fresh DB (of a bunch of stuff).
The same could be injected into SpecRef.

> My focus will be on gathering the data and providing a JSON API. Not
> on actual implementation within ReSpec (which I won't have cycles for at
> that time, I'm afraid).

The hard part is getting the data. Hooking it into ReSpec oughtn't be
difficult, unless I'm missing something.

--
Robin Berjon - http://berjon.com/ - @robinberjon
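The update-then-cache pattern described above (fetch a full dump periodically rather than querying a service live on every build) can be sketched as follows. Note this is a minimal illustration, not Shepherd's or Bikeshed's actual code; the fetch callable and cache layout are hypothetical.

```python
import json
from pathlib import Path

def refresh_cache(fetch_dump, cache_path):
    """Fetch a fresh definitions dump (e.g. via an HTTP GET returning
    parsed JSON) and write it to a local cache file. Intended to be run
    periodically, like `bikeshed update`."""
    data = fetch_dump()
    path = Path(cache_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2))
    return data

def lookup_dfn(term, cache_path):
    """Resolve a definition from the local cache without touching the
    network; returns None if the term is unknown."""
    data = json.loads(Path(cache_path).read_text())
    return data.get(term)
```

A build tool would call refresh_cache only when the user explicitly asks for an update, and lookup_dfn on every build, so builds stay fast and work offline.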
Received on Thursday, 2 October 2014 10:10:17 UTC