
Re: FW: test status and test service

From: Lee Feigenbaum <lee@thefigtrees.net>
Date: Mon, 30 Jul 2007 02:21:03 -0400
Message-ID: <46AD834F.60400@thefigtrees.net>
To: Ivan Mikhailov <imikhailov@openlinksw.com>
CC: 'RDF Data Access Working Group' <public-rdf-dawg@w3.org>

Hi Ivan,

Thanks for the update.

I wanted to let the group know that I've sent out information about our
test suite and implementation reporting to a collection of about 15-20
SPARQL implementors that I compiled. Already, I've heard back from
Richard Newman (twinql) asking if we'll be supplying this exact service
to test SPARQL endpoints, so it looks like it will be used, which is great.

Ivan, I'll slot an agenda item on Tuesday for an update on the service
and we can see where we stand.


[EricP: Sorry for the mistaken send to -request.]

Ivan Mikhailov wrote:
> Hi everyone,
> That's what I intended to discuss during the meeting :)
>> LeeF proposed an online service that would take as input a URL to a (public)
>> SPARQL endpoint and would fire the DAWG test suite at it and generate EARL
>> results. Is that still something that you think you might be able to
>> provide? I'm not sure how many implementations already have test harnesses
>> that handle the DAWG test format, but for those that don't it would be an
>> invaluable service. (It wouldn't be feasible to run all the tests over the
>> wire in real-time, so I'd imagine it would need to be a stateful service
>> that gave back an ID such that an implementer could ask for results (by ID)
>> later.)
> That's what we're implementing now.
> The application handles the following data:
> 1. Table of users. Every test run is made by a named user, but browsing of
> published results may be anonymous. Some users will be flagged as 'admins'
> who will be "more equal than others".
> 2. Table of test suites. There may be different sets of tests to run, at
> least different versions of the DAWG test suite.
> 3. Table of known web service endpoints. Every user may describe the
> endpoints he tests (URL, description of the engine, etc.); these endpoints
> are by default invisible to other users, but the 'owner' may explicitly make
> them 'public' and grant others permission to run tests on them.
> When a user chooses a web service to use, he will see a list ordered by
> ownership and then by endpoint IRI: all endpoints owned by the user, then all
> public endpoints owned by 'admins' (and not listed above), then all other
> public endpoints.
> 4. Table of test runs. By default, the result of any test run is private,
> but the user who executed it can make it public. A few descriptions are
> attached to each test run. Initially a description of the web service engine
> is copied from the table of endpoints. The user who executed the test may
> add his own comments. If a test run is made on a public endpoint by a user
> other than the endpoint owner, then the owner may add his $0.02 in a
> separate comment.
> 5. Table of detailed results: output and the status of all test cases in all
> runs.
> Test data will be stored as read-public DAV resources writable by 'admins'.
> When SPARUL becomes more popular, the application will be able to upload
> test data to an endpoint at the request of the endpoint owner.
> The initial version of the application will be a bit ugly: plain HTML forms
> without AJAX and even without frames. It will be possible to call the
> service from stupid clients like wget, so test scripts will be able to wget
> a URL of the test application and get back a simple result.
> Best Regards,
> Ivan Mikhailov.
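The five tables Ivan lists can be sketched as a small relational schema. The sketch below is purely illustrative: the table names, column names, SQLite storage, and the `endpoints_for` helper are all assumptions of mine, not details from the actual service. The helper implements the listing order he describes (the user's own endpoints, then public endpoints owned by admins, then all other public endpoints, each group ordered by endpoint IRI).

```python
# Hypothetical schema for the test service's five tables (names are my own
# invention; the real implementation is not described in this thread).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id       INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    is_admin INTEGER NOT NULL DEFAULT 0      -- 'admins' are "more equal"
);
CREATE TABLE test_suites (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL                       -- e.g. a DAWG test-suite version
);
CREATE TABLE endpoints (
    id          INTEGER PRIMARY KEY,
    owner_id    INTEGER NOT NULL REFERENCES users(id),
    url         TEXT NOT NULL,
    description TEXT,
    is_public   INTEGER NOT NULL DEFAULT 0   -- private unless owner publishes
);
CREATE TABLE test_runs (
    id             INTEGER PRIMARY KEY,
    user_id        INTEGER NOT NULL REFERENCES users(id),
    endpoint_id    INTEGER NOT NULL REFERENCES endpoints(id),
    suite_id       INTEGER NOT NULL REFERENCES test_suites(id),
    is_public      INTEGER NOT NULL DEFAULT 0,  -- private unless runner publishes
    runner_comment TEXT,
    owner_comment  TEXT                      -- endpoint owner's separate $0.02
);
CREATE TABLE results (                       -- detailed per-test-case output
    run_id    INTEGER NOT NULL REFERENCES test_runs(id),
    test_case TEXT NOT NULL,
    status    TEXT NOT NULL,                 -- e.g. 'pass' / 'fail'
    output    TEXT
);
""")

# Sample data: a regular user, an admin, and a third user.
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", 0), (2, "admin", 1), (3, "carol", 0)])
conn.executemany("INSERT INTO endpoints VALUES (?, ?, ?, ?, ?)", [
    (1, 1, "http://example.org/alice/sparql", "private", 0),
    (2, 2, "http://example.org/admin/sparql", "public, admin-owned", 1),
    (3, 3, "http://example.org/carol/sparql", "public, user-owned", 1),
])

def endpoints_for(conn, user_id):
    """List endpoints visible to a user in the described order: the user's
    own endpoints first, then public admin-owned ones, then other public
    ones, each group sorted by endpoint URL (IRI)."""
    return conn.execute("""
        SELECT e.url
          FROM endpoints e JOIN users u ON u.id = e.owner_id
         WHERE e.owner_id = ? OR e.is_public = 1
         ORDER BY (e.owner_id = ?) DESC, u.is_admin DESC, e.url
    """, (user_id, user_id)).fetchall()
```

For example, `endpoints_for(conn, 1)` would list alice's own (private) endpoint first, then the admin's public endpoint, then carol's; carol would see only her own endpoint and the admin's, since alice's remains private.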
Received on Monday, 30 July 2007 06:21:16 UTC
