- From: james anderson <james@dydra.com>
- Date: Tue, 19 May 2015 09:13:26 +0000
- To: public-rdf-dawg-comments@w3.org
- Cc: tech@dydra.com
- Message-ID: <0000014d6b72f7ce-32831691-921b-41da-880b-9a72030979a5-000000@eu-west-1.amazonse>
good morning;

thank you for your note.

> On 2015-05-19, at 09:51, Gregory Williams <greg@evilfunhouse.com> wrote:
>
>> On May 18, 2015, at 7:28 AM, james anderson <james@dydra.com> wrote:
>>
>>> I believe that the problem you’re referring to here is the difference between “” and “”@ja in the results files?
>>
>> yes, in one. in the other there is also a french language tag which would appear to be anomalous.
>
> Could you point specifically at the problem you’re seeing? I can’t seem to find it in either strbefore01a.srx or strafter01a.srx. The only “french language tag” I see relating to these tests is in strbefore01a.srx, and that looks valid to me as the STRBEFORE function finds a match and so “returns a literal of the same kind as the first argument” (in this case, “françai”@fr).
>
>> if i were to run straight from the net, i might be persuaded to agree with you.
>> that practice suffers, however, from two deficiencies:
>> - it is quite circumstantial, in that one cannot point to an object and indicate compliance with it, but can only say “hey, that’s what was being served on dddd-dd-dd@tt:tt:tt”
>> - there have been innumerable occasions over the past days when w3c’s web front-end decided to no longer serve the content, which makes it difficult to run tests in that mode. sometimes for days.
>>
>> i could always wget and set up our own git repository, but having observed any number of those already in the wild, each of unknown provenance and with unknown content, that does not seem to be a well-considered approach.
>
> I wasn’t suggesting that you run tests directly against the network-served files. Only that you begin the process of running tests by parsing the manifest files and using that data to find approved tests (as opposed to running all tests of type mf:QueryEvaluationTest, for example, which might lead you to run a non-approved test).

i guess i might have been confused by what’s in the served directory, v/s what’s in the tar archive, v/s the mf:QueryEvaluationTest complement present in the respective manifest, v/s the respective dawgt:approval status, v/s the content of the manifest’s mf:entries list. my confusion. (a sketch of the manifest query, as i now understand the suggestion, follows as a postscript below.)

given my evidently limited capacity, combined with the eventual variations entailed by simple v/s typed strings, and by dydra’s value v/s lexical semantics for domains such as dates and numbers, the approach of setting up a git repository does recommend itself.[1] we can leave the “current” w3c state in master and work through variations and simplifications in our own branch, with the goal of creating a clear record of what we actually do and of any variations with respect to the standard.

in the process of constructing that repository with content consolidated from the 1.1 and 1.0 suites, i observe two cases where the 1.1 test suite master manifest entry list does not appear to be conclusive:

- the served entailment directory contains an index.html file, which masks the content. the archive contains no index file. for the moment, i took the served index file and the archive content.
- the http-rdf-update directory reflects a different structure than the others. a reference appears among the entries, but there appears to be neither a manifest nor any other “standard-format” test information in the directory itself.

best regards, from berlin

—
[1] : https://github.com/datagraph/w3c-dawg-test-cases

---
james anderson | james@dydra.com | http://dydra.com
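ps: as a sketch of the “parse the manifest and take only the approved entries” step (my own formulation, assuming the usual mf: and dawgt: manifest vocabularies, and not anything taken from the suite itself), a query along these lines over a loaded manifest graph should enumerate the approved evaluation tests by walking mf:entries rather than matching every resource typed mf:QueryEvaluationTest:

    prefix mf:    <http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#>
    prefix dawgt: <http://www.w3.org/2001/sw/DataAccess/tests/test-dawg#>
    prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

    # walk the manifest's mf:entries list and keep only approved query evaluation tests
    select ?test ?name
    where {
      ?manifest mf:entries ?list .
      ?list rdf:rest*/rdf:first ?test .
      ?test a mf:QueryEvaluationTest ;
            mf:name ?name ;
            dawgt:approval dawgt:Approved .
    }

the restriction to entries reachable from mf:entries, as i read gregory’s suggestion, is what keeps non-approved and auxiliary resources out of a run.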
Received on Tuesday, 19 May 2015 09:13:57 UTC