RE: [Testing] Alternative approach to the test manifest structure

(Rob, I started this before I saw your reply to Shane… but I think the comments are still relevant.)
 
I think one of our goals is to maximize the adaptability of these tests for validation, in addition to making it possible for us to report successful implementation of the annotation data model features as required to exit CR. I'm still having some difficulty getting the runner to run in my local clone of the WPT repository, but how we do what Rob suggests will have some impact on how the schemas are designed. I've been playing around with variant schemas for several days now, but have delayed putting these up in the web-annotation-tests repository in anticipation of the need to clarify this (and while trying to understand fully the test script logic and capabilities).
 
Fundamentally the approach Rob is proposing sounds fine to me, but it is a bit of a change, and there are some implications we should work through – this sounds like a primary topic for Friday's WG call. Here are some of my observations:
 
1. At what granularity are results reported for CR testing? For example, suppose I have an Annotation with 1 External Web Resource target and 3 bodies, two of which are links in an array (via the body key) to External Web Resources (i.e., strings of format uri in an array) and one of which (using the bodyValue key) is a String Body. Testing could pass the first 2 as valid and the third as invalid (since if the bodyValue key is present as part of an annotation, the spec does not allow the body key to also be present in the same annotation). At least this would be one plausible way to report this. The Annotation as a whole would be invalid, but would we be more granular in reporting results? E.g., if everything else were correct, would we report that the client successfully implemented features like External Web Resource target, @context, Annotation id, External Web Resource body, etc., but failed in attempting to implement the String Body feature?
 
2. How granular do we need to make our schemas to help support adaptability for validation? For example, we have created a schema which fails if either of 2 conditions is true: 1) the annotation uses both the body and the bodyValue key, or 2) the value of the bodyValue key is anything other than a single string (i.e., it is an object or an array). Do I need to break this schema into 2 schemas to check (and report) these errors separately? Along the same line, do we continue testing after an invalid value / structure is encountered?
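To make question 2 concrete, here is a minimal Python sketch of what splitting that composite schema into two separately reportable checks might look like. (The function names and the sample annotation are my own invention for illustration; a real test would express these as two JSON Schemas rather than Python functions.)

```python
# Two atomic checks, so each failure can be reported separately,
# instead of one composite schema that fails for either reason.

def check_no_body_bodyValue_conflict(anno):
    """Fails if an annotation uses both the body and the bodyValue key."""
    return not ("body" in anno and "bodyValue" in anno)

def check_bodyValue_is_single_string(anno):
    """Fails if bodyValue is present but is not a single string."""
    if "bodyValue" not in anno:
        return True  # nothing to check; presence is tested elsewhere
    return isinstance(anno["bodyValue"], str)

# Hypothetical annotation that violates the first rule but not the second:
anno = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "bodyValue": "a simple comment",
    "body": "http://example.org/resource1",
    "target": "http://example.org/page1",
}

print(check_no_body_bodyValue_conflict(anno))   # False: body and bodyValue coexist
print(check_bodyValue_is_single_string(anno))   # True: bodyValue itself is fine
```

With the checks split like this, the second question (do we continue after an invalid structure?) becomes a policy choice in the runner rather than something baked into the schema.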
 
3. Lastly, Rob's example does get more complicated quickly because both body and target (and many of our other defined objects / classes) can take multiple values or have alternatives. Like Shane, I think the test script logic can handle this, but I need to verify. So, to rewrite Rob's example slightly:
 
* Test that bodyValue key exists
    -- OnSuccess:
        * Test that value of bodyValue is a single string (not object, not array – or should an array of a single string item be allowed?)
            -- OnSuccess:
                * Test that the body key is not also present
                    -- OnFail: Report Error (body and bodyValue keys are not both allowed on a single Annotation)
                    -- OnSuccess: Report bodyValue feature successfully implemented & go on to next set of tests
            -- OnFail: Report Error (bodyValue can only be a single string)
    -- OnFail:
        * Test that body key exists
            -- OnFail: Warn (body is a SHOULD)
            -- OnSuccess:
                * Determine type of body
                    * Test for array
                        -- OnSuccess: [Test each item as below – using JSON schema we can only report kinds of items / errors found.]
                        -- OnFail:
                            * Test for string?
                                -- OnSuccess:
                                    * Test for format uri?
                                        -- OnSuccess: Report External Web Resource body feature successfully implemented & go on to next set of tests
                                        -- OnFail: Report Error (body cannot be a non-uri format string – use bodyValue to embed a simple string)
                                -- OnFail:
                                    * Test for object?
                                        -- OnSuccess:
                                            * Test for Specific Resource?
                                                -- OnSuccess: [Descend into SpecificResource tests]
                                                -- OnFail:
                                                    * Test for TextualBody?
                                                        -- OnSuccess: [Descend into TextualBody tests]
                                                        -- OnFail:
                                                            * Test for Choice, List, Composite, Independents (do we need to separate these?)
                                                                -- OnSuccess: [Descend into Choice, List, Composite, Independents checks]
                                                                -- OnFail: Report Error (body not of any allowed class)
                                 
Or something like this (I may have missed a scenario). I think this illustration is a little more comprehensive for the body-testing scenario, and it better shows the complexity of writing the test scripts and how granular the schemas will need to be. It makes for very simple but very granular schemas and more complicated, less granular test scripts (unless, of course, one test script can be invoked from another – modularity?).
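For what it's worth, the runner for a tree like this can be quite small. Here is a minimal Python sketch of one way to walk nested OnSuccess / OnFail tests; the node layout, key names, and messages are my own invention for illustration, not the actual test syntax, and the checks stand in for granular schema validations:

```python
# Walk a nested test tree: each node has a "check" plus optional
# onSuccess / onFail branches. A string branch is a leaf report;
# a dict branch descends into the next test.

def run(node, anno, report):
    ok = node["check"](anno)
    branch = node.get("onSuccess" if ok else "onFail")
    if isinstance(branch, str):        # leaf: record the outcome
        report.append(branch)
    elif isinstance(branch, dict):     # descend into the next test
        run(branch, anno, report)
    return ok

# The top of the bodyValue portion of the tree above:
tree = {
    "check": lambda a: "bodyValue" in a,
    "onSuccess": {
        "check": lambda a: isinstance(a.get("bodyValue"), str),
        "onSuccess": {
            "check": lambda a: "body" not in a,
            "onSuccess": "bodyValue feature successfully implemented",
            "onFail": "Error: body and bodyValue not both allowed",
        },
        "onFail": "Error: bodyValue can only be a single string",
    },
    "onFail": "Warn: no bodyValue; test body next",
}

report = []
run(tree, {"bodyValue": "comment"}, report)
print(report)  # ['bodyValue feature successfully implemented']
```

The modularity question then comes down to whether a branch can name another tree (or schema) by reference instead of embedding it inline.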
 
Thanks,
 
Tim Cole 
 
 
From: Robert Sanderson [mailto:azaroth42@gmail.com] 
Sent: Wednesday, July 13, 2016 10:22 AM
To: Shane McCarron <shane@spec-ops.io>
Cc: Web Annotation <public-annotation@w3.org>
Subject: Re: [Testing] Alternative approach to the test manifest structure
 
Thanks, Shane. I hadn't found the OR syntax for the test (as opposed to within the schema itself).
 
Could you (or someone) give an example of how the structured tests might work, as I must have missed it in the docs and current set of tests?  In English, what I want to do is:
 
* Test whether there's a body property or not
  * If there is, test if it's a JSON string.
    * If it is, test that it's a URI
  * If there is, test if it's a JSON object.
    * If it is, test if it's a TextualBody
      * If it is, test whether there's a value property or not
        * If there is, test that it's a string
      * ...
    * If it is, test if it's a SpecificResource
      * ...
* Test whether there's a bodyValue property or not
  * If there is, test if it's a JSON string
* Otherwise raise a warning that there's no body
 
Where each of those is a separate schema, so they can be reused (e.g. value is used on many sorts of resources)
 
Many thanks!
 
Rob
 
 
On Tue, Jul 12, 2016 at 4:13 PM, Shane McCarron <shane@spec-ops.io <mailto:shane@spec-ops.io> > wrote:
Umm... I am not clear what problem you are trying to solve. Regardless, you can do what you express with the current syntax (which is not a manifest). Each .test file expresses one or more assertions that will be evaluated. You can have "or" clauses in the declarative syntax, so you can express that this OR this OR this OR this needs to be true in order to satisfy the requirements of the test.
 
On Tue, Jul 12, 2016 at 6:01 PM, Robert Sanderson <azaroth42@gmail.com <mailto:azaroth42@gmail.com> > wrote:
 
All,
 
After playing around with the schemas over the weekend, with a view to integrating them into my server implementation to validate the incoming annotations, I ran into some issues:
 
* The tests are designed for humans to read the error message, not for machines to process the results ... some tests are okay to fail validation, some aren't.
 
* There doesn't seem to be a way to descend into the referenced resources automatically.  You need to run the specific resource tests against the specific resource by hand.
 
* The processing pattern of a single test with break or skip seems like it could be extended ...
 
 
So what do people think about the following, if it's not too late to change things:
 
* Continue with atomic tests for presence of a property, and then a separate one for the value of it
* But do that in the framework testing "manifest" by testing for failure/success of the validation.
 
For example, an automated system could descend into a SpecificResource as the body by:
 
* Test that body exists
  -- OnFail:  Warn (body is a SHOULD)
  -- OnSuccess:  
      * Determine type of body
          -- uri?  OnSuccess:  Test URI-ness & goto next set of tests
                      OnFail: SpecificResource?
                              OnSuccess:  Descend into SpecificResource tests
                              OnFail: TextualBody?
 
And so forth. 
The success/fail structure would use the same $ref approach as the schema includes, and be kept separate from the schema itself, so it can be reused in different parts of the overall set of tests.
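For illustration only, a manifest entry along these lines might be shaped like this. Every key and file name below is invented to show the idea of $ref-style reuse of atomic schemas, not actual syntax:

```python
# Hypothetical success/fail manifest entry: each "assert" names an
# atomic schema file, and branches either report, descend by
# reference into another set of tests, or continue testing.
import json

manifest_entry = {
    "assert": "bodyFound.json",                # atomic schema: body key exists
    "onFail": {"warn": "body is a SHOULD"},
    "onSuccess": {
        "assert": "bodyIsUri.json",            # atomic schema: body is a uri
        "onSuccess": "nextTests.json",         # goto next set of tests
        "onFail": {
            "assert": "specificResource.json",
            "onSuccess": "specificResourceTests.json",  # descend by reference
            "onFail": {"assert": "textualBody.json"},
        },
    },
}
print(json.dumps(manifest_entry, indent=2))
```

Because the branches refer to other files rather than embedding them, the same atomic schema (or sub-tree of tests) can appear in several places in the overall suite.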
 
This would let us compose features at whatever level we think is appropriate for reporting, and give a good validation suite for the model that can be used completely programmatically by implementing the success/fail runner.
[I did this runner already in python, it's a pretty easy piece of code, as one might expect]
 
Thoughts?
 
Rob
 
-- 
Rob Sanderson
Semantic Architect
The Getty Trust
Los Angeles, CA 90049



 
-- 
Shane McCarron
Projects Manager, Spec-Ops



 
-- 
Rob Sanderson
Semantic Architect
The Getty Trust
Los Angeles, CA 90049

Received on Wednesday, 13 July 2016 15:59:57 UTC