Re: [Testing] Alternative approach to the test manifest structure

The .test file syntax is defined at [1] (note that we are now living in
the formal web-platform-tests repository; yay!). The syntax defines a
top-level "assertions" list.  This list can, of course, contain just a
single entry, where that entry is another list, a URI, an Assertion
Object, or a Condition Object (a Condition Object is a specialization of
an Assertion Object). An example of an "or" test is given in the document
referenced at [1], in the section on Condition Objects:

  "assertions": [
    { "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "must have context or id",
      "description": "A more complex example that allows one of many
options to pass",
      "assertions": [
      { "title": "Condition Object",
        "description": "A pseudo-test that will get a result from the
aggregate of its children",
        "assertionType": "must",
        "expectedResult": "valid",
        "errorMessage": "Error: None of the various options were present",
        "compareWith": "or",
        "assertions": [
          "common/has_context.json",
          "common/has_id.json"
        ]
      }
      ]
    }
    ]

In this example, there is a single "assertion" that has an "or" list of
assertions within it.  Such a list permits the "or-ing" of the results of
the embedded assertions - in this case, that there is a context or there
is an id.
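
For contrast, the top-level list can also be as simple as one or more
plain references to external files - a minimal sketch, reusing the file
names from the example above:

  "assertions": [
    "common/has_context.json",
    "common/has_id.json"
  ]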

So, to get back to your request, you could write that embedded Condition
Object for your case as:

"assertionType": "must",
"errorMessage": "The annotation had neither a well formed body nor a well
formed bodyValue property",
"compareWith": "or",
"assertions": [
  { "title": "Has Body",
    "description": "The annotation has a well formatted body property",
    "assertions": [
      "hasBody.json"
    ]
  },
  { "title": "Has BodyValue",
    "description": "The annotation has a well formatted bodyValue property",
    "assertions": [
      "hasBodyValue.json"
    ]
  }
]


So it would evaluate those two things in an OR context and, if neither
passed, report the errorMessage.
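
Spelled out as a complete top-level entry (mirroring the layout of the
Condition Object example above; the titles and descriptions are just
illustrative, and I have not run this), it might look something like:

  "assertions": [
    { "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "must have body or bodyValue",
      "description": "The annotation has a well formed body or a well
        formed bodyValue property",
      "assertions": [
        { "title": "Condition Object",
          "description": "A pseudo-test that will get a result from the
            aggregate of its children",
          "assertionType": "must",
          "expectedResult": "valid",
          "errorMessage": "The annotation had neither a well formed body
            nor a well formed bodyValue property",
          "compareWith": "or",
          "assertions": [
            { "title": "Has Body",
              "description": "The annotation has a well formatted body
                property",
              "assertions": [
                "hasBody.json"
              ]
            },
            { "title": "Has BodyValue",
              "description": "The annotation has a well formatted bodyValue
                property",
              "assertions": [
                "hasBodyValue.json"
              ]
            }
          ]
        }
      ]
    }
  ]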

The external files hasBody.json and hasBodyValue.json would contain the
other logic:

hasBody.json:

{ "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "is the Body property well formed",
  "description": "Does the body property contain either a URI, a JSON
Object that is a textual body, or a JSON Object that is a specific
property",
   "compareWith": "or",
   "assertions": [
     ...
    ]
}
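
hasBodyValue.json could then be a plain JSON Schema.  Just a sketch - and
since, per your outline, the presence of bodyValue would be a separate
assertion, this one only checks that the value is a string:

{ "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "is the bodyValue property well formed",
  "description": "Does the bodyValue property contain a JSON string",
  "type": "object",
  "properties": {
    "bodyValue": { "type": "string" }
  }
}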

You get the idea.  I can probably create a working example.  However, this
is a pretty complicated case.  Maybe we could start with something a little
simpler to demonstrate that it does what you want and then work our way up
to this?

[1]
https://github.com/w3c/web-platform-tests/blob/master/annotation-model/CONTRIBUTING.md

On Wed, Jul 13, 2016 at 10:21 AM, Robert Sanderson <azaroth42@gmail.com>
wrote:

> Thanks Shane, I didn't find the OR syntax for the test (as opposed to
> within the schema itself).
>
> Could you (or someone) give an example of how the structured tests might
> work, as I must have missed it in the docs and current set of tests?  In
> English, what I want to do is:
>
> * Test whether there's a body property or not
>   * If there is, test if it's a JSON string.
>     * If it is, test that it's a URI
>   * If there is, test if it's a JSON object.
>     * If it is, test if it's a TextualBody
>       * If it is, test whether there's a value property or not
>         * If there is, test that it's a string
>       * ...
>     * If it is, test if it's a SpecificResource
>       * ...
> * Test whether there's a bodyValue property or not
>   * If there is, test if it's a JSON string
> * Otherwise raise a warning that there's no body
>
> Where each of those is a separate schema, so they can be reused (e.g.
> value is used on many sorts of resources)
>
> Many thanks!
>
> Rob
>
>
> On Tue, Jul 12, 2016 at 4:13 PM, Shane McCarron <shane@spec-ops.io> wrote:
>
>> Umm... I am not clear what problem you are trying to solve.  Regardless,
>> you can do what you express with the current syntax (which is not a
>> manifest).  Each .test file expresses one or more assertions that will be
>> evaluated.  You can have "or" clauses in the declarative syntax, so you
>> can say this OR this OR this OR this needs to be true in order to satisfy
>> the requirements of the test.
>>
>> On Tue, Jul 12, 2016 at 6:01 PM, Robert Sanderson <azaroth42@gmail.com>
>> wrote:
>>
>>>
>>> All,
>>>
>>> After playing around with the schemas over the weekend, with a view to
>>> integrating them into my server implementation to validate the incoming
>>> annotations, I ran into some issues:
>>>
>>> * The tests are designed for humans to read the error message, not for
>>> machines to process the results ... some tests are okay to fail validation,
>>> some aren't.
>>>
>>> * There doesn't seem to be a way to descend into the referenced
>>> resources automatically.  You need to run the specific resource tests
>>> against the specific resource by hand.
>>>
>>> * The processing patterns of the single with break or skip seem like they
>>> could be extended ...
>>>
>>>
>>> So what do people think about the following, if it's not too late to
>>> change things:
>>>
>>> * Continue with atomic tests for presence of a property, and then a
>>> separate one for the value of it
>>> * But do that in the framework testing "manifest" by testing for
>>> failure/success of the validation.
>>>
>>> For example, an automated system could descend into a SpecificResource
>>> as the body by:
>>>
>>> * Test that body exists
>>>   -- OnFail:  Warn (body is a SHOULD)
>>>   -- OnSuccess:
>>>       * Determine type of body
>>>           -- uri?  OnSuccess:  Test URI-ness & goto next set of tests
>>>                       OnFail: SpecificResource?
>>>                               OnSuccess:  Descend into SpecificResource tests
>>>                               OnFail: TextualBody?
>>>
>>> And so forth.
>>> The success/fail would be by the same $ref approach as the schema
>>> includes, and offset from the schema itself, so it can be reused in
>>> different parts of the overall set of tests.
>>>
>>> This would let us compose features at whatever level we think is
>>> appropriate for reporting, and give a good validation suite for the model
>>> that can be used completely programmatically by implementing the
>>> success/fail runner.
>>> [I did this runner already in python, it's a pretty easy piece of code,
>>> as one might expect]
>>>
>>> Thoughts?
>>>
>>> Rob
>>>
>>> --
>>> Rob Sanderson
>>> Semantic Architect
>>> The Getty Trust
>>> Los Angeles, CA 90049
>>>
>>
>>
>>
>> --
>> Shane McCarron
>> Projects Manager, Spec-Ops
>>
>
>
>
> --
> Rob Sanderson
> Semantic Architect
> The Getty Trust
> Los Angeles, CA 90049
>



-- 
Shane McCarron
Projects Manager, Spec-Ops

Received on Wednesday, 13 July 2016 18:43:07 UTC