Re: [Testing] Alternative approach to the test manifest structure

Umm... I am not clear what problem you are trying to solve.  Regardless,
you can do what you express with the current syntax (which is not a
manifest).  Each .test file expresses one or more assertions that will be
evaluated.  You can have "or" clauses in the declarative syntax, so you can
say that this OR this OR this OR this needs to be true in order to satisfy
the requirements of the test.
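
For illustration only (this is not the actual .test syntax, and every field
name here is hypothetical), an "or" assertion has roughly this shape:

    # Hypothetical shape only -- not the real .test syntax.
    assertion = {
        "assert": "anyOf",  # any one branch passing satisfies the test
        "branches": [
            {"path": "body", "type": "string"},        # body is a plain URI
            {"path": "body.id", "type": "string"},     # ...or a resource with an id
            {"path": "body.value", "type": "string"},  # ...or an embedded textual body
        ],
    }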

On Tue, Jul 12, 2016 at 6:01 PM, Robert Sanderson <azaroth42@gmail.com>
wrote:

>
> All,
>
> After playing around with the schemas over the weekend, with a view to
> integrating them into my server implementation to validate the incoming
> annotations, I ran into some issues:
>
> * The tests are designed for humans to read the error messages, not for
> machines to process the results ... some tests are okay to fail validation
> (SHOULDs), and some aren't (MUSTs).
>
> * There doesn't seem to be a way to descend into the referenced resources
> automatically.  You need to run the specific resource tests against the
> specific resource by hand.
>
> * The processing pattern of a single test with break or skip seems like it
> could be extended ...
>
>
> So what do people think about the following, if it's not too late to
> change things:
>
> * Continue with atomic tests for the presence of a property, and a
> separate one for its value
> * But compose them in the framework's testing "manifest" by branching on
> the failure/success of each validation.
>
> For example, an automated system could descend into a SpecificResource as
> the body by:
>
> * Test that body exists
>   -- OnFail:  Warn (body is a SHOULD)
>   -- OnSuccess:
>       * Determine type of body
>           -- uri?  OnSuccess: Test URI-ness & goto next set of tests
>                    OnFail:    SpecificResource?
>                               OnSuccess: Descend into SpecificResource tests
>                               OnFail:    TextualBody?
>
> And so forth.
> The success/fail branching would use the same $ref approach as the schema
> includes, and be kept separate from the schema itself, so each test can be
> reused in different parts of the overall set of tests.
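>
> To make that concrete, a manifest entry for the body walk above might look
> something like this (a sketch only; the file names and key names are
> hypothetical, written here as a Python literal):
>
>     body_tests = {
>         "$ref": "bodyExists.json",
>         "onFail": {"report": "warn"},  # body is a SHOULD, so only warn
>         "onSuccess": {
>             "$ref": "bodyIsUri.json",
>             "onSuccess": {"$ref": "uriTests.json"},
>             "onFail": {
>                 "$ref": "bodyIsSpecificResource.json",
>                 "onSuccess": {"$ref": "specificResourceTests.json"},
>                 "onFail": {"$ref": "bodyIsTextualBody.json"},
>             },
>         },
>     }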
>
> This would let us compose features at whatever level we think is
> appropriate for reporting, and give a good validation suite for the model
> that can be used completely programmatically by implementing the
> success/fail runner.
> [I have already written this runner in Python; it's a pretty easy piece of
> code, as one might expect.]
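>
> As a sketch, the core of such a runner might look like this (assuming the
> python-jsonschema library, with each $ref naming a schema file on disk;
> again, all names here are hypothetical):
>
>     import json
>     import jsonschema
>
>     def validate_ref(ref, annotation):
>         # Load the referenced schema and validate the annotation against it.
>         with open(ref) as f:
>             schema = json.load(f)
>         try:
>             jsonschema.validate(annotation, schema)
>             return True
>         except jsonschema.ValidationError:
>             return False
>
>     def run(node, annotation, results):
>         # Recursively walk a success/fail node, collecting results.
>         if "report" in node:  # leaf node, e.g. {"report": "warn"}
>             results.append(node["report"])
>             return
>         ok = validate_ref(node["$ref"], annotation)
>         results.append((node["$ref"], ok))
>         branch = node.get("onSuccess" if ok else "onFail")
>         if branch is not None:
>             run(branch, annotation, results)
>
> Running it over an incoming annotation is then just:
>
>     results = []
>     run(body_tests, annotation, results)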
>
> Thoughts?
>
> Rob
>
> --
> Rob Sanderson
> Semantic Architect
> The Getty Trust
> Los Angeles, CA 90049
>



-- 
Shane McCarron
Projects Manager, Spec-Ops

Received on Tuesday, 12 July 2016 23:14:06 UTC