RE: placeholder test files

Shane-
 
I'm confused again, especially about the distinction between assertion files / schemas (*.json) and tests (*.test) and how we want to use them – realizing that there are multiple ways to skin a cat, but wanting to make sure we settle collectively on the same way. We're probably in the same ballpark, but your examples make me think I'm heading in the opposite direction from you.
 
So, Jacob, Janina, and I were going forward on the assumption that we needed individual assertion files (JSON schemas) for each of the properties defined in the Web Annotation model. Tests would reference these assertion files rather than embed schemas directly, and we would use the "onUnexpectedResult" values ("failAndSkip", "failAndContinue", "succeedAndSkip", "succeedAndContinue") to manage the flow of assertions checked, report error messages, etc. The advantage of not embedding the schemas in the tests is that the same keys appear on multiple objects defined in the model, so the assertion files can be reused.
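In other words, a test would carry only references plus flow control, roughly like this sketch (the "assertionFile" key and the file path are my assumptions about the harness syntax, not its actual format; only the "onUnexpectedResult" values come from what we've seen so far):

    {
      "name": "Textual Body implements the language feature",
      "description": "Sketch only: 'assertionFile' and the path are illustrative assumptions, not the harness's actual syntax.",
      "assertions": [
        {
          "assertionFile": "common/language.json",
          "onUnexpectedResult": "failAndContinue"
        }
      ]
    }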
 
Toward that end, we uploaded JSON schemas yesterday designed to detect and validate (to a limited extent) format, language, processingLanguage, textDirection, and id. The first four of these (variously SHOULD and MAY) can appear on Textual Bodies and on External Web Resource descriptions that appear as Bodies, Targets, SpecificResource Sources, or items in Choice, List, Composite, and Independents; id can appear on these and on additional objects.

We also uploaded a few schemas that can be used as assertions in a test to detect a Textual Body (using the presence of the value key in certain conditions [a MUST] rather than the type value, since the type value for Textual Bodies is only a SHOULD), as well as schemas designed to recognize the use of Choice, List, Composite, or Independents individually (we are working on more schemas for these and for Specific Resources). There were previously schemas uploaded to check for the presence and correctness of @context, id, "type": "Annotation", target(s), etc., though I'm not sure they're in the right place.

Our assumption was that other schema files and tests would then be written to leverage these individual assertion files (i.e., schemas).
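To give a concrete sense of the idea, one of these per-property assertion files might look roughly like the following sketch (illustrative only, not one of the actual uploaded files; the model allows language to be either a single string or an array of strings):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "language key is present and well-formed",
      "description": "Illustrative sketch only; the schemas we actually uploaded may differ in detail.",
      "type": "object",
      "required": ["language"],
      "properties": {
        "language": {
          "oneOf": [
            { "type": "string" },
            { "type": "array", "items": { "type": "string" } }
          ]
        }
      }
    }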
 
Given a purported Annotation, a test for Embedded Textual Body (if selected) would proceed something like this (a sketch of the assembled test file follows the numbered steps):
 
1. Reference an assertion file (schema) that checks all Annotation MUSTs (referencing the @context, id, … assertion files and providing error messages that enumerate failings against the Annotation MUSTs as appropriate).
     "onUnexpectedResult" : "failAndSkip" – the JSON submitted does not meet the MUST requirements to be an Annotation; no further testing.
2. Reference a schema to detect body(ies) of the Annotation that satisfy the MUST requirements for Embedded Textual Body.
     "onUnexpectedResult" : "failAndSkip" – the JSON submitted does not implement the Embedded Textual Body feature of the model; no further testing of Embedded Textual Body features.
 
3. Check whether the Embedded Textual Body has a language property; if yes, the Annotation's use of Textual Body implements the language feature for that kind of body.
     "onUnexpectedResult" : "failAndContinue" – warn that language (a SHOULD) was not implemented

4. Check whether the Embedded Textual Body has a format property; if yes, the Annotation's use of Textual Body implements the format feature.
     "onUnexpectedResult" : "failAndContinue" – warn that format (a SHOULD) was not implemented

5. Check whether the Embedded Textual Body has a processingLanguage property; if yes, the Annotation's use of Textual Body implements the processingLanguage feature.
     "onUnexpectedResult" : "failAndContinue" – inform that processingLanguage (a MAY) was not implemented

6. Check whether the Embedded Textual Body has a textDirection property; if yes, the Annotation's use of Textual Body implements the textDirection feature.
     "onUnexpectedResult" : "failAndContinue" – inform that textDirection (a MAY) was not implemented

7. Check whether the Embedded Textual Body has an id property; if yes, the Annotation's use of Textual Body implements the id feature.
     "onUnexpectedResult" : "failAndContinue" – inform that id (a MAY) was not implemented
 
Similar logic applies when checking for External Web Resources (EWRs) as bodies, with a separate test to see whether EWRs were implemented as targets. EWRs can also be SpecificResource sources. Textual Bodies and EWRs can be items in Choice, List, Composite, or Independents arrays.
 
The test logic gets longer for Bodies or Targets that are SpecificResources, Choice, List, Composite or Independents, but the basic flow looks to be similar. 
 
When body or target is an array of mixed body/target types, this approach can only report on items matching the pattern being checked – e.g., only the Embedded Textual Bodies in an array that mixes Textual Bodies and External Web Resources. So we might end up reporting that an annotation implemented Textual Bodies correctly while it also had bodies that did not match any valid body class. To avoid this, presumably, just as we will check all annotations for the Annotation MUST properties, we will also check that every item in the body/target array satisfies the requirements of one of the body/target classes. We already have a schema that pretty much does this.
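The schema we have is in the spirit of this sketch (the definition name and the $ref targets are made up for illustration):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "every body matches some recognized body class",
      "description": "Sketch only; definition names and $ref paths are illustrative.",
      "type": "object",
      "properties": {
        "body": {
          "oneOf": [
            { "$ref": "#/definitions/anyBody" },
            { "type": "array", "items": { "$ref": "#/definitions/anyBody" } }
          ]
        }
      },
      "definitions": {
        "anyBody": {
          "oneOf": [
            { "$ref": "textualBody.json" },
            { "$ref": "externalWebResource.json" },
            { "$ref": "specificResource.json" },
            { "$ref": "choice.json" }
          ]
        }
      }
    }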
 
Anyway, does this make sense?  I'm a little leery of uploading more schemas if I've got the wrong end of the stick. I'm also concerned that it will be easier to 'call' the check for Annotation MUSTs if it is a schema (assertion file) rather than a test file – but maybe that's worse for reporting purposes?
 
Basically, the difference from a week ago is that we're no longer doing separate checks for the presence of a key and then for its correctness using two different schemas (Rob, is this acceptable?); we're reporting on each key individually, and the only complete failure is if an Annotation fails its MUST requirements, or if the feature being tested is not detectable (either because it wasn't implemented or because it was implemented in a way that did not satisfy the MUSTs for that feature).
 
Let us know how best to proceed while not working at cross-purposes.
 
Thanks,
 
Tim Cole
 
 
From: Shane McCarron [mailto:shane@spec-ops.io] 
Sent: Wednesday, July 20, 2016 10:44 AM
To: Jacob Jett <jjett2@illinois.edu>
Cc: Cole, Timothy W <t-cole3@illinois.edu>; Maria Janina Sarol <mjsarol@illinois.edu>; W3C Public Annotation List <public-annotation@w3.org>; Discussions about Test Development <testdev@lists.spec-ops.io>
Subject: Re: placeholder test files
 
Further to this discussion and to help illustrate the point, I have added a couple of things to the WAT repo:
1.      annotations/requiredProperties.json is a single assertion that ensures all the required properties are present.  It can be referenced from every other test to ensure the basic shape is available (a sketch of what such a file might contain follows this list).
2.      bodiesTargets/3.2.4-textualBody.test shows the beginning of a test for textualBody.  It tests the basic requirements and needs to be extended with the rest of the optional properties, using the same pattern as the type property.
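A file like that presumably boils down to something in this vein (a sketch, not the actual contents of requiredProperties.json; the required set here is taken from the MUSTs mentioned above):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "required Annotation properties are present",
      "description": "Sketch only; not the literal contents of requiredProperties.json.",
      "type": "object",
      "required": ["@context", "id", "type", "target"]
    }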
 
On Wed, Jul 20, 2016 at 9:25 AM, Shane McCarron <shane@spec-ops.io> wrote:
Thanks for this.  Note that, in parallel, I added what I consider complete tests for a few things to the WAT tree yesterday as well.  These might be massaged to use your updated definitions... I just did the definitions inline. 
 
My recommendation would be that you try not to make the tests too sequential...  I will be the first to admit that I don't have all of the context of the last N years of developing this spec, but I assume that if there are multiple ways to express body (for example), then each of those ways is important. If that is the case, then:
1.      Have a separate test for each expression type (each feature)
2.      Make that test as simple as possible
3.      Don't worry about sequentially determining what to test.  You can test it all at once.
Such an architecture will make it very easy to identify which features are supported by multiple implementations.  For example, in the case of body and looking at your diagram, to test textualBody have an assertion that (a schema sketch follows this list):
*  requires body
*  requires body be an object
*  require body have a property of value 
*  require body NOT have a property of source nor type nor id
*  require the value property be a string (?)
*  then CHECK but do not fail if the optional components are not present (no requirements): language, format, processingLanguage, textDirection
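Expressed as a draft-04 schema, that assertion might look roughly like this sketch (names and structure illustrative, not a file from the repo):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "body is an embedded textual body",
      "description": "Sketch only, following the bullets above.",
      "type": "object",
      "required": ["body"],
      "properties": {
        "body": {
          "type": "object",
          "required": ["value"],
          "properties": {
            "value": { "type": "string" }
          },
          "not": {
            "anyOf": [
              { "required": ["source"] },
              { "required": ["type"] },
              { "required": ["id"] }
            ]
          }
        }
      }
    }

The optional components in the last bullet (language, format, processingLanguage, textDirection) would be separate, non-failing checks rather than part of this schema.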
 
That's it.  Have a test like this for EACH form of body.  The instructions for the test say to supply an annotation that is of the correct form for that type.  An implementation is tested by supplying one.  Rinse, repeat.
 
If you want to see an example of this, check out bodiesTargets/3.2.1-bodyValue.test.  That test simply asserts that there is a bodyValue that is a string, and that there is NOT a body property.
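The schema inside that test presumably amounts to something like this sketch (not the file's literal contents):

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "bodyValue is a string and body is absent",
      "description": "Sketch only; see bodiesTargets/3.2.1-bodyValue.test for the real thing.",
      "type": "object",
      "required": ["bodyValue"],
      "properties": {
        "bodyValue": { "type": "string" }
      },
      "not": { "required": ["body"] }
    }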
 
I do not personally think it is critical that each of these tests also ensure that all the required components of an annotation are present.  On the other hand, if they are not present, the test should not pass.  If people feel it is critical that those be included as well, it is easy enough.  I put up a test that does just that in annotations/3.1-model-musts.test.  The contents of that could be pulled out into a single .json file and put in /common, then referenced from each test as the first assertion.
 
 
On Wed, Jul 20, 2016 at 8:42 AM, Jacob Jett <jjett2@illinois.edu> wrote:
Hi Shane,
 
Tim, Janina, and I uploaded a number of new schema definitions for the various body objects yesterday. Our thinking is that the definitions would be used to test the body object and showcase exactly what the object is with respect to the Annotation vocabulary. 
 
We've also developed what we think is a good logical flow diagram for the test of the body object (see attached flow diagram). Although in retrospect we probably want to test the target key first, since if it's absent we already know that the annotation isn't valid or well-formed. We can reuse the attached flow diagram for target objects simply by eliminating the check for the value key (i.e., the TextualBody case) from the diagram.
 
Another thing to note about the uploaded definitions--we've adopted a standardized and simple scheme for titles and descriptions.
 
Tim will likely follow up this email later this afternoon with additional details. We will likely discuss some of this on Friday's call.
 
Regards,
 
Jacob
 
 
 


_____________________________________________________
Jacob Jett
Research Assistant
Center for Informatics Research in Science and Scholarship
School of Information Science
University of Illinois at Urbana-Champaign
501 E. Daniel Street, MC-493, Champaign, IL 61820-6211 USA
(217) 244-2164
jjett2@illinois.edu
 
On Sun, Jul 17, 2016 at 9:30 AM, Shane McCarron <shane@spec-ops.io> wrote:
So... it turns out that the prefix "stub-" is reserved in WPT.  I am not sure for what, but it is a special class of test files that can't currently be selected from the UI.  So when I create the empty test files that need populating, I am going to use the prefix "ph-" for placeholder.  
 
On a related note, the result of NOTRUN in WPT literally means that a test was created and then never run.  I am going to have the ph-* tests do this so they show up in our reporting, but... I am sure that is not what NOTRUN was intended for, and we might get some pushback from the powers that be.  Whoever they are.

 
-- 
Shane McCarron
Projects Manager, Spec-Ops
 



 

Received on Wednesday, 20 July 2016 21:38:25 UTC