Model testing - body and targets and suggestions for testing UI

New data model tests (i.e., checking body and target constraints) have
been added to the annotation-level tests already available on the testdev
server. These are ready to be migrated up to production, but as mentioned
last night we first need to get rid of the old, deprecated tests and JSON
files. I also have some feedback on the test script UI that would be nice
to address if not too difficult (see below). Where we are with the data
model tests:
Mandatory tests - all annotations should pass all assertions checked by
these 3 tests:
1. Annotation-level:
-manual.html  (14 assertions) 
2. Body-level:
ual.html  (16 assertions)
3. Target-level:
anual.html  (15 assertions) 
Recommended / Optional tests - most individual annotations will "fail" a
majority of these assertions:
1. Annotation-level:
nals-manual.html  (15 assertions)
2. Annotation-level Agents:
Optionals-manual.html  (16 assertions)
3. Body-level:
-manual.html  (28 assertions)
4. Target-level:
ls-manual.html  (25 assertions)
5. Body and Target-level Agents:
Optionals-manual.html  (16 assertions)
A. The text box for inputting the JSON-LD annotation and the Check JSON
button should appear on the form before the list of assertions being
checked. The list of assertions is of variable length and can be quite
long; when you have to paste in your annotation 8 times, it is better for
the input box to be higher on the form and always in the same spot.
B. Per discussion, the title of each assertion contains markdown. This
shows up formatted on the input form, but in the summary of results the
markdown is not being processed. Any way to fix this?
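As a minimal sketch of what processing the titles in the summary might look like: the assertion titles presumably only use a couple of inline markdown constructs, so even a small renderer would do. The function name and the assumption that titles use only backtick code spans and asterisk emphasis are mine, not the harness's; the cleaner fix is likely to reuse whatever renderer the input form already uses.

```typescript
// Hypothetical helper: render the inline markdown constructs an
// assertion title might contain (`code` spans and *emphasis*) into HTML
// before inserting the title into the results summary.
function renderAssertionTitle(title: string): string {
  return title
    .replace(/`([^`]+)`/g, "<code>$1</code>")
    .replace(/\*([^*]+)\*/g, "<em>$1</em>");
}
```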
C. When a test 'fails', the error message from the assertion is displayed
first (good), but then the AJV error message and test script trace are
concatenated onto it, which is not useful for the person submitting the
annotations for testing. Is there any way to suppress the AJV and test
harness trace messages (e.g., in hidden HTML), or to format them to be
less prominent?
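One possible shape for the "hidden HTML" option, sketched here with made-up function and class names (this is not the harness's actual API): keep the assertion message visible and tuck the AJV/trace output into a collapsed details element, so it is still available for debugging but not in the submitter's face.

```typescript
// Hypothetical sketch: render the human-readable assertion message
// prominently, and hide the AJV/test-harness trace behind a collapsed
// <details> element instead of concatenating it onto the message.
function formatFailure(assertionMsg: string, ajvTrace: string): string {
  return (
    `<p class="assertion-error">${assertionMsg}</p>` +
    `<details><summary>Validator trace</summary>` +
    `<pre>${ajvTrace}</pre></details>`
  );
}
```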
D. Most annotations will fail most recommended / optional assertions.
These assertions are intended to identify which features have been
implemented in any given annotation and do not really go to validation
per se, and relatively few individual annotations implement more than a
handful of optional features. Example 44 from our model only passes 3 out
of 25 target 'tests'. Is there any option on should and may assertions to
tone down the red small-caps FAIL message? Something in yellow, and maybe
a different word than FAIL? Alternatively, we may need to discuss
granularity further - I'll raise that in a separate post.
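To make the toning-down concrete, here is one way the result label could be chosen by assertion level. The level names, label text, and CSS class names are all assumptions for illustration, not anything the test harness currently defines.

```typescript
// Hypothetical sketch: keep the red FAIL for mandatory (MUST) assertions,
// but label unmet SHOULD/MAY assertions with a softer yellow badge.
type Level = "must" | "should" | "may";

function failureLabel(level: Level): { text: string; cssClass: string } {
  if (level === "must") {
    return { text: "FAIL", cssClass: "result-fail" }; // red, as now
  }
  // Optional feature simply not present in this annotation.
  return { text: "NOT IMPLEMENTED", cssClass: "result-warn" }; // yellow
}
```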
Any easy fixes that help ameliorate these issues before moving the latest
code into production would be appreciated.
Tim Cole

Received on Thursday, 1 September 2016 14:57:31 UTC