Re: keeping "how to run GRDDL test cases" somewhat separate

Chimezie Ogbuji wrote:
> On Thu, 8 Mar 2007, Dan Connolly wrote:
>> In recent minutes, I see...
>>
>> "*ACTION:* Chime adding sentence to Test Case Doc specifying that 
>> local security policy must be set to none before running tests"
>>
>> I'm interested to see how that turns out; it's fine to note, in the 
>> section on notes
>> to implementors who might want to run the tests, that security policy
>> interacts with the ability to compute various GRDDL results. In 
>> particular,
>> I'd like us to work out the details of reporting, in EARL, "I didn't 
>> find
>> the GRDDL result in this test due to policy/configuration".
>
> What would be the trigger for this?
Various things; for example, in our GRDDL.py implementation, this code 
might trigger it:

        if self._zone and not addr.startswith(self._zone):
            raise IOError("%s outside policy zone %s" % (addr, self._zone))


> Is the GRDDL-aware agent which the test harness drives expected to 
> report its policy decisions?
Yes, since otherwise, people might misread the test results; they
might think that an agent failed a test when, in fact, it was just
configured to not find the relevant result.
>   That is the only way I can imagine this would trickle down to the 
> EARL report generated from a test run.
>
>> So far, all the stuff about running the tests is incidental and 
>> informal.
>> I don't want to add any RFC2119 MUST style stuff about how to test a
>> GRDDL-aware agent.
>
> You can't test functional requirements authoritatively without 'some' 
> control over your environment.
There's nothing authoritative about the way tests are run. What's
authoritative about the tests is what they say about the language,
not what they say about software. A GRDDL-aware agent isn't required
to be able to compare its output with a set of expected results, for
one thing. The whole test-running harness is separate/separable from
the tests.
>   This is especially the case with GRDDL as we have *many* factors 
> which introduce ambiguity: faithful-infoset considerations, security 
> policies, user interaction, and client 'capabilities'.  Some of which 
> have direct consequence on our GRDDL-aware agent compliance label.
I don't see a direct consequence. Perhaps you could elaborate, or sketch 
a test where
the consequence is observable?
>
> Without guidance on how to minimize the ambiguity, you drastically
> reduce the usefulness of having a test harness for GRDDL in the
> first place.

That's not my experience.

>
>> In particular, I don't think we should advocate
>> making it *possible* to set the security policy of a GRDDL-aware
>> agent to "none".
>
> I don't see this as 'advocating' the use of toggling security 
> policies, but as guidance for implementors who sincerely wish to test 
> compliance with respect to the 'GRDDL-aware agent' label.

"guide" and "advocate" mean pretty much the same thing;
I don't want us to guide implementors into making it possible to turn 
all security features off.

>
> I consider our mention of security policies and their effect on what 
> GRDDL results are computed as an 'untested hook' until we have a 
> testable mechanism to demonstrate expected behavior irrespective of 
> security policies.

Yes, we should test those hooks; e.g. an extremely naive implementation
might be silly enough to look at
<link rel="transformation" href="file:/sbin/shutdown" />
and actually run /sbin/shutdown; our spec advises against that:

"Some transformation language implementations may provide facilities for 
loading and executing other programming language code. [...] Designers 
of GRDDL transformations are advised against making use of such 
features. Besides being implementation-specific, they are more likely to 
be unavailable in secure implementations of the transformation language. 
The use of such operators in software executing GRDDL transformations 
should protect against such operators in case they are encountered."
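
One way a conservative agent might guard against that sort of link is
to refuse to dereference transformation hrefs whose URI scheme falls
outside an allow-list, so a file: link is never fetched, let alone
executed. This is only a sketch of one possible policy; the function
and the allow-list are mine, not the spec's:

```python
# Illustrative policy sketch (not from the GRDDL spec): only fetch
# transformations over http(s), rejecting file:, ftp:, etc.
from urllib.parse import urlsplit

SAFE_SCHEMES = {"http", "https"}

def transformation_allowed(href):
    # urlsplit("file:/sbin/shutdown").scheme == "file"
    scheme = urlsplit(href).scheme.lower()
    return scheme in SAFE_SCHEMES

print(transformation_allowed("http://example.org/extract.xsl"))  # True
print(transformation_allowed("file:/sbin/shutdown"))             # False
```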

Jeremy has a few tests like that in his jena/grddl project; I hope
the WG can incorporate those into our test materials in due course.
>
>> Maybe having the "how to run tests" stuff in the same document
>> is too confusing.
>
> I don't think so.  I think a paragraph will do and is best located in 
> the test document. Where else would you put information on how to run 
> tests other than in the document describing the tests themselves?

Are you assuming that the only purpose of the tests is to aid in 
development and evaluation of software?

One purpose of the tests is to clarify the specification of the
language. The tests can serve this purpose even if no one ever runs
them.

On the other hand, there are documents and documents... our test
materials are currently split across 3 web pages; if you consider
those 3 web pages to be one document in 3 parts, I'm just thinking
out loud about making it 4 parts.

-- 
Dan Connolly, W3C http://www.w3.org/People/Connolly/

Received on Tuesday, 13 March 2007 22:41:07 UTC