Re: test cases -> running tests

* Olivier Thereaux wrote:
>Bjoern has done a pretty good job of laying out the basis of what we 
>could run for an automated test suite of the validator, but we still 
>need a mechanism to link or include the test cases themselves into this 
>test system.

With the outlined framework there would not really be "test cases" but
rather "tests", where a "test" is generally Perl code, e.g.

  my $p = SGML::Parser::OpenSP->new;
  isa_ok($p, 'SGML::Parser::OpenSP');

isa_ok(...) is a "test" as it tests for a certain condition. I would not
manage this "test" in an RDF triple store, for example, but rather have a
test script with exactly that code. The design depends on the conditions
to be tested. For example, we had a bug at some point where the why.html
document had errors; a regression test to prevent this error from
recurring would require retrieving why.html from the web server, so the
"test case" (the test input) would be auto-generated when running the
test; it does not exist per se.
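To illustrate, such a regression test might look roughly like the sketch
below. The `fetch_why_html` routine and the canned document are made-up
stand-ins; a real test would fetch the live document, e.g. with
LWP::Simple, from the running server:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: the test input is produced at run time; no
# why.html file is stored alongside the test script.
sub fetch_why_html {
    # A real test might do: LWP::Simple::get('http://localhost/why.html')
    # Here we return a canned document so the sketch is self-contained.
    return "<!DOCTYPE html><title>Why Validate?</title>";
}

my $input = fetch_why_html();
print defined $input && length $input
    ? "have test input\n"
    : "no test input\n";
```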

The same goes for a test of whether error messages include verbose
explanations, for which the test input would also be dynamically
generated (and there might be further input data from which to generate
the test input, i.e. some Validator input document that has errors).
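Generating such an erroneous input on the fly could be sketched as
follows; the element name and the final check are invented for
illustration, and the real test would of course run the Validator over
the generated document and inspect its messages:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: build a deliberately invalid document at run
# time instead of keeping an "invalid.html" test file around.
my $bogus = "bogus" . int(rand(1000));   # undeclared element name
my $input = "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01//EN\">\n"
          . "<title>test</title><$bogus></$bogus>\n";

# The real test would feed $input to the Validator and check that each
# resulting error message carries a verbose explanation.
print $input =~ /<\Q$bogus\E>/ ? "input generated\n" : "generation failed\n";
```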

An example of such input data would be the following tests (which are
arguably bad ones: if the second test fails, it is probably a problem
with OpenSP rather than SGML::Parser::OpenSP, as OpenSP controls the
default behavior, but let's ignore that for a second).

  sub TestHandler2::new         { bless { ok1 => 0 }, shift }
  sub TestHandler2::comment     { $_[0]->{ok1}-- }  # handler gets $self first
  my $h2 = TestHandler2->new;
  isa_ok($h2, 'TestHandler2');
  $p->handler($h2);   # attach the handler to the parser
  lives_ok { $p->parse("<LITERAL><no-doctype><!--...--></no-doctype>") }
    'document with comment parses';
  is($h2->{ok1}, 0, 'comments not reported by default');

This includes the input data directly in the code: no metadata, no
organization, no management, no re-use, ... so it would seem you would
consider this much worse than the worst of your proposed solutions. My
solution here is very cheap: most of the code is copied and pasted, and
I did not have to change the MANIFEST files, add additional code to
locate the document using File::Spec, etc. There does not seem to be
much, if any, benefit arising from a more sophisticated solution, so if
you filed a feature request to change the test I would likely reject
it :-)

With re-use, other factors also have to be considered. For example, if
we add full support for XML 1.0/1.1, XML Namespaces, etc., either via a
filter on top of SGML::Parser::OpenSP or by using a different
XML processor, we would mostly rely on these tools to work properly,
and if we want more testing on top of that we would probably re-use
existing test suites such as <>; there would
be no point in duplicating these efforts. What we might do is write
a driver that takes the up-to-date test suite and tests the Validator,
but no more than that.
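Such a driver could be as simple as a loop over the suite's manifest.
Everything below (the case list, the `validate` stub, the verdict
strings) is a made-up stand-in for whatever the real suite and the
Validator interface actually provide:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical driver sketch: walk an external test suite and compare
# the Validator's verdict against the suite's expected verdict.
my @cases = (
    { uri => 'valid/001.xml',   expect => 'valid'   },
    { uri => 'invalid/002.xml', expect => 'invalid' },
);

# Stand-in for invoking the actual Validator on a test-suite document.
sub validate {
    my $uri = shift;
    return $uri =~ m{^invalid/} ? 'invalid' : 'valid';
}

my $failed = 0;
for my $case (@cases) {
    my $got = validate($case->{uri});
    $failed++ unless $got eq $case->{expect};
}
print $failed ? "$failed case(s) failed\n" : "all cases passed\n";
```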

There are many other things to consider here. For example, it makes a
difference whether the Validator reads an input document from the
local file system or fetches it via HTTP, FTP, etc., as FTP and file
systems usually do not have Content-Type headers while HTTP responses
often do. A test of whether encoding detection properly considers a
charset parameter in the Content-Type header would not work from an
input file alone, for example.
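A minimal sketch of the HTTP side of that distinction: extracting the
charset parameter from a Content-Type header, something that has no
analogue when reading a local file (the header value here is a made-up
example, not one from the Validator's code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An HTTP response may carry a Content-Type header with a charset
# parameter; a local file offers nothing comparable.
my $content_type = 'text/html; charset=iso-8859-1';   # made-up example
my ($charset) = $content_type =~ /charset="?([^\s;"]+)"?/i;

print defined $charset ? "charset: $charset\n" : "no charset parameter\n";
```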

>We have quite a lot of flexibility in this regard, given how we have 
>not yet made any decision on how the test cases would be managed and 

So, in essence, without knowing which kind of "tests" you are thinking
about here, it is not possible to make a decision, as the requirements
vary greatly.

Received on Thursday, 21 October 2004 15:21:53 UTC