Re: [Tidy-dev] Re: JTidy update

At 11:56 PM -0500 2/5/02, Russell Gold wrote:
>At 12:14 PM -0500 2/5/02, Charles Reitzel wrote:
>>Applying the 80/20 rule, you'd get a lot of mileage by just re-using the
>>test input, config and output files.  These are all available by
>>anonymous CVS.
>>
>>I'd go further: it would benefit all of us to keep a single test case
>>repository.  If anyone on the JTidy crew wants to add to the test
>>cases, let us know.  Worst case, just send the files to the
>>tidy-develop list and I'll drop 'em in.  Going forward, I don't see a
>>problem adding you guys as committers.
>
>Can you explain how the tests work? It looks as though you are passing the
>input files through Tidy and comparing them to something, but there are
>not nearly as many output files as input files...
>

Personally, I generally haven't been uploading output files.

Here are some comments I made to the "tidy-dev" mailing list a few days ago:

>Giving recently fixed bugs (or implemented enhancements) a distinct
>status gives us a test plan of sorts.  When checking in a bug fix, it
>is project practice to check in a test input file.  Test config and
>output files are good, too.  The script testone.sh in /tidy/test in
>CVS can be ...

I will always check in a config file when the default options are not used
(FYI, the "cfg_default.txt" config file is used by the test scripts for the
default case). I have never checked in an output file (probably because
until we started using "cfg_default.txt" we each may have used different
default options). Charlie and I have also discussed comparing error output,
but I probably wouldn't check in error output files.

>If you want to assess the impact of a change, it is relatively
>simple: testall.sh places all of its output into /tidy/test/tmp.  To
>get a baseline, run testall.sh using a tidy executable from the
>project page.  Then rename the directory /tidy/test/tmp to, for
>example, /tidy/test/baseline.  Finally, run testall.sh again with your
>updated executable.  A simple diff command on the two directories will
>quickly highlight any changes in output.  The whole process takes just
>a minute or two.
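
If you would rather script that last comparison step than run diff by
hand, a rough Java sketch of it might look like the one below. The
directory names come from the steps above; the class name and the
report format are just made up for illustration.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;

// Illustration only: compare each file in the baseline directory with
// the file of the same name in the new output directory and report
// anything that is missing or has changed.
public class CompareTestOutput {
    public static void main(String[] args) throws IOException {
        File baseline = new File("/tidy/test/baseline"); // released tidy
        File current = new File("/tidy/test/tmp");       // updated tidy

        for (File oldFile : baseline.listFiles()) {
            File newFile = new File(current, oldFile.getName());
            if (!newFile.exists()) {
                System.out.println("MISSING  " + newFile);
            } else if (!Arrays.equals(Files.readAllBytes(oldFile.toPath()),
                                      Files.readAllBytes(newFile.toPath()))) {
                System.out.println("CHANGED  " + oldFile.getName());
            }
        }
    }
}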

My process is a little different (mainly because of platform differences).
My test script can diff the output, diff the error output, and
optionally update the saved output/error output files for the next
run. I also have a couple of special-case conditions, for example for
bugs involving "-slides".
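
For what it's worth, the compare-or-update part of that boils down to
something like the sketch below. This is not my actual script (which
is platform-specific); the file names and the UPDATE_SAVED flag are
invented for the example, and the same check is applied to the error
output file.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;

// Sketch of a compare-or-update check for one test case: diff the new
// output against the saved copy, and optionally replace the saved copy
// so the next run compares against the newly accepted output.
public class CheckOneCase {
    // Set to true once the new output has been reviewed and accepted.
    static final boolean UPDATE_SAVED = false;

    public static void main(String[] args) throws IOException {
        check(Paths.get("tmp/sample.html"), Paths.get("saved/sample.html"));
        check(Paths.get("tmp/sample.err"), Paths.get("saved/sample.err"));
    }

    static void check(Path fresh, Path saved) throws IOException {
        boolean same = Files.exists(saved)
                && Arrays.equals(Files.readAllBytes(fresh),
                                 Files.readAllBytes(saved));
        if (same) {
            System.out.println("PASS     " + fresh.getFileName());
        } else if (UPDATE_SAVED) {
            Files.copy(fresh, saved, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("UPDATED  " + saved);
        } else {
            System.out.println("DIFFERS  " + fresh.getFileName());
        }
    }
}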

>A further complication is that JTidy does not simply correct input files.
>It also outputs an XML DOM - and the tests need to verify that as well.  I
>don't see how the Tidy tests do that.
>--

As Gary Peskin said, the Tidy test scripts don't do that. But I imagine
you guys could come up with a way to test it, such as feeding the DOM
back through JTidy to get output that could be compared against the
normal output.
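
One possible shape for that kind of check is sketched below. It
assumes the parse(), parseDOM() and pprint() methods on
org.w3c.tidy.Tidy work the way I'd expect, so treat it as something to
check against your own tree rather than a tested program.

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

import org.w3c.dom.Document;
import org.w3c.tidy.Tidy;

// Sketch of the "feed the DOM back through JTidy" idea: tidy the file
// directly, then build the DOM and pretty-print the DOM itself, and
// compare the two results.  If the DOM faithfully reflects the
// corrected document, the two outputs should match.
public class DomRoundTripCheck {
    public static void main(String[] args) throws IOException {
        String inputFile = args[0];

        // Pass 1: normal corrected output.
        Tidy first = new Tidy();
        ByteArrayOutputStream direct = new ByteArrayOutputStream();
        first.parse(new FileInputStream(inputFile), direct);

        // Pass 2: build the DOM, then pretty-print the DOM.
        Tidy second = new Tidy();
        Document dom = second.parseDOM(new FileInputStream(inputFile), null);
        ByteArrayOutputStream viaDom = new ByteArrayOutputStream();
        second.pprint(dom, viaDom);

        boolean same = Arrays.equals(direct.toByteArray(),
                                     viaDom.toByteArray());
        System.out.println(same ? "DOM round trip: OK"
                                : "DOM round trip: output differs");
    }
}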

Regards, Terry

Received on Wednesday, 6 February 2002 03:34:04 UTC