Re: Markup Validator Test Suite

From: olivier Thereaux <ot@w3.org>
Date: Fri, 15 Oct 2004 15:32:14 +0900
Message-Id: <F3FC0ABD-1E73-11D9-9B24-000A95E54002@w3.org>
To: QA Dev <public-qa-dev@w3.org>

On Oct 15, 2004, at 12:22, Bjoern Hoehrmann wrote:
> * olivier Thereaux wrote:
>> 6) [[Must evolve with the development of the validator, and be able to
>> test old versions as well as the "newest, latest, greatest"]] is more of
>> a problem for a T::B-based test suite, because it would not be able to
>> test versions prior to the switch to the appropriate architecture.
> I am not sure which "switch" you mean here and this is either not the
> case here or not relevant to a Test::Builder based test module.

I was talking about the switch from the current routine-based "check" 
to a modular validator.
As far as I can tell, Test::Builder is more (only?) suited to the 
latter.
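The distinction matters for testability. A rough sketch of the idea, in Python purely for illustration (the validator itself is Perl, and `validate()` here is an invented stand-in, not the real check script's interface):

```python
# Sketch: a monolithic CGI script is hard to unit-test because input
# handling, validation, and HTML output are interleaved. A modular
# design exposes a function returning structured results, which a
# Test::Builder-style harness can assert against directly.

def validate(document: str) -> dict:
    """Hypothetical modular entry point: returns machine-readable
    results instead of printing an HTML report."""
    errors = []
    if "<title>" not in document:
        errors.append("missing <title> element")
    return {"valid": not errors, "errors": errors}

# A test can now call the API directly, with no CGI round-trip:
result = validate("<html><head></head><body></body></html>")
assert result["valid"] is False
assert "missing <title> element" in result["errors"]
```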

> I am not sure which exactly as I do not really understand why it
> needs to be able to test outdated versions

In an ideal world, bugs would not exist. In a cheaper version of that 
world, bugs would appear in one version, be spotted immediately, fixed 
in CVS, the new code released, end of story. But in the real world, 
where you have trouble finding the origin of a bug, where bugs 
re-appear across versions, and where bugs take time to be discovered, 
I think it's quite useful that a test could be run not only on the 
latest dev version, but on the current release as well, and on the one 
before that.

> and how this would be implemented.

Well... that was exactly my point: a test suite embedded with the 
product itself would not provide such a feature. On the other hand, a 
more independent test system could be pointed at one instance of the 
code or another and compare results. That's what I did with my test 
catalogue: generating a list of documents to validate, giving that to 
the logvalidator to be run on 0-6 and head, and diffing the results. 
It did not find any difference, because our test catalogue isn't very 
large and because most of the merge bugs are UI-based, but had it 
found some, it would have saved us a lot of time.
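That "same catalogue, two instances, diff the results" harness might be sketched as follows (Python for illustration only; `validate_with()` is a fake stand-in for actually invoking a given version of the validator, and the URLs are invented):

```python
# Sketch of version-diffing: run every catalogued document through
# an old and a new instance of the validator, report only documents
# whose results differ.

def validate_with(version: str, url: str) -> str:
    # Stand-in: a real harness would invoke that version of the
    # validator on `url` and return its verdict/messages. Faked
    # here so the sketch is self-contained.
    fake_results = {
        ("0-6", "http://example.org/valid.html"): "Valid",
        ("head", "http://example.org/valid.html"): "Valid",
    }
    return fake_results.get((version, url), "Valid")

def diff_versions(catalogue, old="0-6", new="head"):
    """Return {url: (old_result, new_result)} for every document
    where the two versions disagree."""
    diffs = {}
    for url in catalogue:
        before, after = validate_with(old, url), validate_with(new, url)
        if before != after:
            diffs[url] = (before, after)
    return diffs

catalogue = ["http://example.org/valid.html"]
print(diff_versions(catalogue))  # an empty dict means no regressions
```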

> It seems that this might make some limited sense in some rare cases,
> but considering the effort required to make this useful it is nothing
> we should worry much about.

Perhaps it is not so important. I think it can be useful. But in any 
case I'd much rather worry about it now, at the requirements phase, 
than in 6 months.

> It would make more sense to require
> ("should") that the relevant test suite includes one or more tests
> for the specific item before the issue can be closed as fixed.
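That convention — no issue is closed as fixed until the suite contains a test for it — might look like the following sketch (Python for illustration; "bug 1234" and `strip_bom()` are invented examples, not real validator code or issue numbers):

```python
# Sketch: each fixed bug gets a regression test named after its
# issue, so the fix and its test land together and the bug cannot
# silently re-appear in a later version.

def strip_bom(text: str) -> str:
    """Toy function standing in for the code a bug fix touched:
    drop a leading UTF-8 byte-order mark before validation."""
    return text.lstrip("\ufeff")

def test_bug_1234_bom_stripped_before_validation():
    # Regression test for the (invented) issue number 1234.
    assert strip_bom("\ufeff<html>") == "<html>"
    assert strip_bom("<html>") == "<html>"  # no BOM: unchanged

test_bug_1234_bom_stripped_before_validation()
```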

I agree.

> Human-testing only makes sense for things that would be difficult or
> impossible to test using machine-based testing, this would be pretty
> much limited to "does this work in all relevant browsers". Here I am
> only concerned about machine-based testing.

And I am concerned about testing in general, which is not incompatible.

Machine-based testing will obviously be a large part of the validator's 
testing, and I do like the ideas around Test::Builder, e.g. as I wrote:
> This is not a bad idea, given that we're pretty committed to having a 
> platform in perl for a while anyway, and that Test::Builder is a good 
> mechanism to build and run test collections at the same time.
but that does not preclude me from keeping an eye on the bigger 
picture of test case contribution, management, and [human && machine] 
testing.


Received on Friday, 15 October 2004 06:32:22 UTC
