RE: RSS 1.0: problems with feed, validator, CPAN module or specification?

Hi Olivier
   Just returning to the general QA issue:

> What do *you* think went wrong? I'm thinking everything went rather
> well: a small bug in the implementation of a slightly faulty 
> specification was found, reported, fixed, and added to a 
> regression test suite. Hopefully the small problems in the 
> spec will also get fixed soon.

Yes, I reported a small bug which was quickly fixed.  That's fine - but, to
me, that is quality *control* and not quality *assurance*.
 
When I got involved with a Web QA project (see
http://www.ukoln.ac.uk/qa-focus/) we decided that QA required documented
*policies* (e.g. "the Web site should be XHTML compliant") which are
nevertheless pragmatic ("with the exception of HTML generated by 3rd party
tools over which we have no control"). Such policies need to be
complemented by systematic *procedures* which ensure the policies are being
implemented (e.g. "authors validate after each update, with periodic batch
checks and use of W3C's Web log validator"), with documented audit trails
to spot deviations from the policies (e.g. "a new person was appointed who
needed training") and documentation of the lessons learnt (e.g. "the
validator gave incorrect results" :-) - which can actually happen with
validators that fail to spot character encoding problems, for example).
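To make the "periodic batch checks" procedure concrete, here is a minimal
sketch in Python. It is only illustrative: the page list is hypothetical,
and I am assuming the W3C Markup Validator's check interface, which takes a
?uri= parameter and reports its verdict in the X-W3C-Validator-Status
response header. It also compares the charset in the HTTP Content-Type
header with the one declared in the markup - exactly the kind of encoding
mismatch a markup validator may not spot:

import urllib.parse
import urllib.request

VALIDATOR = "http://validator.w3.org/check"

# Hypothetical list of pages covered by the QA policy.
PAGES = [
    "http://www.ukoln.ac.uk/qa-focus/",
]

def validate(page_url):
    """Ask the W3C Markup Validator about one page; return its status header."""
    query = urllib.parse.urlencode({"uri": page_url})
    with urllib.request.urlopen(VALIDATOR + "?" + query) as response:
        # Assumption: the validator reports Valid / Invalid / Abort here.
        return response.headers.get("X-W3C-Validator-Status", "Unknown")

def declared_charsets(page_url):
    """Return (HTTP charset, charset declared in the markup) for one page.

    A mismatch between the two is the kind of character encoding
    problem a markup validator may not flag.
    """
    with urllib.request.urlopen(page_url) as response:
        http_charset = response.headers.get_content_charset()
        head = response.read(4096).decode("ascii", errors="ignore").lower()
    meta_charset = None
    marker = "charset="
    if marker in head:
        # Crude scrape of a charset= declaration; fine for a sketch.
        tail = head.split(marker, 1)[1].lstrip("\"'")
        meta_charset = tail.split('"')[0].split("'")[0].split(">")[0].strip("; ")
    return http_charset, meta_charset

if __name__ == "__main__":
    for page in PAGES:
        status = validate(page)
        http_cs, meta_cs = declared_charsets(page)
        mismatch = (http_cs and meta_cs and http_cs.lower() != meta_cs.lower())
        report = "%s: validator says %s; HTTP charset=%r, meta charset=%r" % (
            page, status, http_cs, meta_cs)
        if mismatch:
            report += " [MISMATCH - record in the audit trail]"
        print(report)

Running something like this periodically (e.g. from cron) and keeping the
output would provide the kind of documented audit trail described above.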

So when I asked about your QA related to the fixing of this bug, I was
really asking about your QA policies and the lessons learnt.

Assuming you have a similar understanding of QA to mine, and reverse
engineering your implied QA policy and procedures from your comments, it
seems that (excluding the QA I'm sure was carried out when W3C first
installed the software) the policy is that the user community is expected
to spot and report such bugs (as I did).

If that is the case then perhaps the lessons are to be more open about the
possible limitations of validators (and of software in general), as you
did in the QA blog; to provide clearer pointers to reporting channels; and
to document the likelihood of errors (e.g. by listing untested modules).

Note that this discussion (and the related blog postings) appears to have
helped in surfacing these issues:
"I guess we should really submit a PRISM test case. And yes, the Validator
is somewhat buggy as some recent testing confirms. On which more later." -
http://www.crossref.org/CrossTech/2007/02/rss_validator_in_the_spotlight.html

Cheers

Brian


> > Note that a colleague who is a software developer felt that most
> > developers wouldn't have such faith in validators as I do - but if you
> > can't trust the validators, what's the point of validation?
> 
> Maybe faith is better left for ideas, religions and such 
> immaterial things. Validators are useful tools, but still 
> tools, worldly and imperfect. A bug in a dark corner of their 
> code does not change the fact that validators are massively 
> useful for people to adopt technologies - especially when 
> said bug gets squashed within 24 hours of being reported.
> 
> > http://ukwebfocus.wordpress.com/2007/02/07/validators-dont-always-work/
> >
> > Comments welcome.
> 
> Seeing as everyone is commenting on weblogs...
> http://www.w3.org/QA/2007/02/bugs_and_qa.html
> 
> Cheers,
> --
> olivier
