- From: Daniel Dardailler <danield@w3.org>
- Date: Fri, 12 Apr 2002 15:25:13 +0200
- To: Vijay Sikka <vsikka@nirixa.com>
- cc: www-qa@w3.org
> Since the dawn of software development, we all have debated
> the pros and cons of Manual Testing and Automated Testing.
>
> I want to discuss the features and benefits of each - please
> contribute your ideas and I will be happy to post a response
> which consolidates your feedback.
>
> a. What are the situations in which Automated Testing simply
> does not work?

Testing high-level semantics, e.g. checking whether the alt text
associated with a picture of an apple says "apple" and not "banana".

Or in testing the WAI Authoring Tool guidelines, where testers are
asked to check whether the help system addresses accessibility topics
and uses accessible examples.

You may want to look at http://www.w3.org/QA/Taxonomy.html

Semi-automatic testing always helps in any case.

> b. Could complete coverage of functional testing be achieved
> by automated testing?

It depends on the nature of the test. If what you test is the presence
of a function signature in a run-time library, sure, you can achieve
complete coverage. The same goes for checking the validity of markup
against a syntax. But as soon as it gets to higher semantics expressed
in human terms in the specs, a human is obviously needed to check it.

> c. Does manual testing take too much time and is impractical?

These are subjective questions, so no answer (what is too much, what
is impractical?).

> d. What are the best tools for automated testing?

The object of the tests makes all the difference. If you want to test
the HTML validity of a page on the Web, whatever tool you have in mind
right now is probably not going to beat our validator.w3.org :-)

> e. Can we identify a least common denominator process
> in manual testing so ROI can be increased?

I don't think so.
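The alt-text point in (a) and (b) can be sketched in code. This is a
hypothetical Python illustration (not any W3C tool): the structural
check, that every img tag carries an alt attribute, automates cleanly,
while deciding whether the alt text is *right* for the picture must be
handed to a human reviewer.

```python
# Minimal sketch of the automatable/human boundary in alt-text testing.
# AltTextChecker and its field names are illustrative, not a real tool.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Separate machine-decidable failures from human-review items."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []    # machine-decidable: alt attribute absent
        self.needs_review = []   # human-decidable: is the alt text correct?

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            self.missing_alt.append(src)                   # automatable
        else:
            self.needs_review.append((src, attrs["alt"]))  # semantics: human

checker = AltTextChecker()
checker.feed('<img src="apple.png" alt="banana"><img src="pear.png">')
print(checker.missing_alt)   # ['pear.png'] -- caught automatically
print(checker.needs_review)  # [('apple.png', 'banana')] -- only a human
                             # can see that "banana" is the wrong word
```

The split mirrors the semi-automatic approach mentioned above: the tool
narrows the list, the tester judges what is left.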
Received on Friday, 12 April 2002 09:25:17 UTC