- From: Patrick Curran <Patrick.Curran@Sun.COM>
- Date: Mon, 22 Aug 2005 13:31:40 -0700
- To: "Lynne S. Rosenthal" <lsr@email.nist.gov>
- Cc: www-qa-wg@w3.org
This is good feedback - thanks, Lynne. I'll incorporate it into the next version.

Lynne S. Rosenthal wrote:

> The general consensus of the NIST review is that the Test FAQ is a useful
> document, well written, and in good shape. All liked the document.
> Below are several comments and suggestions.
>
> --Lynne
>
> Introduction.
> Suggest switching the first two paragraphs, so we say what the FAQ is about
> prior to saying who it is for. (Don't feel too strongly about this.)
>
> Change the email address for feedback (currently the W3C QA WG).
>
> 1. What kinds of testing is important in the W3C?
> /is/are/
> /In order to promote these goals/To promote these goals/
>
> Conformance Testing bullet: Statement not entirely correct.
> a. Some specs include requirements for usability, performance, etc. –
> so if a specification contains these, then they could be tested by
> conformance testing.
> b. As written, it implies that only mandatory requirements are tested
> for conformance. However, optional functionality can also be tested.
> Need to make it clear that any requirement (mandatory or optional)
> can be subject to conformance testing ("formally required in the spec"
> is misleading). The main point is that conformance testing is bounded
> in scope by what is in the specification.
>
> Interoperability Testing: ("…of a given specification"). Often
> interoperability testing involves different specifications, not just
> one spec.
>
> 2. When should test development start?
> /early/earliest/
>
> Test development isn't actually mentioned until the third paragraph.
> Suggest adding an introductory sentence such as, "Test development
> starts with test planning."
>
> Rather than say that the test development process should start before the
> spec is frozen, we should say that test development should start as soon
> as possible, so that it can be part of a feedback process back into the
> spec, since it helps to identify…
> The reason for elaborating on "frozen" is that frozen means different
> things to different people, and the idea is to start as early as possible.
>
> 3. Who will develop the tests?
> /appeal for contributions/appeal for contributions and volunteers./
>
> This may be misnamed, since much of the discussion is about how to
> manage contributions. Perhaps split this into two separate questions,
> or a 3a and 3b since they are related: 3a is "Who will develop the
> tests" and contains the first paragraph; 3b, "How to manage test
> contributions", is all the rest.
>
> Don't know if this goes here (or if we want to include it):
> Consider using automated test generation tools to create the tests.
> This can be related to coverage – being able to generate more tests.
> Also, and very important, is being able to be responsive and flexible –
> that is, being able to generate tests when the spec continues to
> change (while it's under development).
>
> Last paragraph, metadata to be supplied… (including a description of
> the purpose…). Suggest we update this to reflect the test data model
> elements – i.e., identifier of the test case, purpose of the test
> case, reference (pointer) to the portion of the spec that is tested,
> and expected results or indicator of success. The reason for
> elaborating on expected results is that it may not be possible to
> identify a specific result; it may be more applicable to indicate what
> is success and what is failure.
>
> 7. What makes a good test?
> Last bullet: Correct
> Move it and make it the first bullet.
> What is meant by "correct"? Can we elaborate? Does this mean that the
> test correctly tests what it says it does, or that it tests the correct
> behavior with respect to some feature? What about tests that are
> purposely wrong – i.e., to trigger error conditions?
>
> 8. How many tests are enough?
> Missing word in "Depth measurements are more subjective, since they
> require that you to estimate"
>
> Test coverage reports can also indicate to test developers which areas
> of the spec require tests.
>
> 11. How should I package and publish my tests?
> This mentions "test harness" for the first time. We don't explain
> what a test harness is or what it could be (and I think people may
> have different definitions of it). We should either add another
> question, "What is a Test Harness?", or explain it here.
> It is used again in Questions 12 and 13.
>
> 12. What about documentation?
> /making it easily and immediately/making the documentation easily and
> immediately/
>
> 15. How should I handle bugs in my test suite?
> Actually, you may purposely put bugs into the test suite – e.g., for
> error testing. Need to make sure this is not what you mean.
>
> 17. Should we implement a branding or certification program?
> /fully fledged/complete/
>
> "Legally risky": Don't think we should be concerned with this – it is
> the lawyers who should judge what is risky. Suggest we say that
> branding and certification are complex decisions involving legal and
> business considerations.
>
> Misc:
> Examples from Schema
> The Schema test suite process document is at:
> http://www.w3.org/XML/2004/xml-schema-test-suite/XMLSchemaTS-Process.html
> Section 4, Procedural Issues, might be a useful reference for FAQs 3,
> 6, 9, 11, 12, 15 and 16, because it addresses who will develop tests,
> provides schemas for test submission and results reporting, discusses
> issues associated with the publication of test results (along with a
> mechanism for the testing entity to stipulate publication scope inside
> the test results themselves), describes a dispute resolution procedure
> for potentially buggy tests, and so on.
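On the test data model elements mentioned above (identifier, purpose, spec reference, expected results or indicator of success), here is a minimal sketch of what a per-test-case metadata record might look like. The field names and the Python form are illustrative assumptions on my part, not our agreed data model:

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class TestCaseMetadata:
        """Illustrative per-test-case metadata record covering the elements
        suggested in the review: identifier, purpose, reference to the
        portion of the spec being tested, and either a concrete expected
        result or a description of what counts as success/failure."""
        identifier: str                         # unique id of the test case
        purpose: str                            # what the test is intended to check
        spec_reference: str                     # pointer into the spec (section or anchor)
        expected_result: Optional[str] = None   # exact expected output, when one exists
        success_criteria: Optional[str] = None  # otherwise, what constitutes pass/fail


    # Hypothetical example record:
    example = TestCaseMetadata(
        identifier="conf-attr-007",
        purpose="Optional attribute is accepted when present",
        spec_reference="Section 4.2, 'Attribute declarations'",
        success_criteria="Document is accepted and the attribute value is reported",
    )

Keeping expected_result and success_criteria as separate, optional fields reflects the point above: when a specific result can't be pinned down, the record can instead state what counts as success or failure.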
Received on Monday, 22 August 2005 20:31:12 UTC