- From: Dimitris Dimitriadis <dimitris.dimitriadis@improve.se>
- Date: Mon, 9 Apr 2001 13:17:20 +0200
- To: www-dom-ts@w3.org
Thanks very much for your notes, Markus, and welcome to the mailing list (you too, Jason). I've taken the liberty of dividing your email into more readable sections and posting them as separate emails to the list. I've included some comments of my own and look forward to seeing the discussion start.

/Dimitris

-----Original message-----
From: Markus Mielke [mailto:mmielke@microsoft.com]
Sent: 8 April 2001 22:42
To: www-dom-ts@w3.org
Subject: Ideas collected during the W3C QA Workshop

We collected a number of ideas and statements during the W3C Quality Assurance Seminar, which I would like to present to the mailing list for further discussion. I wrote up the first draft from memory and collected comments from all the other participants. Right now everything is collected in one big e-mail, but if people are interested in particular aspects, please copy the respective part and open a separate mailing thread. Looking forward to feedback.

-- Markus

_______________________________________

Summary of ideas collected by:

Mary Brady [mb], NIST
Jason Brittsan [jb], Microsoft
Dimitris Dimitriadis [DD], Improve
Markus Mielke [mm], Microsoft

Technical details:

* To decrease time spent in test suite development:

1. Provide a library of individual test cases contributed by members. These test cases may be in a specific language binding to speed up the process. Ideally, these test cases are already written and can easily be submitted. (OK; DD, MB.)

2. Develop a generic framework (using XML and transformation methods) which allows for multiple language bindings and automated test generation. (Being developed; DD.)

a. NIST will provide an updated look at this language (2 weeks).

* Modularization of test cases. The problem is that the DOM spec explicitly calls out exceptions to allow multiple implementation systems to be compliant.
For example, in DOM Level 1 there are notes for systems that do not support the concept of reporting exceptions, and for implementations that are HTML-only or XML-only. The best way to incorporate these exceptions in the spec is by having a modularized test suite. [Categorization according to the specification, being looked into; DD.]

Yes, but maybe we should air some of the concerns on the dom-ts list and try to come to agreement on the categorization. [mb]

Suggestion for DOM Level 1:

o Core: HTML-Only With Exceptions
o Core: HTML-Only Without Exceptions
o Core: XML With Exceptions
o Core: XML Without Exceptions
o HTML (exceptions are not used in HTML)

DOM Level 2: ???
DOM Level 3: ???

Would it be more useful to try to provide configurable exceptions - maybe a set of entity defs that captures the exceptions thrown by a particular vendor, which are then used in the tests - or not to check for exceptions at all in cases where the method used is optional? Doesn't the spec indicate that an exception must be thrown? [mb]

The entity-def approach could work, but it would require the XML file to be edited/regenerated every time a script error message changes. (I don't know how often this might happen.) Also, the spec states: "Some languages and object systems do not support the concept of exceptions. For such systems, error conditions may be indicated using native error reporting mechanisms." So the spec says that an error must be reported, but doesn't mandate that the error must be an exception. [jb]

* Besides the test development currently in progress, it is useful to have a test spec to give a quick overview of the areas covered, allow the participants to verify the reasoning behind the grouping of tests, and expose possible holes in the coverage. (Definitely; action item on me; DD.)

Yes - I agree - I'll get a complete list of our semantic requirements together for you - you may be able to start from there.
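[Editorial illustration of the "With Exceptions" / "Without Exceptions" split discussed above: a minimal sketch in Python, using the standard library's xml.dom.minidom as a stand-in language binding. The test case itself is invented for illustration; the actual DOM TS format and bindings were still under discussion at this point.]

```python
# Sketch of an exception-variant test, using xml.dom.minidom as a
# stand-in binding (illustrative only; not the actual DOM TS format).
import xml.dom
from xml.dom.minidom import parseString

doc = parseString("<staff><employee/></staff>")
root = doc.documentElement

# DOM Level 1 says appendChild must raise HIERARCHY_REQUEST_ERR when the
# new child's type is not allowed at that position; a Document can never
# be the child of an Element.
try:
    root.appendChild(doc)
    raised = False
except xml.dom.HierarchyRequestErr:
    raised = True

# A "With Exceptions" variant of the test asserts the exception was
# raised; a "Without Exceptions" variant would instead check whatever
# native error-reporting mechanism the binding provides.
print(raised)
```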
[mb]

In addition to the semantic requirements, I think that a list of test cases is necessary. Also, a test matrix would be useful. [jb]

* We need to guarantee platform independence of our tests. Tests should not rely on the tree structure unless it is explicitly specified by the spec. (I agree with the first sentence; not quite sure what you mean by the second; DD.)

Are you pointing to the problems that are exposed as a result of whitespace and entity expansion issues? If so, do you think this could be handled by toggling expected results based on whether an implementation has these features turned on or off, or is it more difficult? [mb]

I think the issue being addressed here is that tests shouldn't expect elements in any particular order. For instance, a test should not expect a list of attributes on an element in any particular order; it should only expect that the attributes are present. [jb]

Structural details:

* To be successful in the future, we need a solid foundation for our testing structure.

* We need a database for issue tracking. E-mail (even archived) is not sufficient. (W3C CVS; DD)

The sooner we get this configured for our use and the proper folks have access to it, the better. [mb]

CVS is strictly a source management system. There needs to be a web-based issue (bug) tracking system that is available to all members of the dom-ts list. There are defect tracking tools available at http://www.download.com, but I can't make any recommendations. [jb]

* Test case submissions should be entered into a database to ensure all submissions are tracked, backed up, and easily accessible. This process should also incorporate a source control solution. (W3C tracking system; DD)

I'm not familiar with the W3C tracking system - can I get more info? [mb]

I believe this can be done with a combination of a source control utility (CVS) and a simple database for entering the date/time, submitter, case title, etc.
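[Editorial illustration of the simple submission-tracking database described above, sketched with SQLite. The schema, field names, and sample row are all invented; the actual tooling was still undecided.]

```python
# Hypothetical sketch of a test-case submission database: date/time,
# submitter, title, and a pointer into source control. Schema and
# sample data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE submission (
        id        INTEGER PRIMARY KEY,
        submitted TEXT NOT NULL,   -- date/time of submission
        submitter TEXT NOT NULL,   -- who sent the test case
        title     TEXT NOT NULL,   -- test case title
        cvs_path  TEXT             -- where it lives in source control
    )
""")
conn.execute(
    "INSERT INTO submission (submitted, submitter, title, cvs_path) "
    "VALUES (datetime('now'), 'mb', 'nodeName of documentElement', "
    "'tests/level1/core/nodeName01.xml')")
row = conn.execute("SELECT submitter, title FROM submission").fetchone()
print(row)
```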
[jb]

* The desired XML test case description format should be published. It would also be helpful to provide a template and/or sample test cases. (Allow two weeks; DD)

I agree, but there are some issues that we should probably discuss on the e-mail list - when we are fairly sure that what we have is sufficient, let's make it available officially and have one place that we can all go to get the latest version - it may change slightly as we go along and uncover additional needs. [mb]

Agreed. One centralized location and an official release are what we need. [jb]

Open issues:

* The idea is that the test suite is platform independent. How do we provide tests for definitions like the attribute collection? The number of returned attributes depends on the DTD of the given platform. Is there a min-bar, such as whether the attribute collection exists or not, to validate these kinds of tests? (DD to W3C WG)

Can you forward a concrete example of this in the form of a test? [mb]

I think this issue goes along with the last bullet under technical details. Attaching a test case for the attributes collection: in IE, this case will alert approximately 80 attributes, because IE assigns all (or nearly all) attributes a default value of an empty string. Netscape 6 assigns no default attributes. A min-bar would dictate a minimum set of attributes that should be defined (either by default or by declaration). [jb]
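[Editorial illustration of the min-bar idea above: a presence check that an implementation reporting extra defaulted attributes (as IE does) still passes, and that also honors the earlier point that tests should not depend on attribute order. A minimal sketch in Python using xml.dom.minidom as a stand-in binding; element and attribute names are invented.]

```python
# Sketch of a min-bar attribute test, using xml.dom.minidom as a
# stand-in binding. Element and attribute names are invented.
from xml.dom.minidom import parseString

doc = parseString('<input type="text" value="hello"/>')
attrs = doc.documentElement.attributes

# Collect whatever the implementation reports, in whatever order.
found = {attrs.item(i).name for i in range(attrs.length)}

# Min-bar: the explicitly specified attributes must be present. An
# implementation that also reports DTD-defaulted attributes still
# passes, because extras and ordering are not checked.
min_bar = {"type", "value"}
print(min_bar.issubset(found))
```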
Received on Monday, 9 April 2001 08:39:32 UTC