- From: Jason Brittsan <jasonbri@microsoft.com>
- Date: Wed, 21 Nov 2001 12:54:26 -0800
- To: "Dimitris Dimitriadis" <dimitris@ontologicon.com>, <www-dom-ts@w3.org>
With this approach, a developer testing their implementation will be required to run every test every time. Allowing developers to test small portions of their implementation is a much more efficient use of time.

-----Original Message-----
From: Dimitris Dimitriadis [mailto:dimitris@ontologicon.com]
Sent: Wednesday, November 21, 2001 10:24 AM
To: www-dom-ts@w3.org
Subject: Re: ECMA harness

I took this up with the DOM WG on this week's telephone conference. The proposed route is that the Test Suite stays intact, which means running all the tests, but that the reporting layer is built to report more detailed results: for example, that the implementation tested passed these tests but not those, the latter being tests that presuppose exceptions (or something else).

This way, we could have a keyword-driven run of the tests: even though all the tests are run, the reporting layer sees to it that the report clearly states which (virtual) subset of tests has been run satisfactorily and which has not.

I think this is a good compromise between the current layout of the DOM TS and the need to be able to de-select tests that presuppose particular features of the DOM implementation. It would achieve the same thing as modularizing the DOM TS at a low level, while only requiring effort to produce a better reporting layer.

Any comments on this?

/Dimitris
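A rough sketch of such keyword-driven reporting, in ECMAScript: every test result carries keyword metadata naming the features the test presupposes, the whole suite still runs, and the report is carved into virtual subsets afterwards. All names here are invented for illustration; this is not DOM TS code.

```js
// Results accumulated by the harness; each entry carries keyword metadata.
var results = [];

// Called after each test; "keywords" names the features the test
// presupposes, e.g. "exceptions" for tests that expect a DOMException.
function recordResult(name, keywords, passed) {
  results.push({ name: name, keywords: keywords, passed: passed });
}

// True if the result was tagged with the given keyword.
function hasKeyword(result, keyword) {
  for (var i = 0; i < result.keywords.length; i++) {
    if (result.keywords[i] == keyword) {
      return true;
    }
  }
  return false;
}

// Summarize one virtual subset: the suite ran in full, but the report
// states how the tests tagged with this keyword fared.
function reportSubset(keyword) {
  var passed = 0;
  var failed = 0;
  for (var i = 0; i < results.length; i++) {
    if (hasKeyword(results[i], keyword)) {
      if (results[i].passed) {
        passed++;
      } else {
        failed++;
      }
    }
  }
  return keyword + ": " + passed + " passed, " + failed + " failed";
}
```

An implementation that does not support exceptions could then be reported as passing the "Core" subset while failing the "exceptions" subset, without any tests having been de-selected.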
On Saturday, November 17, 2001, at 07:08 PM, Dimitris Dimitriadis wrote:

> Hi Jason
>
> Thanks for the clarification. I think the general outline of your
> submitted example is quite good; please help those producing the
> particular framework.
>
> However, I do have one concern: the DOM TS aims at testing
> implementations' support for the DOM specification, and thus also
> exceptions. I don't know whether leaving them out of the TS is
> compatible with its aims; I will, however, investigate this with the
> DOM WG and get back as soon as possible.
>
> /Dimitris
>
> On Friday, November 16, 2001, at 09:26 PM, Jason Brittsan wrote:
>
>> Hi Dimitris,
>>
>> My comment could take on several meanings, depending on the design of
>> the test harness. Originally, I meant that there should be an option
>> to run only the tests that match the capabilities of the client, with
>> reference to HTML-only implementations and implementations that do
>> not support exceptions. This is consistent with discussions that took
>> place early on in the development of this test suite. I'm not
>> proposing any modularization other than that.
>>
>> Depending on the implementation of the harness, we could also provide
>> ways of running only certain portions of the test suite, based on the
>> needs of the user. The current NIST harness provides some basic
>> functionality by using a SELECT control to specify the "DOM Category"
>> and another SELECT control to specify the "DOM Interface." The test
>> case selection process could be improved; I've attached a *sample* of
>> what this could look like. (THIS IS ONLY A MOCK-UP!) In addition,
>> running the suite would be automated, or would at least take less
>> time for a person running the conformance suite.
>>
>> Flexibility in test case selection would lead to more useful
>> reporting. Users would get back only the information they want,
>> instead of sorting through test case areas that don't concern them.
>> Also, the results would be posted on a single HTML page (or XML
>> file?) instead of requiring the tester to visit each interface,
>> record the results, and move on to the next interface.
>>
>> -Jason
>>
>> -----Original Message-----
>> From: Dimitris Dimitriadis [mailto:dimitris@ontologicon.com]
>> Sent: Friday, November 16, 2001 12:23 AM
>> To: Jason Brittsan
>> Cc: Mary Brady; www-dom-ts@w3.org
>> Subject: Re: ECMA harness
>>
>> Hi Jason
>>
>> Please provide a more detailed account of what this would mean:
>> running different parts of the test suite? Having different test
>> suites built from the start, in accordance with existing browser
>> capabilities? Currently we haven't limited the suite in any way other
>> than as defined by the DOM specification, except for entity expansion
>> and whitespace preservation in parsers.
>>
>> To come to an understanding about the harness, then: how should we
>> act in this matter? Should we modularize the test suite in some way?
>> We have discussed this in the past, and given that we want to release
>> the test suite as soon as possible, it seems a good idea to make this
>> explicit very soon.
>>
>> In our previous discussion, though, we decided to use a modularization
>> that stayed as close as possible to the DOM specification. On the
>> other hand, we obviously want to be able to run the test suite in all
>> major browsers. IE can be tested by running the ECMA tests under the
>> JsUnit framework; running ant dom1-core-gen-jsunit builds the
>> appropriate code.
>>
>> /Dimitris
>>
>>
>> On Monday, November 12, 2001, at 02:32 PM, Jason Brittsan wrote:
>>
>>> Hi Mary... my apologies for not responding sooner. Today is my first
>>> day back from vacation.
>>>
>>> Of course, I will be happy to provide any assistance I can with the
>>> test harness.
>>>
>>> The test harness should be able to run tests based on the
>>> capabilities of the client; therefore, we need to support this in
>>> the harness UI.
>>>
>>> I believe that flexibility in reporting is our best strategy.
>>>
>>> -----Original Message-----
>>> From: Mary Brady [mailto:mbrady@nist.gov]
>>> Sent: Tuesday, October 30, 2001 8:39 AM
>>> To: www-dom-ts@w3.org
>>> Subject: ECMA harness
>>>
>>> In building the ECMA harness, I have started with the original
>>> harness that was provided on the NIST web site:
>>>
>>> http://xw2k.sdct.itl.nist.gov/dom/index.html
>>>
>>> This harness uses whatever DOM implementation is running on the
>>> client side, attempts to run the available tests, and reports the
>>> results. Each of the tests expects to have access to common XML load
>>> routines and common assertion routines. I expect that we can use the
>>> same code that is currently being used by the JsUnit harness. The
>>> following needs to be done:
>>>
>>> 1) Integrate current load/assertion routines -- Mary
>>> 2) Validate load routines
>>>    -- IE (Jason)
>>>    -- Mozilla (Do we have a Netscape volunteer?)
>>> 3) Validate DOMException codes
>>>    -- IE (Jason)
>>>    -- Mozilla?
>>> 4) Determine high-level interface -- all
>>>    -- Do we want to run all tests, or be able to pick appropriate
>>>       tests selectively?
>>> 5) Determine reporting mechanism
>>>    -- simply dump returns from tests?
>>>    -- color-code results?
>>>    -- display expected vs. actual?
>>>    -- possibly modify code to accommodate what we want to display.
>>> 6) Access to other testing resources?
>>>    -- test assertions, <subjects>
>>>    -- view source code
>>>    -- view the portion of the spec being tested.
>>>
>>> Anything else?
>>>
>>> --Mary
>>>
>> <Attachment missing>
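As a sketch of the common assertion routines and the "expected vs. actual" reporting Mary mentions (items 1 and 5), the shared code might record each comparison so a single results page can be generated at the end of the run. The names below are illustrative assumptions, not the actual NIST or JsUnit code.

```js
// Shared list of assertion outcomes for the whole run.
var report = [];

// Common assertion routine: records expected vs. actual so the reporting
// layer can display, and possibly color-code, each individual check.
function assertEquals(id, expected, actual) {
  var entry = {
    id: id,
    expected: expected,
    actual: actual,
    passed: (expected == actual)
  };
  report.push(entry);
  return entry.passed;
}

// Dump the whole run as one block of text (or feed it to an HTML/XML
// serializer), rather than visiting each interface separately.
function summarize() {
  var lines = [];
  for (var i = 0; i < report.length; i++) {
    var r = report[i];
    lines.push((r.passed ? "PASS " : "FAIL ") + r.id +
               " (expected: " + r.expected + ", actual: " + r.actual + ")");
  }
  return lines.join("\n");
}
```

A generated test could then call, for instance, assertEquals("nodeName", "#document", doc.nodeName), and the harness would render summarize() once at the end of the run.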
Received on Wednesday, 21 November 2001 15:55:25 UTC