- From: Dominique Hazael-Massieux <dom@w3.org>
- Date: Wed, 17 Jan 2007 16:03:01 +0100
- To: public-mwts@w3.org
Hi,

As promised, here is a summary of my ideas on what this group could develop over the coming months. I think we have several options, some of which can be combined:

* we can look at the existing conformance test suites out there, as I started to describe in my previous message [1], and try to re-package them, maybe contributing to make them more complete and more useful for user agent developers;

* we can focus on making these test suites more easily available and usable by the web community at large, so that we can invite individuals to test the conformance of the user agents they use and generate reports that help web developers know which bugs exist in which browsers; the idea would be to generate reports à la:
http://www.westciv.com/style_master/academy/browser_support/basic_concepts.html
but on a bigger scale (many more browsers), and with a collaborative approach;

* we can try to create a set of "acid tests" for mobile web browsers, à la:
http://www.webstandards.org/action/acid2/
with the idea of testing the integration of several technologies in rather complex arrangements;

* another approach that I think would be interesting to consider is to focus on non-conformance test suites (!); the idea would be to assemble and create test cases that wouldn't focus so much on whether a given browser conforms to a given specification, but instead would identify common browser behaviors for things that are un- or ill-specified, and that web developers need data on.
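To make the collaborative-report idea a bit more concrete, here is a rough sketch of how community-submitted results could be aggregated into a per-browser support matrix. This is only an illustration; all names (user agent strings, test ids, the function itself) are invented for the example:

```python
# Hypothetical sketch: aggregate submitted test results
# (user agent, test id, pass/fail) into a support matrix per browser,
# in the spirit of the westciv-style reports. All names are invented.

from collections import defaultdict

def build_support_matrix(results):
    """results: iterable of (user_agent, test_id, passed) tuples."""
    matrix = defaultdict(dict)  # user_agent -> {test_id: "pass"/"fail"}
    for user_agent, test_id, passed in results:
        matrix[user_agent][test_id] = "pass" if passed else "fail"
    return dict(matrix)

# Example submissions from two (fictional) mobile browsers:
submissions = [
    ("BrowserA/1.0", "xhtml-basic-11", True),
    ("BrowserA/1.0", "encoding-decl", False),
    ("BrowserB/2.0", "xhtml-basic-11", True),
]

matrix = build_support_matrix(submissions)
for agent, tests in sorted(matrix.items()):
    print(agent, tests)
```

A real system would of course also need to record the test suite version and collect submissions over the web, ideally in a mobile-friendly fashion as discussed below.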
For instance, during our work on the Mobile Web Best Practices, there were at least two occasions where we needed a good survey of the behaviors of existing mobile user agents, and came up with a series of simple tests to identify these behaviors:
http://www.w3.org/2005/MWI/BPWG/techs/XhtmlBasic11Support
http://www.w3.org/2005/MWI/BPWG/techs/EncodingDeclarationSupport
(while I have used a pass/fail color scheme in these reports, a "fail" doesn't actually mean that the user agent is buggy per se, only that it didn't exhibit a behavior one might have wished it would)

Generally speaking, should we take that option, I think we would need strong collaboration with web developer communities, so that we could get contributions of test cases for well-known authoring techniques.

There are several other small existing test suites that would fall under that category, and that I think would be a good starting basis for such an effort:
http://www.paxmodept.com/pan/index.xhtml
http://t.wurfl.com/
http://www.cameronmoll.com/mobile/mkp/
(and most likely more of them)

I think each of these approaches has its own merits, and I would certainly be happy to work on any of them, although I confess I have a slight preference for the last one, as it would probably fill a need that no other existing effort has filled so far.

Some of these plans may require rather specific software solutions:

* if we invite the web community to submit test results, we would need a system to log them, probably in a mobile-friendly fashion;

* if we invite test case contributions, we would need a submission system that takes into account the various policies in place at W3C on this topic (e.g.
[2]);

* if we develop test cases or review existing ones, we would probably need some form of test case management system.

Sorry this mail is so long, but I think it summarizes most of my thinking on the topic at this point; I would very much like to get feedback on the various ideas exposed here.

Thanks,

Dom

1. http://lists.w3.org/Archives/Public/public-mwts/2007Jan/0007.html
2. http://www.w3.org/2004/10/27-testcases.html
Received on Wednesday, 17 January 2007 15:03:48 UTC