Re: How to keep test results for separate products?

On Jun 1, 2012, at 8:58 AM, Jeanne Spellman wrote:

> Thanks for being so willing to help.  The AUWG is now focusing on
> writing test cases, so a working test harness is a big help in
> conceptualizing how to design the manual tests and their instructions.
> Having the ability to separate the results by product, however, is
> crucial, and I appreciate that you are willing to help find a solution.
> 
> I'm sorry that this is considered a new twist,

I meant 'new to me', as in I hadn't thought about this aspect so far.

> because it was included
> in the Requirements document during the charter approval process. I
> went looking for the requirement for testing authoring tools today;
> I couldn't find it on the current wiki page, but was able to find it
> in the history starting from 8 April 2011.  The UAWG will also need
> this ability, as they will have to test media players, both standalone
> and embedded.

The framework actually predates the charter by a few years… I'm happy to add to it to meet these needs.
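
To sketch the reporting side of that format-based idea (illustrative pseudo-code only; the field names and sample values below are made up for the example and are not the actual harness code or schema), the change basically amounts to tallying pass/fail results keyed by product rather than by user agent:

from collections import defaultdict

def summarize(results, group_by="product"):
    # Tally pass/fail counts per product (or per user agent).
    summary = defaultdict(lambda: {"pass": 0, "fail": 0})
    for r in results:
        summary[r[group_by]][r["status"]] += 1
    return dict(summary)

results = [
    {"test": "example-test-1", "product": "WordPress",   "useragent": "Firefox 12", "status": "pass"},
    {"test": "example-test-1", "product": "Dreamweaver", "useragent": "Firefox 12", "status": "fail"},
]

print(summarize(results))               # broken out by authoring tool
print(summarize(results, "useragent"))  # current behavior: by user agent

The per-suite switch would just control which key the reports group on.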

> 
> Regards,
> 
> jeanne
> 
> On 5/31/2012 11:34 AM, Linss, Peter wrote:
>> On May 30, 2012, at 1:14 PM, Jeanne Spellman wrote:
>> 
>>> I have been running some sample tests for ATAG and quickly realized that I have no way to save the results with the name of the authoring tool being tested.  It looks to me as if the way I will have to use the test harness is to keep a separate spec-name/data for each product being tested and have the manifest file return to the same test suite location.
>>> 
>>> Since I haven't worked with the submitted/approved directory structure yet, I don't know the impact of what you are proposing.
>>> 
>>> I would like to ask for your recommendations on the best way to manage testing different authoring tools, both web-based and non-web-based (e.g. WordPress, Blogger, Dreamweaver, InDesign, Word, Drupal).  We need the harness to present the instructions to the tester, record the results, and produce pass/fail reports by authoring tool.
>>> 
>>> 
>> 
>> Hi Jeanne,
>> 
>> This is an interesting twist; the framework was designed for testing user agents, not authoring tools. It presumes a static test and multiple viewers.
>> 
>> One way to handle this is to use the framework's multiple-format support, making one 'format' for each authoring tool. The framework would then need a per-suite switch to break out results by format instead of by user agent.
>> 
>> Let me give this some more thought on how best to handle it…
>> 
>> Peter
> 
> -- 
> _______________________________
> Jeanne Spellman
> W3C Web Accessibility Initiative
> jeanne@w3.org

Received on Friday, 1 June 2012 16:15:38 UTC