Re: Meeting note for Teleconference on 28 May 2014

Hi Kamyar,

Thank you for taking the minutes.

We keep the meeting notes on the mailing list. Only the action items go 
into the wiki. I have updated the page:

https://www.w3.org/community/auto-wcag/wiki/index.php?title=Action_items&oldid=64

Kind regards
Annika

On 30.05.2014 at 14:15, kamyar wrote:
> @Wilco: Would you please put this note somewhere on auto-wcag wiki
> pages? I am not sure where is the best place for it.
>
>
> Automated WCAG Monitoring Community Group
> Teleconference 2014-05-28, 14:00 - 15:00 h (CET)
>
> Attendees:
> Annika Nietzio, Emmanuelle Gutiérrez, John Hicks, Kamyar Rasta, Wilco
> Fiers
>
> ITEM 1: Welcome and introduction
> Short presentation of each participant:
>
> John Hicks has been an accessibility specialist for the last ten
> years. He spent five years developing software at the Urbilog company,
> France [1]. He is from Britain, holds a PhD from Edinburgh, and is a
> cognitive scientist by training. He now works as a consultant and no
> longer develops software; he works with automatic checkers. "I am not
> here from a commercial point of view; I am more interested in
> contributing to what could come from the WCAG standards." Personal
> interest: getting into details such as the four levels of testing in
> HTML and the DOM in terms of unit tests.
>
> Wilco Fiers (Accessibility Foundation, NL) is a front-end and back-end
> developer with several years of experience as an accessibility evaluator.
>
> Annika Nietzio (Forschungsinstitut Technologie und Behinderung, DE) is a
> web accessibility expert with focus on evaluation methodologies,
> automated checking of web accessibility, and accessibility benchmarking.
>
> Kamyar Rasta (Tingtun, NO) is a back-end developer for accessibility
> conformance testing.
>
> Emmanuelle Gutiérrez y Restrepo (Sidar Foundation, ES) has
> long-standing experience in web accessibility. Currently, she is working on a new
> monitoring tool for Portugal, Spain, and South America.
>
> ITEM 2: The goals of the group and motivations
> Wilco: introduction of group.
>
> John: Working at Urbilog, we developed two tools:
> 1. Ocawa [2]: an older automated testing tool and assistant.
> Everything that can be automated is collected into a set of questions
> and answers. It is also used for auditing in companies. The problem
> was that it was written in PHP, and once you tried a website with even
> fifteen pages, there was a significant amount of data and it took a
> long time.
>
> 2. Expertfixer [3]: a newer, more standard tool.
>
>
> ITEM 3: work on introduction and template [4]
> Wilco: We need to be more precise about the wording we are going to
> use. I like "test description". We try to create human-readable text
> that can be easily implemented.
>
> Annika: "Test description" is too general. We need something that
> provides the additional details that should be taken into account when
> you implement something.
>
> Action: We should decide on wording and vocabulary later.
>
> Discussion of Kamyar's suggestion [4] about having unit tests for
> each test:
>
> Kamyar: It would be useful for developers to have sample HTML code
> (similar to the WCAG techniques) and to test their implementation
> against this HTML.
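In the spirit of Kamyar's suggestion, a unit test pairing sample HTML with a check might look like the sketch below (Python; the check `has_alt_text` and its string-based interface are invented here for illustration and are not an auto-wcag API):

```python
import re
import unittest

def has_alt_text(img_tag):
    """Hypothetical check: does an <img> tag carry a non-empty alt
    attribute? Toy string-based implementation, for illustration only."""
    match = re.search(r'alt="([^"]*)"', img_tag)
    return bool(match and match.group(1).strip())

class TestHasAltText(unittest.TestCase):
    # Sample HTML snippets in the spirit of the WCAG technique examples.
    def test_image_with_alt_passes(self):
        self.assertTrue(has_alt_text('<img src="logo.png" alt="Company logo">'))

    def test_image_without_alt_fails(self):
        self.assertFalse(has_alt_text('<img src="logo.png">'))

    def test_image_with_empty_alt_fails(self):
        self.assertFalse(has_alt_text('<img src="logo.png" alt="">'))
```

Run with `python -m unittest <module>`; each sample snippet plays the role of the pass/fail example code in a WCAG technique.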
>
> Annika: Try to refer to content that is already available (the example
> code in the WCAG techniques); if we can't find any, we can provide it
> ourselves.
>
> Wilco: It is good to work out what level of detail to describe in our
> tests. We could specify our tests in a more formal logic or grammar,
> or even write the test descriptions in pseudo code.
>
> John: We had an implementation of the automatable portion of WCAG in a
> sort of expert system. A web page is a set of declarations of facts,
> and testing it against the system meant checking all possible facts
> derived from WCAG. The idea of unit tests is to find the cases where
> there is a failure and why it failed.
>
> Annika: On the one hand we raised the question: do we need unit tests,
> and how complete do they have to be? Another question is whether they
> need to be machine readable: should we use XPath or something similar,
> or is it sufficient to have human-readable selectors? There is also
> the question of whether we need pseudo code to describe the tests or
> whether human language is sufficient. We have to decide how technical
> the tests should be. WCAG is not as easy to implement as one might
> expect.
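Annika's selector question can be made concrete with a small sketch (Python; the sample markup is invented). The human-readable selector "all images without an alt attribute" corresponds to the XPath expression //img[not(@alt)], which a developer might translate into code like:

```python
from xml.etree import ElementTree as ET

# Invented sample document; in full XPath the selector would be
# //img[not(@alt)].
page = ET.fromstring(
    '<body>'
    '<img src="chart.png" alt="Chart of results"/>'
    '<img src="spacer.gif"/>'
    '</body>'
)

# Human-readable selector: "all img elements without an alt attribute".
missing_alt = [img.get("src") for img in page.iter("img")
               if img.get("alt") is None]
print(missing_alt)  # -> ['spacer.gif']
```

Either form pins down the same set of elements; the open question in the minutes is which form the test descriptions should standardise on.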
>
> Wilco: I am more interested in a human-readable format, relying on
> developers to translate that into machine code.
>
>
> Discussions on SC141-a [5]:
>
>
> Annika: Because this is an automatable test, the steps are almost
> pseudo code, which is sufficiently human readable too.
>
> Annika: We are using MediaWiki, and we can provide a MediaWiki
> template that generates a nice test description.
>
> Annika: A new assumption could be added to SC141-a. My assumption here
> would be: “if there is no CSS at all, we let the user agent take care
> of rendering the page in a useful way, and that shouldn't cause any
> problems”.
>
> Wilco: The key issue here is the computed style. Even if you don't
> apply the CSS, there is still a computed style.
>
> Wilco (reporting his discussions with Shadi about the results): It
> would be good if we could more easily transform the results into the
> Evaluation and Report Language (EARL) by W3C [6]. It was mostly
> designed for WCAG evaluation.
> Annika: EARL has not been taken into consideration in the (result)
> tables in SC141-a. We have not compared the tables with EARL in
> detail, but many of these results are available in EARL. We can check
> if this could be done.
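For context, EARL expresses results as RDF assertions; a minimal sketch in Turtle (all URIs and the chosen outcome are invented for illustration) could look like:

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .

# Hypothetical assertion: an automated checker reports that the tested
# page fails the test for SC 1.4.1 (identifiers invented here).
<#assertion> a earl:Assertion ;
    earl:assertedBy <#checker-tool> ;
    earl:subject <http://example.org/page.html> ;
    earl:test <#SC141-a> ;
    earl:result [ a earl:TestResult ;
                  earl:outcome earl:failed ] .
```

Mapping the SC141-a result tables onto the EARL outcome values (passed, failed, cantTell, inapplicable, untested) would be a natural starting point for the comparison Annika mentions.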
>
> Wilco: Next week our focus should be to redefine the formal way to
> describe all these things.
>
> ACTIONS:
>
> Annika: Look at the results tables and find a way to match them more
> closely to EARL; check how closely they already match.
>
> Kamyar: Try to find a more concise way to present the information for
> the test SC1.4.1.
>
>
> John: Find a closer and more appropriate way to describe things in a
> more formal language, particularly the selectors. Make the selectors
> match more closely what we have done in the “test steps”.
>
> Wilco: Try to take inspiration from the HTML5 specification and its
> formal grammar.
>
> Next week: the comments on the discussion page for the test template
> will be discussed.
>
> DECISION ON WEEKLY MEETING:
> Thursdays 15:45 – 16:45
>
>
> SUMMARY OF DECISIONS:
> After a brief look at how to deal with human language versus
> machine-readable language, we concluded that we will move towards a
> more machine-readable language, in the way we describe the statements
> in the test steps. We are going to implement and develop that further.
>
> REFERENCES:
> [1] http://www.urbilog.fr
> [2] http://www.ocawa.com/fr/Accueil.htm
> [3] http://www.urbilog.fr/recherche-developpement/expertfixer
> [4]
> https://www.w3.org/community/auto-wcag/wiki/Introduction_to_auto-wcag_test_design
> [5] https://www.w3.org/community/auto-wcag/wiki/SC141-a
> [6] http://www.w3.org/WAI/intro/earl.php
>
>

Received on Thursday, 5 June 2014 08:13:37 UTC