Minutes: User Agent 2 Oct 2014 telecon

http://www.w3.org/2014/10/02-ua-minutes.html
- DRAFT - User Agent Accessibility Guidelines Working Group Teleconference 02
Oct 2014

See also: IRC log http://www.w3.org/2014/10/02-ua-irc
Attendees
Present: Greg_Lowney, Jeanne, Jim_Allan, Kim_Patch, Jan, [Microsoft]
Regrets:
Chair: Jim Allan and Kelly Ford
Scribe: allanj
Contents

   - Topics <http://www.w3.org/2014/10/02-ua-minutes.html#agenda>
      1. Writing tests <http://www.w3.org/2014/10/02-ua-minutes.html#item01>
      2. Sample test 2.4.1
      <http://www.w3.org/2014/10/02-ua-minutes.html#item02>
      3. Charter timeline
      <http://www.w3.org/2014/10/02-ua-minutes.html#item03>
   - Summary of Action Items
   <http://www.w3.org/2014/10/02-ua-minutes.html#ActionSummary>

------------------------------

<trackbot> Date: 02 October 2014

<scribe> scribe: allanj

open item q

open item 1

registration paid for invited experts

<Jan> I'll be here
http://www.ramada.com/hotels/california/sunnyvale/ramada-silicon-valley/hotel-overview

still working on travel.

all, discussion of travel and lodging.

Writing tests

wiki page for testing, greg to create magic script for numbers and text

<KimPatch>
https://www.w3.org/WAI/GL/mobile-a11y-tf/wiki/Technique_Development_Assignments

mobile task force, have a wiki list, pick task, due monday, in survey, then
survey reviewed at meeting

if we get tests done quickly, and hopefully few comments, then do LC and CR
simultaneously

INDIVIDUAL PAGE TEMPLATE

The individual pages are linked and named by the SC#

I will set up a template that people can copy to create the individual
page. Note that there can be multiple tests for each SC.

For each test:

Test Assertion

Procedure

Expected Result

assertion, specific thing you are testing in that test.

1 sentence

different test for browser or audio player, separate assertion for each.

need a test for every SC

<jeanne> http://www.w3.org/WAI/AU/CR20/TestPrep20131206.html

<Jan> http://www.w3.org/WAI/AU/2013/ATAG2-10April2012PublicWD-Tests

<Jan> http://w3c.github.io/UAAG/UAAG20/#gl-obs-env-conventions

js: don't overthink the tests. keep it simple and generic

<jeanne> http://www.w3.org/TR/2014/NOTE-WCAG20-TECHS-20140311/G4

<Jan> brb

we (UAWG) will have to perform all of the tests, and match to
implementations

kp: the examples will help write the test

Sample test 2.4.1

2.4.1 Text Search: The user can perform a search within rendered content,
including rendered text alternatives and rendered generated content, for
any sequence of printing characters from the document character set. (Level
A)

ja: do we need an html page?

gl: what about all the other formats? can't do them all

jr: atag has an accessible page, an inaccessible page to test against

gl: are there sample test pages at least in html to test against

js: before/after page from EO

jr: really difficult to use, separate from the chrome of the EO pages

Assertion: The user can perform a search within rendered content, including
rendered text alternatives and rendered generated content, for any sequence
of printing characters from the document character set.

1. load content with text, text alternatives, and generated content

<Jan> Test 0001 Assertion: All editing-views enable text search where any
text content that is editable by the editing-view is searchable, results
can be made visible to authors and given focus, authors are informed when
no results are found and search can be made forwards or backwards.

<Jan> If the authoring tool does not allow the editing of text content
(e.g. because it is a graphics editor), then select SKIP.

<Jan> For each editing view that enables the editing of text content:

<Jan> Load the accessible test content file (any level), which contains
non-text content with text alternatives, in the editing view.

<Jan> Choose a word that is repeated in the text and then determine whether
a search function exists for the editing view that can find all of the
instances of the word. In web-based tools, the search function may be part
of the user agent. If this is not possible, then select FAIL.

<Jan> When a match is found, check whether the match can be presented and
given focus in the editing view. If this is not done, then select FAIL

<Jan> Determine whether search is possible forwards and backwards. If it is
not, then select FAIL

<Jan> Choose a search term that is not in the content (e.g. a nonsense
word) and search for it. If no indication is made of the failure of the
search, then select FAIL

<Jan> If the editing view enables editing of text alternatives for non-text
content, choose a search term from within the text alternative. If the term
cannot be found, then select FAIL.

gl: what about searching for punctuation, etc.

<Jan> Go to the next editing view that enables the editing of text content
(if any).

<Jan> Select PASS (all of the editing views must have passed)

jr: create a page with a block of text with all kinds of characters, an
image with alt of the same block of characters, and a paragraph "this
paragraph has generated text preceding it" with the block of characters
generated from css
... create one big test file.
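The single test file jr describes could be generated along these lines. This is a speculative sketch, not an agreed artifact of the meeting: the file name, function name, and sample "known string" are illustrative. It puts the same string in rendered text, in an image alt attribute, and in CSS-generated content, so one page exercises all three search targets.

```python
# Speculative sketch of jr's proposed test page: one "known string"
# appearing as rendered text, as a text alternative (alt), and as
# CSS-generated content. The string includes punctuation and a
# non-ASCII character to cover gl's points about other character types.
KNOWN_STRING = "searchable-sample é ?!%"


def build_test_page(known: str) -> str:
    """Return an HTML page containing `known` in all three content kinds."""
    return f"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>UAAG 2.4.1 Text Search test content</title>
<style>
  .generated::before {{ content: "{known} "; }}
</style>
</head>
<body>
<p>Rendered text: {known}</p>
<img src="placeholder.png" alt="{known}">
<p class="generated">this paragraph has generated text preceding it</p>
</body>
</html>"""


page = build_test_page(KNOWN_STRING)
```

A tester would save the returned markup as the test page and then run the search procedure against it in each user agent.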

1. load page (with rendered text, alternative text, generated text), 2.
search for a known string, 3. observe/record results

note: need to create a page of content

2a. in rendered text, ( if pass go to next /fail), 2b. in alternative text
( if pass go to next /fail), 2c. in generated text ( if pass go to next /
fail), 2d. search for foreign language character ( if pass go to next /
fail)

1. load page (with rendered text, alternative text, generated text, foreign
language characters),

2. search for a known string in rendered text, ( if pass go to next else
fail),

3. search for a known string in alternative text ( if pass go to next else
fail),

4. search for a known string in generated text ( if pass go to next else
fail),

5. search for foreign language character ( if pass go to next else fail),

6. mark PASS

remove 6

expected results

2-5 are true

"known string" = string of text on the test page to have search success

<Jan> brb

<Jan> back

js: how exact do we need to be in defining tests (concern about cross
technology applications)

<jeanne> 2-5 either PASS or are N/A

in PDF not sure there is generated text. how to make sure? do we need to?

gl: at the beginning state assumptions that some technologies may not have
all features (e.g. PDF does not have generated content)

js: there will be a comment area for each test, to explain NA
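The expected-results rule just discussed (steps 2-5 must each pass or be N/A for the technology, with a comment explaining any N/A) can be sketched as a small verdict function. This is a hedged illustration only; the step names and result values are made up for the example, not taken from a UAWG template.

```python
# Speculative sketch of the verdict rule: overall PASS when every step
# in 2-5 is 'pass' or 'n/a' (e.g. no generated content in PDF);
# any 'fail' or missing result makes the overall verdict FAIL.
STEPS = (
    "rendered text",
    "alternative text",
    "generated text",
    "foreign language characters",
)


def overall_verdict(results: dict) -> str:
    """results maps step name -> 'pass' | 'fail' | 'n/a'."""
    for step in STEPS:
        if results.get(step, "fail") not in ("pass", "n/a"):
            return "FAIL"
    return "PASS"
```

For example, a PDF viewer might record generated text as "n/a" with a comment, and still PASS overall if the other three steps pass.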

<jeanne> RESOLUTION: Use the WCAG model of writing tests with Procedure and
Expected Results.


SC # stem: full text

procedure:

expected results:

2.4.1 done....as the Count would say .... ONE ah ah ah...

send tests in by monday, into the survey by tuesday

meeting on thursday - discuss only disagreements. severely limit smithing

tests are NOT NORMATIVE we can change anytime

"delayed smithing gratification"
Charter timeline

how long will writing test take?

<jeanne> about 130 SC = 2 each per week = 4 months

ja: 10 tests a week, could finish testing in 4 months. perhaps...

27 sc

27 gls, 45 sc in gl 1

44 sc in gl 2

14 in gl 3

6 sc in gl 4

6 in gl 5
Summary of Action Items

[End of minutes]
------------------------------
 Minutes formatted by David Booth's scribe.perl
<http://dev.w3.org/cvsweb/%7Echeckout%7E/2002/scribe/scribedoc.htm> version
1.138 (CVS log <http://dev.w3.org/cvsweb/2002/scribe/>)
$Date: 2014-10-02 18:32:17 $
------------------------------


-- 
Jim Allan,
Accessibility Coordinator & Webmaster
Texas School for the Blind and Visually Impaired
1100 W. 45th St., Austin, Texas 78756
voice 512.206.9315    fax: 512.206.9264  http://www.tsbvi.edu/
"We shape our tools and thereafter our tools shape us." McLuhan, 1964

Received on Thursday, 2 October 2014 18:55:28 UTC