W3C home > Mailing lists > Public > w3c-wai-gl@w3.org > April to June 2017

Combining the two.... Re: Do we like this better? - was way to move forward with plain language

From: lisa.seeman <lisa.seeman@zoho.com>
Date: Tue, 23 May 2017 14:11:22 +0300
To: Detlev Fischer <detlev.fischer@testkreis.de>
Cc: <w3c-wai-gl@w3.org>, <public-cognitive-a11y-tf@w3.org>
Message-Id: <15c35011409.121870b66229854.7691608873515638228@zoho.com>
That is why we had the word frequency list before. That was 100 percent testable, and we had free tools already available.

Try 3...  How about....

Provide words, phrases, or abbreviations that are the most common form used to refer to the concept in a public word frequency list for the identified context.




Notes
- The SC is technology agnostic, so the "how" and "what format" etc. should not be discussed until we get to techniques. Although clearly it needs to be accessible. An accessibility conformance statement would say what list was used.


- We have open-source scripts for building word frequency lists (see the comments in the GitHub issue). Scripts for testing words against a word list exist in other places (like the translation industry).


- We are not limiting the size of the word frequency list, so it can be as big as is needed.


- Also note this can be done via added COGA semantics and personalization.


- The public word frequency list and identified context are defined terms; we can improve them if we feel the need, but let us first decide if this is the direction before zooming in on that.
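
As a rough illustration of the note above about scripts, here is a minimal sketch in Python of building a word frequency list and testing which of several synonyms is the most common form. The corpus line and word choices are hypothetical, and the tokenization is deliberately naive; the open-source scripts referenced in the GitHub issue would be more elaborate.

```python
from collections import Counter
import re

def build_frequency_list(text):
    """Build a word frequency list (word -> count) from raw text.
    Simple lowercase tokenization; real scripts may differ."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

def most_common_form(candidates, freq):
    """Given several words referring to one concept, return the one
    that occurs most often in the frequency list."""
    return max(candidates, key=lambda w: freq.get(w, 0))

# Hypothetical one-line corpus for the identified context.
corpus = "start the car start the engine begin the journey start again"
freq = build_frequency_list(corpus)
print(most_common_form(["start", "begin", "commence"], freq))  # -> start
```

A checker for the SC would run the second function over each term an author uses and flag cases where a more common form exists in the list for the same concept.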
     


Perhaps we could also change the scope to critical features, as identified in issue 6.


All the best

Lisa Seeman

LinkedIn, Twitter





---- On Mon, 22 May 2017 17:22:27 +0300 Detlev Fischer <detlev.fischer@testkreis.de> wrote ---- 

lisa.seeman schrieb am 22.05.2017 15:55: 
 
> It looks like we are more comfortable with this direction - but we would need some testing tools before CR. 
> So far as I know the IBM tool is not free, and the Microsoft tool requires a subscription. 
> A way to move forward is to put it in the next version of WCAG 2.1 and reach out to the companies for a free version of the tool. 
 
In my view, any automatic tool checking the commonality of words by applying some generic algorithm will be bound to produce incorrect results in all cases where you have a site covering a specific domain with specific terms (i.e., very often). Synonyms where you can replace one term with another without also introducing a shift of meaning are the exception, not the rule. Then you have the homonym problem (the same term meaning different things in different contexts/domains). A tool that offers a meaningful analysis would have to be capable of inferring the respective domain and its vocabulary and adapting its algorithm accordingly. 
 
Received on Tuesday, 23 May 2017 11:12:00 UTC

This archive was generated by hypermail 2.4.0 : Thursday, 24 March 2022 21:08:13 UTC