use-cases in the explainer

I took an action item to review the explainer and add use cases.



There are two use cases there (see https://w3c.github.io/personalization-semantics/#why_accessibility)

I think they could be improved, so I have made a new draft (see below). However, the changes are mostly editorial, and the current text is probably enough for this version. So if we are not happy with the draft below (beyond editorial issues), I think we could leave it as is for now.





Existing text:



For example, assume an author can make it programmatically known that a button sends an email. Based on user preferences, the button renders with a symbol, term, and/or tooltips that are understandable by this particular user. It could automatically include F1 help that explains the send function in simple terms. It could be identified with a keyboard shortcut that is always used for send. In addition, the button could be identified as important and always rendered, or always rendered very large.

Working examples of how this could be used in practice, with user preferences, are available on the task force wiki: https://github.com/w3c/personalization-semantics/wiki/Implementations-of-Semantics.

Another use-case we would like to see is interoperable symbol set codes for non-verbal people. Products for people who are non-vocal often use symbols to help users communicate. These symbols are in fact an individual's language. Unfortunately, many of these symbols are both subject to copyright and not interoperable. That means end-users can only use one device, and cannot use applications or assistive technologies from a different company. An open set of references for symbol codes for these symbol sets, however, could be interoperable. That means the end user could use an open source symbol set, or buy the symbols and use them across different devices or applications. Symbols could still be proprietary, but they would also be interoperable.



Changed version



For example, assume an author can make it programmatically known that a button sends an email. Based on user preferences, the button renders with a symbol, term, and/or tooltips that are understandable by this particular user. It could automatically include F1 help that explains the send function in simple terms. It could be identified with a keyboard shortcut that is always used for send. In addition, the button could be identified as important and always rendered, or rendered in an emphasized form.
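As a rough sketch of what an adaptation agent might do with such a hint, the script below renders a "send email" control according to stored user preferences. The purpose token, preference keys, symbol, and shortcut are all invented for illustration; they are not values from the draft specification.

```python
# Hypothetical sketch: a user agent or script adapts a button whose
# purpose is programmatically known, based on user preferences.
# All tokens and lookup values below are illustrative assumptions.

USER_PREFS = {
    "show_symbols": True,                # render a symbol alongside the label
    "simplify_text": True,               # use a short, familiar term
    "always_emphasize": ["send-email"],  # purposes this user relies on
}

# Illustrative lookup tables keyed by a machine-readable purpose token.
SYMBOLS = {"send-email": "✉"}
SIMPLE_TERMS = {"send-email": "Send"}
SHORTCUTS = {"send-email": "Ctrl+Enter"}

def adapt_button(purpose, default_label, prefs):
    """Return a rendering plan for a button whose purpose is known."""
    label = (SIMPLE_TERMS.get(purpose, default_label)
             if prefs["simplify_text"] else default_label)
    return {
        "label": label,
        "symbol": SYMBOLS.get(purpose) if prefs["show_symbols"] else None,
        "shortcut": SHORTCUTS.get(purpose),
        "emphasized": purpose in prefs["always_emphasize"],
    }

plan = adapt_button("send-email", "Submit message", USER_PREFS)
print(plan)
# {'label': 'Send', 'symbol': '✉', 'shortcut': 'Ctrl+Enter', 'emphasized': True}
```

The point of the sketch is that the author only supplies the purpose; the rendering (term, symbol, shortcut, emphasis) is derived per user.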







Another use case is symbol users. For people who have a severe speech or physical impairment, the use of symbols to represent words is often their primary means of communication, for both consuming and producing information. Some users communicate through the use of symbols, rather than written text, as part of an Augmentative and Alternative Communication (AAC) system. Symbol users face a wide variety of barriers to accessing web content, but one of the main challenges is a lack of standard interoperability, or of a mechanism for translating how a concept is represented in one symbol set into how it may be represented in another symbol set.

Examples include:

An assisted living home makes adult education courses and life-skills content. For example, they have content on how to make dinner using a microwave, but their residents can read different symbol sets. They need the content to work for all their users. Sometimes the symbols or pictures used are unique to the user, such as a picture of the person's actual phone or cup.

People who know different symbol sets wish to talk to each other.

A government agency is making information sheets about human rights and patient rights. They add symbols for many different users, but they want people who read different symbol sets to be able to read the sheets as well.

A large banking site wants people to be as autonomous as possible when using their services. They have added symbol references to their core services.


It should be noted that the users who depend on symbols the most may also struggle the most with mistranslations. Because they have severe language disabilities, inferring what was meant by an incorrect symbol will not be achievable for many users. This rules out relying on machine learning until it is almost error free.
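One way to picture the interoperability mechanism described above is a shared, open concept code that each proprietary symbol set maps onto, so a concept can be translated between sets without sharing the (possibly copyrighted) artwork itself. The set names and numeric codes in this sketch are made up for illustration:

```python
# Illustrative sketch of interoperable symbol references.
# Each symbol set maps its own symbol ids to a shared concept code;
# translation goes source symbol -> shared code -> target symbol.
# The ids and codes below are invented, not from any real registry.

# set-specific symbol id -> shared concept code
SET_A_TO_CONCEPT = {"a-101": 14667, "a-102": 17717}  # e.g. "food", "drink"
SET_B_TO_CONCEPT = {"b-7": 14667, "b-9": 17717}

# invert the second mapping: shared concept code -> set B symbol id
CONCEPT_TO_SET_B = {code: sym for sym, code in SET_B_TO_CONCEPT.items()}

def translate(symbol_id, source_map, target_index):
    """Translate a symbol id from one set to another via the shared code.

    Returns None when either set lacks the concept, so a renderer can
    fall back to text rather than guess -- important, because users
    with severe language disabilities may not recover from a wrong
    symbol.
    """
    concept = source_map.get(symbol_id)
    return target_index.get(concept)

print(translate("a-101", SET_A_TO_CONCEPT, CONCEPT_TO_SET_B))  # b-7
print(translate("a-999", SET_A_TO_CONCEPT, CONCEPT_TO_SET_B))  # None: no guessing
```

Returning None instead of a best guess reflects the caveat above: an incorrect symbol is worse than no symbol for these users.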



In another use case, a user has dyscalculia and difficulty understanding numbers. They cannot understand websites that use numbers to convey information, so this numeric information must be provided in an alternative format that the user can understand. For example: you want to get the latest weather report for your city and go to the http://www.weather.com/ website. For today's forecast, it shows a high of 95℉ and a low of 40℉, which is not helpful for this particular user. Allowing this numeric information to be presented instead as an image, symbol, or text would benefit the user (e.g. instead of 95℉, a picture of someone wearing shorts and a tee-shirt with the sun above, or simply a text alternative of "very warm"; and instead of 40℉, a picture of someone wearing a jacket and pants, or a text alternative of "very cold").

It is important to note that people with dyscalculia are often very good with words, so long text can be better than short numbers.
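The weather example above amounts to substituting a verbal description for a number. A minimal sketch, with thresholds and wordings invented for illustration (a real adaptation would draw on author-supplied alternatives, not hard-coded bands):

```python
# Sketch of replacing numeric content with a text alternative for a
# user who opts out of numbers. Thresholds and wordings are assumed
# for illustration only.

def describe_temperature(fahrenheit):
    """Map a temperature in Fahrenheit to a short verbal description."""
    if fahrenheit >= 85:
        return "very warm"
    if fahrenheit >= 65:
        return "warm"
    if fahrenheit >= 50:
        return "cool"
    return "very cold"

for temp in (95, 40):
    print(f"{temp}\u2109 -> {describe_temperature(temp)}")
# 95℉ -> very warm
# 40℉ -> very cold
```

The same substitution could equally produce a symbol or image reference instead of text, per the user's preference.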

Finally, consider someone with autism spectrum disorder and a learning disability. They may be a slow reader but find numbers clear and precise. They may go to the same website and find all the words and images unclear, while the animations cause cognitive overload. They want the same information with more numbers and fewer words.



More examples can be found at https://github.com/w3c/personalization-semantics/wiki/Use-cases. More information on personas and user needs can be found in https://www.w3.org/TR/coga-usable/.



Working examples of how this could be used in practice, with user preferences, are available on the task force wiki: https://github.com/w3c/personalization-semantics/wiki/Implementations-of-Semantics.



All the best

Lisa Seeman

http://il.linkedin.com/in/lisaseeman/, https://twitter.com/SeemanLisa

Received on Thursday, 12 December 2019 15:17:03 UTC