Re: [w3ctag/design-reviews] Personalization Semantics Explainer and  Module 1 (#476)

Thank you very much for your detailed review and feedback. We have restructured the explainer according to your comments (see Explainer-for-Personalization-Semantics <https://github.com/w3c/personalization-semantics/wiki/Explainer-for-Personalization-Semantics>).

Regarding the tools that will consume these semantics, we do have a JavaScript proof-of-concept implementation, which can be seen in this video prepared for TPAC: https://www.w3.org/2020/10/TPAC/apa-personalization.html

We also expect that this technology will be important for educational publishers who use EPUB. Because the information is carried in an HTML attribute, it can be embedded within EPUB documents, where reading systems or assistive technologies can use it to assist with learning. We also expect that third parties will develop custom or general-purpose browser extensions to assist various disability groups, and that AAC (augmentative and alternative communication) software will be updated to take advantage of the additional semantic information. The addition of personalization information can also enhance machine learning, for example by providing alternatives to idioms ("it's raining cats and dogs") or other ambiguous terms. Tools would parse the HTML for the personalization attributes and make the necessary substitutions into the DOM, or into an assistive tool, based on the identified user group or individualized need.
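To make this concrete, here is a minimal sketch of how such a tool might work. It assumes a data-symbol attribute carrying a symbol identifier, in the style of the data- prefixed attributes proposed in the content module; the attribute name, identifier format, and symbol-set URL are illustrative assumptions, not part of the specification.

    // Hypothetical sketch: prepend the user's preferred symbol to any element
    // annotated with a data-symbol attribute. The attribute name, identifier
    // format, and symbol-set URL are assumptions for illustration only.
    function applySymbolPersonalization(symbolSetBaseUrl) {
      document.querySelectorAll('[data-symbol]').forEach(el => {
        const symbolId = el.getAttribute('data-symbol');
        if (!symbolId) return;
        const img = document.createElement('img');
        img.src = `${symbolSetBaseUrl}/${encodeURIComponent(symbolId)}.png`;
        img.alt = '';                    // decorative: the original text remains
        img.classList.add('personalization-symbol');
        el.prepend(img);                 // keep the text; add the symbol beside it
      });
    }

    // Example: run with the symbol set chosen in the user's settings.
    applySymbolPersonalization('https://example.org/symbols/bliss');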

This module can also support a number of user needs identified in COGA's Content Usable (https://www.w3.org/TR/coga-usable/), such as:

   - I need (a version of) the interface to be familiar to me so that I recognize and know what will happen.

   - I need the controls to be consistently positioned on the screen where I expect them to be.

   - I need alternatives to spoken and written language such as icons, symbols, or pictures.

   - I need to sometimes avoid types of content, such as social media, distractions, noises or triggers.

   - I need personalized symbols or pictures that I can recognize immediately because learning new ones takes a long time.

   - I need the symbols and pictures that I know and recognize when I do not know a word.

   - I often need less content without extra options and features because at times I cannot function at all when there is too much cognitive overload.

   - I need symbols to help understand essential content, such as controls and section headings.

   - I need symbols that I understand and are familiar to me; recognizable, commonly used symbols; or personalizable.

   - I need symbols placed above the text to link the meaning of the words with the images.

   - I need simple, consistent content.

   - I need to avoid and recover from mental fatigue.

   - I find the design familiar such that the user interface elements such as menus, buttons and design components as well as elements common to many websites such as help and search are where I expect them to be and do not move unexpectedly.

   - I do not want distractions from my task.

   - If there are distractions, I must be able to easily turn them off.

   - I need to know the context, where I am, what I just did, or what just happened to me after I have lost cognitive focus and then need to come back to the task.

   - I know how to get help or information, such as context-sensitive help or tooltips.

To be honest, we did not thoroughly investigate microformats. We are wary of relying on a specification that does not fall under the auspices of the W3C. While personalization may be a reasonable use case for that technology, relying on it would slow down development of the Personalization specification while we worked to advance microformats to meet our additional, diverse needs. We would also be very interested in hearing @tantek's input on this.

The I18N group raised the same question about the similarities between autocomplete and purpose. While autocomplete can only be used on form fields, the purpose values can be used on other element types. Where there is overlap with the autocomplete values, we have included the definitions from the WCAG 2.1 Input Purposes for User Interface Components reference: https://www.w3.org/TR/WCAG21/#input-purposes. We can update the purpose values section of the content module to make this explicit.
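As an illustrative, non-normative sketch of the overlap, the same WCAG 2.1 input purpose token could appear as an autocomplete value on a form field and, using a data-purpose attribute in the style of the module's data- prefixed attributes, on a non-form element; the attribute name and markup below are assumptions for illustration only.

    <!-- Form field: autocomplete already conveys the purpose. -->
    <label>Mobile phone
      <input name="phone" type="tel" autocomplete="tel">
    </label>

    <!-- Non-form content: autocomplete does not apply, so the same
         WCAG 2.1 input purpose value is expressed with the proposed
         personalization attribute instead (illustrative markup). -->
    <a href="tel:+15551234567" data-purpose="tel">Call our support line</a>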

With regard to your question about distractions: we do understand that advertising constitutes a critical revenue stream for many content providers. However, not all distractions are third-party advertisements; some are content the site itself could allow the user agent to remove.
Further, the purpose of allowing users to hide (or systematically show and sequentially review) on-page advertising is simply to give them the same control other users already have over such content. A user without a disability can ignore the ad and complete the task. A user who cannot ignore it, or conveniently tab past it, is forced to grapple with a stumbling block that prevents them from completing the task.

We believe users will choose to look at advertising because it is informative; it is an important mechanism for learning about options in life. By allowing users to control when and how they see ads, we let them avoid becoming frustrated by processes that prevent task completion, and we let them see advertising as potentially useful information rather than a source of frustration. Surely a frustrated user is unlikely to follow up on the ad that caused the frustration?

With regard to the overlap with ARIA: aria-live covers distractions such as a ticking clock for screen reader users, making the interruptions less invasive, but it does not address the COGA use cases where constant change distracts a person with ADHD (for example) who does not use a screen reader. Whether such content is essential, or whether it can be removed, is not addressed in ARIA.
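As a rough, non-normative sketch of what user-side control could look like, the snippet below assumes a data-distraction attribute in the style of the module's data- prefixed attributes; the attribute name, value names, and preference mechanism are all illustrative assumptions. It hides, rather than deletes, the marked regions, so the user can still choose to review them later.

    // Hypothetical sketch: a browser extension hiding regions an author has
    // marked as distractions. Attribute name, values, and the preference
    // object are assumptions for illustration only.
    const userPrefs = { hideDistractions: ['overlay', 'chat'] };

    document.querySelectorAll('[data-distraction]').forEach(el => {
      const kind = el.getAttribute('data-distraction');
      if (userPrefs.hideDistractions.includes(kind)) {
        el.hidden = true;   // hide, not remove, so the user can reveal it on request
      }
    });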

Please let us know how we can further assist.

Thanking you in advance,

The personalization task force 
