RE: HTML5 default implicit semantics

Correction: I previously wrote "Its property is "slider", its role is "volume adjustment" and its state is "50%"."

I constantly get property and role mixed up; in actual fact, that should have read: Its role is "slider", its property is "volume adjustment", and its state is "50%".



> -----Original Message-----
> From: John Foliot []
> Sent: Friday, November 6, 2015 11:24 AM
> To: 'Steve Lee' <>
> Cc: 'public-cognitive-a11y-tf' <>; 'W3C WAI
> Protocols & Formats' <>
> Subject: RE: HTML5 default implicit semantics
> Steve Lee [] wrote:
> >
> > So ARIA is for describing existing UI elements only? Allowing AT to transform it?
> Not exactly. ARIA is used to communicate the Role, State and Property of
> interactive components to the Accessibility APIs - the screen readers then
> "transform" that information into the format requested by the end user. For
> example, once the information is communicated, most screen reader software
> today can output that information using either a text-to-speech engine or a
> Braille output bar: different modalities (audio versus tactile), same content. (I
> may be splitting hairs here when I cringe at "describing", as it is more
> communicating than describing - for example, I suspect that blind users don't
> care about the color of a slider widget - which would be a description - but
> rather are concerned about its functionality: its property is "slider", its
> role is "volume adjustment" and its state is "50%".)
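> To make that concrete, a custom slider widget might expose its role, name,
> and state to the Accessibility APIs roughly like this (an illustrative
> sketch only - the attribute values are hypothetical):
>
> ```html
> <!-- Illustrative custom slider: the role, the accessible name, and
>      the current value (state) are exposed through ARIA attributes -->
> <div role="slider"
>      aria-label="Volume"
>      aria-valuemin="0"
>      aria-valuemax="100"
>      aria-valuenow="50"
>      tabindex="0">
> </div>
> ```
>
> (A native <input type="range"> would expose the same semantics with no
> ARIA needed at all.)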
> From the W3C website:
> <start>
> Technical Solutions
> More specifically, WAI-ARIA provides a framework for adding attributes to
> identify features for user interaction, how they relate to each other, and their
> current state. WAI-ARIA describes new navigation techniques to mark regions
> and common Web structures as menus, primary content, secondary content,
> banner information, and other types of Web structures. For example, with WAI-
> ARIA, developers can identify regions of pages and enable keyboard users to
> easily move among regions, rather than having to press Tab many times.
> WAI-ARIA also includes technologies to map controls, Ajax live regions, and
> events to accessibility application programming interfaces (APIs), including
> custom controls used for rich Internet applications. WAI-ARIA techniques apply
> to widgets such as buttons, drop-down lists, calendar functions, tree controls
> (for example, expandable menus), and others.
> WAI-ARIA provides Web authors with the following:
>  • Roles to describe the type of widget presented, such as "menu",
>    "treeitem", "slider", and "progressmeter"
>  • Roles to describe the structure of the Web page, such as headings,
>    regions, and tables (grids)
>  • Properties to describe the state widgets are in, such as "checked" for a
>    check box, or "haspopup" for a menu
>  • Properties to define live regions of a page that are likely to get
>    updates (such as stock quotes), as well as an interruption policy for
>    those updates - for example, critical updates may be presented in an
>    alert dialog box, and incidental updates occur within the page
>  • Properties for drag-and-drop that describe drag sources and drop targets
>  • A way to provide keyboard navigation for the Web objects and events,
>    such as those mentioned above
> </end>
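> To illustrate a couple of the bullet points above - a widget role with a
> state property, and a live region for incidental updates (all names and
> values here are hypothetical):
>
> ```html
> <!-- Widget role plus state: a custom check box -->
> <div role="checkbox" aria-checked="false" tabindex="0">
>   Subscribe to updates
> </div>
>
> <!-- Live region: incidental updates (e.g. stock quotes) are
>      announced politely, without interrupting the user -->
> <div aria-live="polite" id="quote">ACME: 101.25</div>
> ```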
> > Hmm. So How will the work By Mozilla, Microsoft and others on
> > javascript interactions with a11y APIs impact that? I'd say it will
> > open it right up to experimental approaches including those that
> > manipulate the DOM (which I think is what you are indicating there is
> > resistance to, at least for built in semantics).
> Actually, no, I don't think the resistance is to DOM manipulation, but rather to UI
> manipulation - it's mostly a visual design thing (as I understand the browser
> vendors' concerns).
> I'll give you an example: HTML5 has the @required attribute which, when
> applied to a form input, allows for some rudimentary testing *by the browser*
> (i.e. if the field is blank, the browser emits the error message, with no
> scripting required). However, we also have aria-required, which communicates
> the same fact to the Accessibility API, but has no impact on the UI. While it may
> seem intuitive to try to more closely align the behavior of the two attributes,
> there is in fact resistance to this idea, focused primarily on visual design
> considerations. Whether this is right or wrong is not the issue; it is what it is.
> (see: for more details and a better
> elaboration on this point)
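> In markup, the contrast between the two attributes looks like this (a
> minimal sketch - the field name is hypothetical):
>
> ```html
> <!-- Native @required: the browser itself blocks submission and
>      shows an error message if the field is left blank -->
> <input type="text" name="email" required>
>
> <!-- aria-required: the same fact is communicated to the
>      Accessibility API, but the browser UI is unaffected; any
>      validation must be scripted by the author -->
> <input type="text" name="email" aria-required="true">
> ```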
> > Scriptability is good in my book. "Embrace the chaos" and make life
> > better for users. :) I guess they are concerned about adding further
> > complexity to the predefined browser behaviour; scripting is someone else's
> problem.
> Yes, I believe this would be grouped under "Web Components", and work in that
> area (and related accessibility concerns) is happening in the Web Platform
> Working Group (the successor to the HTML5 Working Group). Leonie Watson
> and Chaals McCathieNevile are 2 of the Working Group chairs, and I have
> confidence that they are watching this space carefully.
> >
> > > Personally, I still hope to look to more native methods to address
> > > many of
> > these issues, resorting to coga-* attributes as a last resort
> > solution, rather than a first-pass one. My other fear is that by
> > collecting all of our "accessibility solutions" under an ARIA banner,
> > we perpetuate a ghetto-ization of accessibility
> > - a perpetuated "us and them" mentality, rather than just good
> > practice that aids all, irrespective of their individual needs or how
> > they identify (which is how *I* see personalization ultimately playing-out
> BTW).
> >
> > What do you mean by more native? Mechanisms baked into HTML to allow
> > suitable modifications? Bring it on :)
> Sort of. Allow me to illustrate:
> Different groups within the larger "accessibility" space have different needs,
> based upon their disability(ies). When it comes to semantics, I think most of us
> would agree that using <h1> (the "native" construct) is preferable to using <div
> role="heading" aria-level="1">, even though semantically they are equivalent.
> So, broadly speaking, it is preferable to use native semantics over added
> semantics (well, at least to me). ARIA was created to allow authors to
> back-fill or stop-gap the holes in their "Dynamic HTML" that were creating
> interaction problems for screen reader users - not to _replace_ native
> semantic constructs.
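> Side by side, the two equivalent constructs look like this:
>
> ```html
> <!-- Preferred: the native construct -->
> <h1>Page title</h1>
>
> <!-- Semantically equivalent, but only appropriate as a back-fill -->
> <div role="heading" aria-level="1">Page title</div>
> ```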
> Another example of using a "native" construct is related to <video>. Here, deaf
> users will require captions to be able to fully engage with the video content. The
> "native" HTML5 solution for that was to introduce the <track> element, along
> with the @kind attribute, and one of those @kind values was captions (another
> was description - envisioned for, for example, sign language tracks to augment
> the video stream). I don't think I need to remind everyone how captions benefit
> more than just deaf users (the TVs in a sports-bar scenario being a common
> example here). Thus, I posit that it would have been a bad design decision to
> suggest minting an aria-captions attribute, even though using the "all our
> accessibility eggs in one ARIA basket" argument would support doing just that. I
> am suggesting that we think about this for COGA support as well - there are
> many things here that will benefit more than just those clinically defined as
> "cognitively disabled". (Man, I so truly hate labels).
> >
> > > I remain of the opinion that we should be engaging now with other
> > > Working
> > Groups within the W3C (for example, perhaps Web Annotations WG -
> >, sharing our needs and use-cases and
> > working collaboratively with them for more native solutions. But that's just
> me.
> >
> > +1 Given the concerns about adoption we'll need as many people as
> > possible singing from the same hymn sheet (as long as it is pro coga - heh).
> Agreed. All the more reason we engage with them in finding solutions, as
> opposed to showing up with a solution and then somehow trying to figure out
> how to get them to adopt that solution - one that they were never part of
> crafting in the first place. As much as I wish it wasn't so, we simply don't have
> the juice to be making demands on anyone - we need to work collaboratively. It
> is my experience that when you approach engineers with a problem statement,
> and a clearly defined outcome requirement, they can come back with solutions
> that we may never have contemplated, but solutions that scale better in *their*
> environment.
> > Trouble is there will be resistance and delay. So on balance, I think
> > the current approach of getting a clean initial position together makes sense.
> I believe the resistance would be mostly centered on us trying to impose a
> solution. As I spoke with many non-accessibility people at TPAC last week, I was
> struck by the desire from others to fully understand our needs, so that those
> other working groups could make informed decisions. Having well-formed and
> well-articulated problem statements and use cases is critical; having an example
> (perhaps with a proof of concept solution) is beneficial, as long as we remain
> open to alternatives to solving the problem statements, which (I believe) brings
> me full circle - we need to work with the other groups collaboratively so that
> everyone is "happy" with the final solution(s).
> JF

Received on Friday, 6 November 2015 17:41:50 UTC