- From: Patrick H. Lauke <redux@splintered.co.uk>
- Date: Mon, 18 Jul 2016 14:08:21 +0100
- To: w3c-wai-gl@w3.org
On 18/07/2016 13:58, Alastair Campbell wrote:
> +1 to false positive.
>
> Usually you would get non-accessibility effects to duplicate IDs (e.g.
> messing up the visual appearance, or JavaScript errors) unless you’re
> quite careful. The only accessibility-oriented effect I can think of is
> where within-page targets are used.
>
> In terms of a 2.1 change to 4.1.1 Parsing: perhaps a note to the effect
> that it should be tested with the ‘rendered object model when source
> code is dynamically modified by scripts’ or something?

One problem with this is that user agents error-correct broken stuff, and unless you then check the DOM of every user agent for consistency, this doesn't let you spot problems such as misnesting, or trying to generate nonsensical markup using innerHTML in JavaScript, or similar. Not sure how that can be overcome in simple wording, though...

Maybe just add a note to 4.1.1 warning that simply running validation on the source of a webpage, as sent by the server, can often yield false positives (and in the case of sites heavily reliant on client-side rendering, it may even return no results at all), since source validators can't take dynamic changes into account (then give the example of two elements with the same ID, but one of them display:none'd).

I guess it's safe to then say authors should run validation on the actual DOM (some automated tools, such as Tenon, already do this, running a headless UA and working off the DOM), though that will not catch errors such as badly nested / unclosed elements, since those have already been error-corrected by the UA in its work to generate a sane DOM.

P
--
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke
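
A minimal sketch of the kind of case described above, assuming a hypothetical page where a script injects markup via innerHTML (the ids and markup are invented purely for illustration):

<!-- The markup as sent by the server is fine: one element with id="status". -->
<p id="status">Saved</p>
<div id="messages"></div>

<script>
  // A script later injects markup that reuses the same id, with the new copy
  // hidden via display:none. A validator run against the served source never
  // sees this duplicate; it only exists in the rendered DOM.
  document.getElementById('messages').innerHTML =
    '<p id="status" style="display:none">Unsaved changes</p>';

  // A DOM-based check (roughly what a headless-UA tool does) still finds it:
  const ids = [...document.querySelectorAll('[id]')].map(el => el.id);
  const dupes = ids.filter((id, i) => ids.indexOf(id) !== i);
  console.log(dupes); // ["status"]
</script>

The reverse limitation also holds: badly nested or unclosed elements in the served source would already have been error-corrected by the time such a DOM check runs, so it would report nothing for those.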
Received on Monday, 18 July 2016 13:08:41 UTC