Re: [whatwg] Supporting feature tests of untestable features

CSS.supports() should in theory cover all CSS features, and JavaScript can,
as ever, test which APIs are available. Neither guarantees correct
functioning, only presence - besides, all software has bugs, so at what
point do you draw the line between "broken" and "working"?
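Both kinds of presence check look roughly like this (a sketch only; the
specific features tested are illustrative, and neither check says anything
about whether the feature actually works correctly):

```javascript
// Presence checks only -- these report that an API surface exists,
// not that it behaves correctly.

// CSS: does the engine claim to understand a property/value pair?
const hasGrid =
  typeof CSS !== "undefined" &&
  typeof CSS.supports === "function" &&
  CSS.supports("display", "grid");

// JavaScript: does the API object exist at all?
const hasClipboardAPI =
  typeof navigator !== "undefined" && "clipboard" in navigator;

console.log({ hasGrid, hasClipboardAPI });
```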

Things like canvas anti-aliasing are AFAIK implementation details, and
optional in the spec. I don't think it makes sense to make it easy to
detect things that the spec doesn't mandate, or that are implementation
details, because then you could end up with a lot of web content depending
on some particular implementation detail (which happens anyway, but this
would make it worse). It also seems it would be very difficult to spec the
feature detection if the feature itself isn't specified or is an
implementation detail, because in order to specify the detection, the
feature would have to be specified! Take anti-aliasing, which has a long
list of algorithms that could be used: are you going to spec all of them?
Which count and which don't? What if implementations of the same algorithm
vary, e.g. use different rounding, so that in practice the result looks
identical but the pixel data is different? The same applies to bugs and
quirks: to add a feature indicating the presence of a bug or quirk, that
quirk would have to be comprehensively specified... and what if the quirk
varies depending on the environment, e.g. across OS versions?

Bugs and quirks tend to correlate with version ranges of particular
browsers, e.g. one issue may affect Firefox versions 24-28. The user agent
string is a huge mess, but it is possible to sensibly and
forwards-compatibly parse it for this information. It's hard to see any
better way to work around these types of issues.
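A narrow, forward-compatible parse of that kind might look like this (a
sketch; "Firefox 24-28" is just the example range from above, and an
unrecognised UA deliberately reports "not affected"):

```javascript
// Extract the major Firefox version from a UA string, or null if the
// string doesn't match the narrow pattern we expect.
function firefoxVersion(ua) {
  const m = /\bFirefox\/(\d+)/.exec(ua);
  return m ? parseInt(m[1], 10) : null;
}

// Hypothetical workaround gate: is this UA in the affected version range?
// Unknown or future UAs fall through to false, which keeps the check
// forward-compatible (new versions are assumed fixed).
function affectedByExampleBug(ua) {
  const v = firefoxVersion(ua);
  return v !== null && v >= 24 && v <= 28;
}

const ua =
  "Mozilla/5.0 (X11; Linux x86_64; rv:26.0) Gecko/20100101 Firefox/26.0";
console.log(affectedByExampleBug(ua)); // true: 26 is in 24-28
```

The key design choice is that the regex matches only the one token the
workaround cares about, rather than trying to classify the whole UA string.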

Ashley

On 8 April 2015 at 13:59, Kyle Simpson <getify@gmail.com> wrote:

> A lot of the "untestable" bugs have been around for a really, really long
> time, and are probably never going away. In fact, as we all know, as soon
> as a bug is around long enough and in enough browsers and enough people are
> working around that bug, it becomes a permanent "feature" of the web.
>
> So to shrug off the concerns driving this thread as "bugs can be fixed" is
> either disingenuous or (at best) ignorant of the way the web really works.
> Sorry to be so blunt, but it's frustrating that our discussion would be
> derailed by rabbit trail stuff like that. The point is not whether this
> clipboard API has bugs or that canvas API doesn't or whatever.
>
> Just because some examples discussed for illustration purposes are bug
> related doesn't mean that they're all bug related. There **are** untestable
> features, and this is an unhealthy pattern for the growth/maturity of the
> web platform.
>
> For example:
>
> 1. font-smoothing
> 2. canvas anti-aliasing behavior (some of it is feature-testable, but not all of it)
> 3. clamping of timers
> 4. preflight/prefetching/prerendering
> 5. various behaviors with CSS transforms (like when browsers have to
> optimize a scaling/translating behavior and that causes visual artifacts --
> not a bug because they refuse to change it for perf reasons)
> 6. CSS word hyphenation quirks
> 7. ...
>
> The point I'm making is there will always be features the browsers
> implement that won't have a nice clean API namespace or property to check
> for. And for many or all of those, developers would like to determine
> whether the feature is present so they can make different decisions about
> what to serve and how.
>
> Philosophically, you may disagree that devs *should* want to test for such
> things, but that doesn't change the fact that they *do*. And right now,
> they do even worse stuff like parsing UA strings and looking features up in
> huge cached results tables.
>
> Consider just how huge an impact stuff like "caniuse" data is having right
> now, given that its data is being baked into build-process tools like CSS
> preprocessors, JS transpilers, etc. Tens of millions of sites are
> implicitly relying not on real feature tests but on (imperfect) cached test
> data from manual tests, and then inference matching purely through UA
> parsing voodoo.

Received on Wednesday, 8 April 2015 15:35:11 UTC