Re: Defacto tests (Was: Tentative tests)

On Mon, May 29, 2017 at 5:23 PM, Philip Jägenstedt <>

> On Thu, May 25, 2017 at 8:13 AM, Anne van Kesteren <>
> wrote:
> >
> > On Thu, May 25, 2017 at 1:53 AM, Rick Byers <> wrote:
> > > My favorite example is hit-testing. Hit-testing is largely
> > > interoperable already, and it's usually fairly obvious what the
> > > correct behavior is, but it would likely be a huge effort to spec
> > > properly. However there are some special cases, and engines do
> > > occasionally make changes to align between browsers. In those cases
> > > it totally seems worth the effort to capture some of the discussion
> > > and web compat lessons in tests, even if we can't justify the cost
> > > of writing a full hit-testing spec.
> >
> > Why can't we justify that cost? If it's as interoperable as you say it
> > should actually be fairly easy to write down... I'm also pretty sure
> > that because it's not written down we continue to run into issues and
> > have a hard time defining new features that interact with hit testing
> > or mean to adjust it (such as pointer-events). That nobody has taken
> > the time doesn't mean it's not worth it.

If there are real-world issues with interop around hit-testing we should
absolutely use those to increase the priority of writing a spec. I filed this
tracking bug <> in
Chromium, but still have only the single example
<> that led me
to file the bug. Personally I'm most interested in the "this real website
behaves differently in different browsers and there's no agreement on which
one is right" sort of issue, but I suppose "speccing this new feature was
more contentious / time-consuming because hit-testing isn't defined" should
count for something too.

I've been assuming that the only right way to specify hit-testing is to
update essentially every spec that talks about drawing something (i.e. at
least all CSS specs) to also explicitly describe hit-testing.  That's what
I'm saying seems like a huge undertaking for relatively little benefit. But
maybe there's a more creative way we could do this that would be
worthwhile?  E.g. a single spec that defines hit-testing as a relatively
small delta to all the painting specs somehow?  I have a hard time
imagining how this could be very precise, but maybe that's ok?

Anyway I don't have a particularly strong opinion on this - I'm all for
someone giving speccing this a shot. I just think it's complicated and
contentious enough that I don't have anyone on my team who I could
reasonably ask to do it.  But I certainly could ask people to land tests
for some of the edge case behavior of hit-testing.
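For what it's worth, such an edge-case test could be quite small. As a rough sketch (the file layout, element IDs, and the particular edge case are illustrative, not taken from this thread), a web-platform-tests style check that `pointer-events: none` makes an element transparent to hit testing might look like:

```html
<!DOCTYPE html>
<meta charset="utf-8">
<title>Hit-testing: elementFromPoint skips pointer-events:none</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<div id="under" style="position:absolute; left:0; top:0;
     width:100px; height:100px; background:blue"></div>
<div id="over" style="position:absolute; left:0; top:0;
     width:100px; height:100px; pointer-events:none"></div>
<script>
test(() => {
  // The top element opts out of hit testing via pointer-events:none,
  // so the element underneath should be returned.
  assert_equals(document.elementFromPoint(50, 50),
                document.getElementById("under"));
}, "elementFromPoint returns the element under a pointer-events:none layer");
</script>
```

Tests like this capture the agreed-on behavior for one special case without requiring the full painting-order machinery to be specified first.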

> To spell it out, I suppose the concern with adding tests before
> there's a spec is that it would affect the likelihood of a spec ever
> being written? That seems plausible in some cases, as adding another
> defacto test will always be less work than writing a whole new spec.

Yeah we definitely don't want to use tests as a crutch, I agree with that.
I was actually hoping this would be the opposite in practice though. E.g.
rather than everyone saying "oh that's hit-testing which is undefined, just
paper over it because it's way too hard to do anything else" (the status
quo for over a decade), we'd have a foot in the door. We could start
building up a body of interesting tests, and those would cause some
discussion ("why should this test do X but that one does Y?") which would
naturally build to "... that makes sense, but we really should write that
down in a spec somewhere".

> Still, there are probably cases where the options are shared defacto
> tests and no spec, or no shared tests and no spec. If we could
> magically allow for just those cases, I guess that'd be
> uncontroversial?

Received on Friday, 2 June 2017 12:19:25 UTC