Re: Defacto tests (Was: Tentative tests)

On Fri, Jun 2, 2017 at 5:18 AM Rick Byers <rbyers@google.com> wrote:

> On Mon, May 29, 2017 at 5:23 PM, Philip Jägenstedt <foolip@chromium.org>
> wrote:
>
>> On Thu, May 25, 2017 at 8:13 AM, Anne van Kesteren <annevk@annevk.nl>
>> wrote:
>> >
>> > On Thu, May 25, 2017 at 1:53 AM, Rick Byers <rbyers@google.com> wrote:
>> > > My favorite example is hit-testing. Hit-testing is largely
>> > > interoperable already, and it's usually fairly obvious what the
>> > > correct behavior is, but it would likely be a huge effort to spec
>> > > properly. However, there are some special cases, and engines do
>> > > occasionally make changes to align between browsers. In those cases
>> > > it totally seems worth the effort to capture some of the discussion
>> > > and web compat lessons in tests, even if we can't justify the cost
>> > > of writing a full hit-testing spec.
>> >
>> > Why can't we justify that cost? If it's as interoperable as you say,
>> > it should actually be fairly easy to write down... I'm also pretty
>> > sure that because it's not written down, we continue to run into
>> > issues and have a hard time defining new features that interact with
>> > hit-testing or mean to adjust it (such as pointer-events). That nobody
>> > has taken the time doesn't mean it's not worth it.
>>
>
> If there are real-world issues with interop around hit-testing, we should
> absolutely use those to increase the priority of writing a spec. I filed
> this tracking bug
> <https://bugs.chromium.org/p/chromium/issues/detail?id=590296> in
> Chromium, but still have only the single example
> <https://bugs.chromium.org/p/chromium/issues/detail?id=417667> that led
> me to file the bug. Personally, I'm most interested in the "this real
> website behaves differently in different browsers and there's no agreement
> on which one is right" sort of issue, but I suppose "speccing this new
> feature was more contentious / time-consuming because hit-testing isn't
> defined" should count for something too.
>
> I've been assuming that the only right way to specify hit-testing is to
> update essentially every spec that talks about drawing something (i.e. at
> least all CSS specs) to also explicitly describe hit-testing. That's the
> part that seems like a huge undertaking for relatively little benefit. But
> maybe there's a more creative way we could do this that would be
> worthwhile? E.g. a single spec that defines hit-testing as a relatively
> small delta to all the painting specs somehow? I have a hard time
> imagining how this could be very precise, but maybe that's ok?
>
> Anyway, I don't have a particularly strong opinion on this - I'm all for
> someone giving speccing this a shot. I just think it's complicated and
> contentious enough that I don't have anyone on my team who I could
> reasonably ask to do it. But I certainly could ask people to land tests
> for some of the edge-case behavior of hit-testing.
>
>
>> To spell it out, I suppose the concern with adding tests before
>> there's a spec is that it would affect the likelihood of a spec ever
>> being written? That seems plausible in some cases, as adding another
>> defacto test will always be less work than writing a whole new spec.
>>
>
> Yeah, we definitely don't want to use tests as a crutch, I agree with
> that. I was actually hoping this would be the opposite in practice
> though. E.g. rather than everyone saying "oh, that's hit-testing, which
> is undefined, just paper over it because it's way too hard to do anything
> else" (the status quo for over a decade), we'd have a foot in the door.
> We could start building up a body of interesting tests, and those would
> prompt some discussion ("why should this test do X but that one does Y?")
> which would naturally build to "... that makes sense, but we really
> should write that down in a spec somewhere".
>

If we don't allow tests for things that don't yet have a spec, I believe
the most common scenario will be the status quo: each vendor writing
tests in their own repo and still not writing a spec. Realistically, when
a bug is filed and I write a test and change behavior, it's not practical
to expect me to block fixing that one bug on writing a spec if no such
spec exists.

Like Rick, I believe that building up a corpus of tests makes it more
likely a spec will get written eventually. It exposes more of the areas
where we lack interoperability (increasing the visibility of the need for
a spec), actually improves interoperability (making it easier to write a
spec since browsers agree more), and directly makes it easier for a spec
editor to write a high-quality first draft.
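
To make that concrete, here's roughly what a shared hit-testing test
could look like today, without a spec to point at. This is only a sketch
using the standard testharness.js setup from web-platform-tests; the
element, coordinates and expected result are hypothetical, chosen just to
show the shape such a test could take:

  <!DOCTYPE html>
  <!-- Hypothetical sketch, not an actual test in the suite. -->
  <title>Hit testing: absolutely positioned box</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <div id="target" style="position: absolute; left: 10px; top: 10px;
                          width: 100px; height: 100px;"></div>
  <script>
  test(() => {
    // (50, 50) falls inside #target's border box, so hit testing is
    // expected to return #target rather than <body>.
    assert_equals(document.elementFromPoint(50, 50),
                  document.getElementById("target"));
  }, "elementFromPoint returns the positioned box under the point");
  </script>

Each test like this pins down one behavior we already agree on, and the
collection becomes raw material for whoever eventually writes the spec.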


>> Still, there are probably cases where the options are shared defacto
>> tests and no spec, or no shared tests and no spec. If we could
>> magically allow for just those cases, I guess that'd be
>> uncontroversial?
>>
>
>
>

Received on Saturday, 3 June 2017 06:01:35 UTC