"priority" of tests

Good morning,

I'm trying to work out how to prioritize test failures seen with Web Platform Tests.

We've had this discussion in the past, but I'm wondering if anyone on this list has had any inspired discovery or realization that might make things a bit better...

I know for browser vendors this is incredibly challenging. Say we see 100 failures in one test file; currently there is no way for me to know whether those 100 failures are more or less important to the web than a single failure in some other test file. Of course, the priority for Edge cannot be determined by Chrome, so I am not asking for browser vendors to somehow dictate this. I'm wondering instead if there is a way we could have the people who write the tests or the people who write the specs (or both) come to some type of ranking.

I am not sure what this would look like. Perhaps "We've seen this construct actually used on sites" means it's HIGH priority. Or maybe, "No web dev would ever try to pass this invalid value in" means it's LOW priority.
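
To make that concrete, here's a rough sketch of how a harness might order failures once authors had assigned something like HIGH/MEDIUM/LOW to each test. This is purely hypothetical: WPT has no priority metadata today, and the labels, paths, and mapping below are made up for illustration.

# Hypothetical triage: rank failing tests by author-assigned priority
# first, raw failure count second.
from collections import Counter

PRIORITY_RANK = {"HIGH": 2, "MEDIUM": 1, "LOW": 0}

def triage(failures, priorities):
    """failures: list of failing test paths (one entry per failing subtest).
    priorities: dict of test path -> "HIGH" | "MEDIUM" | "LOW",
    as assigned by test or spec authors (hypothetical metadata)."""
    counts = Counter(failures)
    return sorted(
        counts,
        key=lambda path: (PRIORITY_RANK.get(priorities.get(path, "MEDIUM"), 1),
                          counts[path]),
        reverse=True,
    )

# 100 failing subtests in a LOW-priority file vs. one failure in a
# HIGH-priority file: the single HIGH failure comes out on top.
failures = ["css/foo.html"] * 100 + ["dom/bar.html"]
priorities = {"css/foo.html": "LOW", "dom/bar.html": "HIGH"}
print(triage(failures, priorities))  # ['dom/bar.html', 'css/foo.html']

Where the ranking actually comes from (test authors, spec editors, telemetry about real-world usage) is the part I don't have an answer for.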

Maybe people have already had this conversation and I'm not in the loop.

Anyone?

-John

Received on Wednesday, 10 May 2017 17:30:43 UTC