Re: [csswg-drafts] Exposing Implementation Status

When the [Web Platform Docs](https://www.webplatform.org/docs/Main_Page/index.html) project was active, there was work done on importing MDN implementation data. I'm not sure whether it was a one-time import or an ongoing sync via an API, but it involved converting all the MDN tables into JSON, with some effort at clean-up and standardization. The plan was that the WPD data would then be available via an API. The code probably still exists in an abandoned GitHub repo, if someone wants to go looking for it.

The longer-term goal of that project was to integrate full Web Platform Tests data into the support tables, but I don't think work on that got very far.

Also, some general thoughts from issues that came up during that project:

- "support" for a feature is not a very precise term. Different references use different levels of granularity. 
  * CanIUse looks at major features as a whole.  Many CanIUse tables equate to a complete CSS spec, or a large portion of it. The tables usually warn when key functionality is missing or there are major bugs, but they don't look at all the little edge cases and interactions.  There's no easy way to query the data to find a specific sub-feature's support level.
  * MDN looks at individual objects in the language. For CSS, that's mostly individual properties. Differences in support for particular values on a property are noted in sub-tables.  But again, you're not going to have a lot of data about edge cases and bugs.
  * Tree-walking and other ways of detecting whether language objects are recognized by the browser (e.g., whether the parser accepts a CSS property/value pair, or whether a particular object is declared in the JS global environment) won't tell you whether the functionality is implemented correctly and completely (see the first sketch after this list).
  * Spec tests (e.g. WPT) are much more fine-grained, but the data can be much more difficult to interpret. What does an x% failure rate mean for practical developer use? To really be useful, you need to be able to map tests to spec features, identify which tests cover core functionality and which cover edge cases, and then create summary statistics (a rough sketch of that kind of summary follows this list).

- Human-curated data (e.g., CanIUse and MDN) can get out of date, or be incomplete, with no easy way to identify the problems except to have another human being review it carefully.  Depending on who contributed the data, they may have made more or less effort to test edge cases, bugs, or interactions with other features.
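
As a concrete illustration of the tree-walking/recognition point above, here is a minimal sketch (plain browser-side TypeScript, not tied to any particular project's API) of the kind of checks that only tell you a name is recognized, not that the feature behaves correctly or completely:

```ts
// Minimal sketch: "recognition" checks of the kind described above.
// These only confirm that a name is parsed or declared; they say nothing
// about whether the underlying behavior is correct or complete.

// 1. Does the CSS parser accept this property/value pair?
const gridParsed: boolean = CSS.supports("display", "grid");

// 2. Is the property name at least present on the style object?
const gridPropertyKnown: boolean = "gridTemplateColumns" in document.body.style;

// 3. Is a particular object declared in the JS global environment?
const observerDeclared: boolean = typeof IntersectionObserver !== "undefined";

console.log({ gridParsed, gridPropertyKnown, observerDeclared });
// All three can be true in a browser whose grid or observer
// implementation is still buggy or only partially complete.
```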

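And a rough sketch of the "map tests to features, then summarize" idea for WPT-style results. The data shapes and the feature map here are made up for illustration; a real version would need an agreed-upon mapping from test files to spec features:

```ts
// Hypothetical shapes; real WPT result data would need to be adapted.
interface TestResult {
  test: string; // e.g. "/css/css-grid/alignment/grid-align-001.html"
  passed: boolean;
}

// Assumed mapping from test path prefixes to human-meaningful features.
const featureMap: Record<string, string> = {
  "/css/css-grid/alignment/": "Grid: box alignment",
  "/css/css-grid/placement/": "Grid: item placement",
};

// Group per-test results by feature and report a pass ratio per feature.
function summarize(results: TestResult[]): Record<string, string> {
  const totals: Record<string, { pass: number; total: number }> = {};
  for (const r of results) {
    const prefix = Object.keys(featureMap).find((p) => r.test.startsWith(p));
    if (!prefix) continue; // unmapped tests are excluded from the summary
    const feature = featureMap[prefix];
    totals[feature] ??= { pass: 0, total: 0 };
    totals[feature].total += 1;
    if (r.passed) totals[feature].pass += 1;
  }
  const summary: Record<string, string> = {};
  for (const [feature, { pass, total }] of Object.entries(totals)) {
    summary[feature] = `${pass}/${total} (${Math.round((100 * pass) / total)}%)`;
  }
  return summary;
}
```

Even with a summary like that, a pass percentage still doesn't tell you whether the failures are core functionality or obscure edge cases, which is the harder curation problem.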

-- 
GitHub Notification of comment by AmeliaBR
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/1468#issuecomment-304409591 using your GitHub account

Received on Friday, 26 May 2017 23:42:48 UTC