
Re: [csswg-drafts] Exposing Implementation Status

From: Sebastian Zartner via GitHub <sysbot+gh@w3.org>
Date: Sat, 27 May 2017 15:00:10 +0000
To: public-css-archive@w3.org
Message-ID: <issue_comment.created-304457419-1495897207-sysbot+gh@w3.org>
Note that MDN has already started converting its browser compatibility data
into a JSON format. The project is maintained on GitHub:

https://github.com/mdn/browser-compat-data/

The project is still in its early stages, but the goal is to eventually
cover all of the implementation information available on MDN.
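To give a rough idea of what that JSON looks like, here is a small sketch in the general shape of an mdn/browser-compat-data entry; the exact schema is defined in the repository itself, and the property name and version numbers below are made up for the example:

```python
import json

# Illustrative sketch of the shape of an mdn/browser-compat-data entry.
# The property name and versions are hypothetical; the real schema lives
# in the mdn/browser-compat-data repository.
bcd_snippet = json.loads("""
{
  "css": {
    "properties": {
      "example-property": {
        "__compat": {
          "support": {
            "chrome":  { "version_added": "29" },
            "firefox": { "version_added": "20" },
            "safari":  { "version_added": false }
          }
        }
      }
    }
  }
}
""")

def version_added(data, property_name, browser):
    """Look up the version in which a CSS property was added
    (a version string, or false if unsupported)."""
    entry = data["css"]["properties"][property_name]["__compat"]
    return entry["support"][browser]["version_added"]

print(version_added(bcd_snippet, "example-property", "chrome"))  # -> 29
print(version_added(bcd_snippet, "example-property", "safari"))  # -> False
```

Because the data is plain JSON, this kind of lookup is trivial to script, which is exactly what makes it more queryable than the hand-written MDN tables it replaces.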

Sebastian

On 27 May 2017 at 01:42, Amelia Bellamy-Royds <notifications@github.com>
wrote:

> When the Web Platform Docs
> <https://www.webplatform.org/docs/Main_Page/index.html> project was
> active, there was work done on importing MDN implementation data. I'm not
> sure whether it was a one-time import or an API, but I know it involved
> converting all the MDN tables into JSON, with some efforts at clean-up and
> standardization. I know the plan was that the WPD data would be available
> via an API. The code probably still exists in an abandoned GitHub repo, if
> someone wants to go looking for it.
>
> The longer term goal of that project was to integrate full Web Platform
> Tests data into the support tables. But I don't think work on that got very
> far.
>
> Also, some general thoughts from issues that came up during that project:
>
>    -
>
>    "support" for a feature is not a very precise term. Different
>    references use different levels of granularity.
>    - CanIUse looks at major features as a whole. Many CanIUse tables
>       equate to a complete CSS spec, or a large portion of it. The tables usually
>       warn when key functionality is missing or there are major bugs, but they
>       don't look at all the little edge cases and interactions. There's no easy
>       way to query the data to find a specific sub-feature's support level.
>       - MDN looks at individual objects in the language. For CSS, that's
>       mostly individual properties. Differences in support for particular values
>       on a property are noted in sub-tables. But again, you're not going to have
>       a lot of data about edge cases and bugs.
>       - Tree-walking and other ways of identifying whether language
>       objects are recognized in the browser (e.g. whether the parser recognizes a
>       CSS property/value or whether the JS global environment has a particular
>       object declared) won't test whether the functionality is implemented
>       correctly and completely.
>       - Spec tests (e.g. WPT) are much more fine-grained. But the data
>       can be much more difficult to interpret. What does x% test fail mean, for
>       practical developer use? To really be useful, you need to be able to map
>       tests to spec features, and identify which tests are testing core
>       functionality and which are testing edge cases, and then create summary
>       statistics.
>    -
>
>    Human-curated data (e.g., CanIUse and MDN) can get out of date, or be
>    incomplete, with no easy way to identify the problems except to have
>    another human being review it carefully. Depending on who contributed the
>    data, they may have made more or less effort to test edge cases, bugs, or
>    interactions with other features.
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/w3c/csswg-drafts/issues/1468#issuecomment-304409591>,
> or mute the thread
> <https://github.com/notifications/unsubscribe-auth/AA6h31K4jMZvI796sUYVCaNfOtso-NAQks5r92NzgaJpZM4NoIzE>
> .
>
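The quoted point about mapping spec tests to features and producing summary statistics could be sketched roughly as follows; the test file names, feature names, and results are all hypothetical, and a real pipeline would read them from actual test metadata and run results:

```python
from collections import defaultdict

# Hypothetical mapping from individual test files to the spec feature they
# exercise, plus whether each test covers core functionality or an edge case.
test_metadata = {
    "grid/grid-template-001.html": ("grid-template", "core"),
    "grid/grid-template-002.html": ("grid-template", "edge"),
    "grid/grid-gap-001.html":      ("grid-gap",      "core"),
}

# Hypothetical per-test pass/fail results for one browser.
results = {
    "grid/grid-template-001.html": True,
    "grid/grid-template-002.html": False,
    "grid/grid-gap-001.html":      True,
}

def summarize(metadata, results):
    """Aggregate raw pass/fail results into per-feature pass rates,
    split by whether the tests cover core functionality or edge cases."""
    counts = defaultdict(lambda: {"passed": 0, "total": 0})
    for test, (feature, kind) in metadata.items():
        bucket = counts[(feature, kind)]
        bucket["total"] += 1
        bucket["passed"] += int(results.get(test, False))
    return {
        key: bucket["passed"] / bucket["total"]
        for key, bucket in counts.items()
    }

summary = summarize(test_metadata, results)
print(summary[("grid-template", "core")])  # -> 1.0
print(summary[("grid-template", "edge")])  # -> 0.0
```

A developer-facing table would then report "core" and "edge" pass rates separately, which is more actionable than a single raw percentage across all tests.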


-- 
GitHub Notification of comment by SebastianZ
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/1468#issuecomment-304457419 using your GitHub account
Received on Saturday, 27 May 2017 15:00:17 UTC
