
Re: Upcoming Compatibility table update

From: PhistucK <phistuck@gmail.com>
Date: Sat, 13 Sep 2014 23:07:34 +0300
Message-ID: <CABc02_KaXnND=j6KM-Ls2YmkmFTBhWXacnX0=6+EWF9Nf=dFGg@mail.gmail.com>
To: Doug Schepers <schepers@w3.org>
Cc: Renoir Boulanger <renoir@w3.org>, List WebPlatform public <public-webplatform@w3.org>

Are we going to accept user contributions like "Chrome 37 - supported"
(meaning, without tests)?

And my other question still stands (though not urgent at this point, I
still want to understand the approach)... what if tests say a feature
is supported in Chrome 37 and a user says otherwise (or the opposite)?
How will we handle such conflicts?

I think I outlined my proposal at a high level. Are you looking for
something even more concrete? If so, I asked before - where is the code
that deals with writing and reading that file? After seeing the code, or
understanding the process and the design, I can try to come up with a
concrete plan and execute it if it is approved.
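
To make the split/merge idea concrete, here is a minimal sketch of the
build-time merge step. The data-human/<topic>/<feature>.json layout below
is an assumption for illustration; the real layout and file names would
need to match the repository:

```python
# Sketch only: merge small per-feature JSON fragments back into a single
# data-human.json during the build. The <topic>/<feature>.json layout is
# an illustrative assumption, not the repository's actual structure.
import json
import pathlib

def merge_fragments(src_dir, out_file):
    """Combine <topic>/<feature>.json files into one nested structure."""
    combined = {}
    for path in sorted(pathlib.Path(src_dir).glob("*/*.json")):
        topic = path.parent.name    # folder name becomes the topic key
        feature = path.stem         # file name becomes the feature key
        combined.setdefault(topic, {})[feature] = json.loads(path.read_text())
    pathlib.Path(out_file).write_text(
        json.dumps(combined, indent=2, sort_keys=True))
    return combined
```

The split step would simply be the inverse: walk the combined file and
write one small file per feature, so each can be edited in GitHub's web
interface.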


☆*PhistucK*

On Sat, Sep 13, 2014 at 9:37 PM, Doug Schepers <schepers@w3.org> wrote:

> Hi, PhistucK–
>
> In addition to what Renoir said…
>
> On 9/12/14 8:27 AM, PhistucK wrote:
>
>> Looks reasonable.
>> Thank you!
>>
>> Perhaps off topic, but regarding the data itself -
>> 1. Is there a scheduled task that updates the data from the various
>> sources (only MDN at the moment, right?)?
>>
>
> No, there is no scheduled task yet.
>
> We don't plan to retrieve data from MDN again, for a few reasons:
>
> 1) The MDN data is not well-structured for extraction; I think they are
> working on this, but for now, it's not trivial to get the data, and it
> requires a good bit of post-processing;
>
> 2) Once we have the MDN data, and have normalized it, we still have to map
> the naming and category conventions to those of our own pages and site
> structure, which is not 1:1;
>
> 3) There is no regular update of their data, such that we could retrieve
> it on a schedule; this might change in the future, since they are working
> on their compatibility information;
>
> 4) The MDN data is not necessarily accurate; the intent is certainly
> there, but their compatibility results are not based on tests, only on
> the judgment of their contributors; and where contributors may have used
> tests, those tests are not exposed to the reader for verification of
> their quality;
>
> 5) The MDN data is not detailed enough; though they have some subfeatures
> listed, it's not systematic or consistent, and doesn't typically cover edge
> cases or combinations with other features.
>
> The MDN data was an excellent basis for a first start at our compat info,
> and we're glad it was there; we're grateful that they allowed us to use it.
>
> CanIUse.com has also offered the use of their data, which is more
> test-driven (and thus more accurate), is easily extracted, and which does
> have regular updates; however, at a feature level, it is organized more
> broadly than our pages, so the mapping between our page/category hierarchy
> and CanIUse is even more challenging than that of MDN. Still, it might be
> worth the effort, because of the other advantages.
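
[One way that mapping might be expressed is an explicit lookup table from
CanIUse feature IDs to WebPlatform page paths; since the hierarchies are
not 1:1, one feature can map to several pages. Every ID and path below is
an illustrative assumption, not real data:

```python
# Illustrative sketch of a CanIUse-feature-to-page mapping table.
# All feature IDs and page paths here are made-up examples; a real table
# would have to be curated against the actual site hierarchy.
CANIUSE_TO_PAGES = {
    "flexbox": ["css/properties/flex"],
    # A broad CanIUse feature can span several of our pages:
    "transforms2d": ["css/properties/transform",
                     "css/functions/translate"],
}

def pages_for(feature_id):
    """Return the mapped page paths, or [] for features not yet mapped."""
    return CANIUSE_TO_PAGES.get(feature_id, [])
```

Features returning [] would surface as a worklist of mappings still to be
curated.]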
>
> Ideally, we will use the W3C test-suite results data [1]. That also needs
> some normalization and adaption, but the value is high because it's
> extremely detailed and accurate information.
>
>
>> a. If so, are conflicts handled (user-added data via pull requests)?
>> How?
>> b. If not, how is the data kept up to date and synchronized?
>>
>
> These will be issues, but not for MDN data, as mentioned above.
>
>
>> 2. The data-human.json file is huge and, as a result, cannot be edited
>> in GitHub's web interface. Can you split it into folders by topic and
>> files by name (or any other way that creates small files that can be
>> easily edited)? During the build (or whatever it is that processes the
>> data), everything could be combined and then processed.
>>
>
> That seems like a good idea. Do you have a proposal to handle the split
> (and the merge)?
>
>
> [1] https://github.com/w3c/web-platform-tests
>
> Regards-
> -Doug
>
>> ☆*PhistucK*
>>
>> On Fri, Sep 12, 2014 at 4:31 AM, Renoir Boulanger <renoir@w3.org>
>> wrote:
>>
>>     Hi all,
>>
>>     I am about to push an update[1] on our wiki and I thought I’d ask for
>>     text validation.
>>
>>     There’s nothing to see on the wiki or on staging (I have had no time
>>     to set up a staging server yet), but you can look at the screenshots
>>     in the issue [0].
>>
>>
>>
>>     Want to help with the text?
>>
>>     * When no data is found:
>>
>>     [[
>>     There is no data available for topic "%s", feature "%s". If you think
>>     there should be data available, consider <a href="%s">opening an
>>     issue</a>.
>>     ]]
>>
>>
>>     * When data is found, to suggest how to help:
>>
>>     [[
>>     Do you think this data can be improved? You can suggest additions by
>>     <a href="%s">opening an issue</a> or <a href="%s">making a pull
>>     request</a>.
>>     ]]
>>
>>
>>
>>        [0]: https://github.com/webplatform/mediawiki/issues/17
>>        [1]:
>>     https://github.com/webplatform/mediawiki/compare/compatables-update
>>
>>     --
>>     Regards,
>>
>>     Renoir Boulanger  |  Developer operations engineer
>>     W3C  |  Web Platform Project
>>
>>     http://w3.org/people/#renoirb  ✪  https://renoirboulanger.com/  ✪
>>     @renoirb
>>     ~
>>
>>
>>
>
>
Received on Saturday, 13 September 2014 20:08:41 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:21:03 UTC