- From: Gregg Kellogg <gregg@greggkellogg.net>
- Date: Fri, 11 Mar 2022 10:09:05 -0800
- To: Manu Sporny <msporny@digitalbazaar.com>
- Cc: spec-prod@w3.org
Many/most RDF test suites use the Evaluation and Report Language (EARL) format [1] for reporting implementation conformance, which gets rolled up into a report such as the one used for Turtle [2]. It's not surprising that we like our test reporting to be queryable, and we can use that to generate a useful HTML representation. It would be great if that presentation could look like caniuse.

Gregg Kellogg
gregg@greggkellogg.net

[1] https://www.w3.org/TR/EARL10-Schema/
[2] https://dvcs.w3.org/hg/rdf/raw-file/default/rdf-turtle/reports/index.html

> On Mar 10, 2022, at 6:13 AM, Manu Sporny <msporny@digitalbazaar.com> wrote:
>
> On 3/3/22 8:04 PM, Marcos Caceres wrote:
>> What might be interesting is if there is an analogue for data specs?
>> Presumably data specs are also "implemented" and have similar conformance
>> requirements for non-browser software (i.e., accompanying implementation
>> reports that list real software that correctly parses/processes the
>> data).
>
> We're hitting some of these challenges in the Verifiable Credentials and
> Decentralized Identifiers WGs. The ecosystem is getting big enough (40+
> implementers for DIDs, 20+ vendors implementing all things VCs) that the
> market is starting to get confused over what they can safely use (it's a great
> time to be a consultant! -- but that's a terrible solution to the problem).
>
> What we need is a caniuse.com-like dashboard (and we're actively building this
> out... for the VC/DID ecosystem). We'd love to be able to pull that data into
> the VC/DID specs, so please keep us in mind.
>
> We're currently trying to settle on a standard reporting format, because each
> test suite reports results in different ways, and we have no hope of providing
> something like caniuse.com if we don't standardize the data format for the
> report. How did caniuse.com and wpt.fyi address this problem?
>
> So, all that to say -- we want to do the same thing, on a nightly basis, but
> with data models and, separately, HTTP APIs.
> Just showing "the top 4 vendors"
> is not an option for us (there's more competition in our space among vendors
> at present).
>
> The rest are just examples of what we're doing today, and we need to figure
> out a way to get all of this into a unified "dashboard" (like caniuse.com):
>
> The Verifiable Credentials Data Model provides its implementation reports in
> this way:
>
> https://w3c.github.io/vc-test-suite/implementations/
>
> The DID Data Model provides its implementation reports in this way:
>
> https://w3c.github.io/did-test-suite/#spec-statement-summary
>
> We have NxN protocol-driven interop tests that do things like this:
>
> http://vaxcert-interop-reports.s3-website.us-east-2.amazonaws.com/#Polio
>
> No firm ideas on how to solve the problem yet, but we would really appreciate
> some guidance from those who have lived through the pain of this challenge.
>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> News: Digital Bazaar Announces New Case Studies (2021)
> https://www.digitalbazaar.com/
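For readers unfamiliar with EARL, here is a minimal sketch of a single assertion in Turtle, following the EARL 1.0 Schema [1]. The assertor, subject, and test URIs below are hypothetical placeholders, not taken from any actual test suite:

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# One assertion: a (hypothetical) implementation passed one test case.
[] a earl:Assertion ;
   earl:assertedBy <https://example.org/developer> ;        # who ran the test
   earl:subject    <https://example.org/my-parser> ;        # implementation under test
   earl:test       <https://example.org/tests#turtle-001> ; # the test case
   earl:mode       earl:automatic ;
   earl:result [
     a earl:TestResult ;
     earl:outcome earl:passed ;   # or earl:failed, earl:cantTell, earl:inapplicable, earl:untested
     dc:date "2022-03-11"^^xsd:date
   ] .
```

A full report is just a set of such assertions, one per (implementation, test) pair, which is what makes the results queryable (e.g. with SPARQL) and easy to roll up into an HTML summary like the Turtle report at [2].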
Received on Friday, 11 March 2022 18:10:20 UTC