Re: Towards Solid Lite

David

> But I wanted to ask the community, rather than everyone creating their
> own front end applications, which may create corner cases, is there any
> reference Solid client?  It's nice to have something hands on. As well,
> is there a subset, or a way to manage expected fails for the existing
> Karate tests that can be easily run against new implementations?


There is a mechanism in the Karate-based test suite that could help you
run a subset of tests against a Solid Lite implementation. When you use
the Docker image, it comes with a test manifest that runs all available
tests against the Solid Protocol spec. However, you can mount your own
test manifest into the image to change this behaviour, which would allow
you to create a smaller manifest that runs only the tests you are
interested in. Since the harness would still be loading the current
Solid spec, the report would also show you which requirements were left
untested. Alternatively, the manifest could reference a different
version of the spec, as long as the requirements were still annotated
with RDFa so that the Conformance Test Harness can load the spec &
manifest and build the list of tests to run. The report would then only
show results for requirements defined in that version of the spec. A
rough sketch of what this might look like follows the links below.
* https://github.com/solid-contrib/specification-tests/blob/main/README.md#creating-a-script-for-a-ci-workflow
  - how to run tests with your own manifest/spec
* https://github.com/solid-contrib/specification-tests/blob/main/README.md#test-manifest
  - more about the test manifest
* https://github.com/solid-contrib/specification-tests/blob/main/README.md#annotations
  - how to create your own list of requirements in Turtle instead of
  relying on a spec with RDFa annotations; this would allow you to
  extract a subset of requirements for testing even before they are
  gathered into their own spec
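For illustration, a pared-down manifest might look roughly like the
Turtle below. Treat this as a sketch only: the predicate names, the
requirement IRI and the feature file URL are my own placeholders, not
copied from the suite, so check the README sections above for the exact
vocabulary and file layout the harness expects.

  @prefix td:       <http://www.w3.org/2006/03/test-description#> .
  @prefix spec:     <http://www.w3.org/ns/spec#> .
  @prefix manifest: <#> .

  # One test case: it points at a requirement annotated in the spec
  # document the harness loads, and at the Karate feature file that
  # exercises it. Requirements with no entry here are simply reported
  # as untested.
  manifest:get-turtle
      a td:TestCase ;
      td:reviewStatus td:accepted ;
      # hypothetical requirement IRI; it must match an annotated
      # requirement in the spec version you point the harness at
      spec:requirementReference
          <https://solidproject.org/TR/protocol#example-requirement> ;
      # hypothetical feature file location
      spec:testScript
          <https://example.org/solid-lite-tests/get-turtle.feature> .

And if you went the route in the last link above, the requirements
themselves could live in a small standalone Turtle file rather than an
RDFa-annotated spec; again, the term names here are placeholders and
the annotations section of the README is authoritative:

  @prefix spec: <http://www.w3.org/ns/spec#> .

  # A hypothetical "Solid Lite" requirement, extracted from the main
  # spec before any Lite spec document exists.
  <#example-requirement>
      a spec:Requirement ;
      spec:requirementSubject <#Server> ;
      spec:requirementLevel "MUST" ;
      spec:statement "Servers MUST support the Turtle representation." .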

Pete


On Fri, 10 Nov 2023 at 02:59, Jesus Noland <jesusnoland@gmail.com> wrote:

> Would you be willing to share a link to the Python implementation?
>
> On Mon, Nov 6, 2023 at 7:20 AM Melvin Carvalho <melvincarvalho@gmail.com>
> wrote:
>
>>
>>
>> On Mon, 6 Nov 2023 at 15:52, David Mason <vid_w3c@zooid.org> wrote:
>>
>>>
>>> Hi Aron, nice to meet you.
>>>
>>> On Sun, Oct 29, 2023 at 01:07:38AM +0800, Aron Homberg wrote:
>>> >    I was also thinking about implementing the spec myself, aka "If
>>> >    you can build it, you truly understood it"..., but given the
>>> >    size of the spec, it seems like quite a lot of work. Having a
>>> >    defined set of features for an MVP-style Solid server would be
>>> >    much appreciated.
>>> >    The tech set I'm thinking about is Astro + TypeScript + React
>>> >    for the frontend, and the backend implemented with Node.js +
>>> >    TypeScript in a more functional and "serverless" architecture
>>> >    (lambda functions, basically, and as horizontally scalable as
>>> >    possible, even though this is probably not necessary atm.; just
>>> >    as a fancy design goal). The impl. I imagine would be modern,
>>> >    less complex, and able to be deployed & hosted on Vercel,
>>> >    Netlify & co. with a single click (fork on GitHub, deploy via
>>> >    cloud-based CI/CD), and for free (for personal use at least).
>>> >    I think something like a Lite spec + most simple impl. could
>>> >    maybe also attract a wider developer community...
>>> >    However, I'd like to suggest that such a Lite spec should better
>>> >    not derail from the main spec too much but rather just pick the
>>> >    important parts (if that's even feasible), if it is intended to
>>> >    be compatible with existing implementations. "Derailing" would
>>> >    probably create chaos and effectively become a spec fork as soon
>>> >    as the diff is too large. "Lite" implementations would then
>>> >    become non-interoperable with NSS, CSS etc.
>>> >    The test suite is pretty amazing, I must say. If defining the
>>> >    "Lite" subset of the spec would start with marking the necessary
>>> >    paragraphs with a tag and simply providing only the relevant
>>> >    subset of the tests as a "lite" test suite subset, it would be a
>>> >    pretty straightforward and pragmatic approach that, I'm sure,
>>> >    would help developers like me navigate the most important parts.
>>>
>>> I thought Melvin did a pretty good job of condensing it.
>>>
>>
>> Thank you, though it's only a week old and at v0.0.1.
>>
>>
>>>
>>> I am inching toward a back-burner/corner-of-desk implementation, so I
>>> wouldn't expect fast progress. But there might be some useful overlap.
>>> I would take the lead from Melvin's JavaScript implementation as much
>>> as possible.
>>>
>>
>> FWIW I made a full implementation in JS in a day.
>>
>> Someone approached me (not on this list) and did a full implementation
>> in Python over the weekend.
>>
>> He is already building his first Solid app.
>>
>>>
>>>
>>> My focus is very specifically high-level, specification/test driven,
>>> and on adding functionality to a core through interfaces, so that the
>>> results are highly focused, reusable and not bound to any environment
>>> (serverless is a planned target; right now it supports local and
>>> Azure storage, and my current work is re-using test scripts for load
>>> tests).
>>>
>>> In this approach, implementations are bundled with BDD "steppers,"
>>> which can be mapped to specification documents for accessible tests
>>> and functionality. I work for a government, and am trying to create a
>>> way forward that is responsibly transparent, educational even (on the
>>> principle of full and informed consent), and does not bind to any
>>> environment (local, cloud, etc.).
>>>
>>> What is interesting in this approach is that it's well suited to "AI"
>>> team members: a person writes the spec, which gives the AI a basis to
>>> write BDD tests, code, and unit tests, all of which people and AI can
>>> refine in a test-based iterative workflow that results in versioned
>>> specifications, high-level tests, and environment-neutral code with
>>> its own unit tests. It still requires expertise to specify and
>>> evaluate the work, but contemporary AI can be leveraged in a
>>> responsible way that builds out the offering.
>>>
>>> Still, there is a lot to work out in even a Solid Lite approach,
>>> especially strict data definition.
>>>
>>> I don't want to clog the list with side-ideas, so will write you
>>> separately.
>>>
>>
>> It will be more productive to work in other areas until it matures and
>> reaches v0.1.  I'm cautiously optimistic that it can reach v1.0 no
>> later than when Big Solid 1.0 becomes a REC.
>>
>> While it is too early for the vast majority of this list, if there are
>> intrepid implementers who want to work in a constructive way, we can
>> continue the discussion off-list.
>>
>>>
>>>
>>> But I wanted to ask the community, rather than everyone creating
>>> their own front end applications, which may create corner cases, is
>>> there any reference Solid client?  It's nice to have something hands
>>> on. As well, is there a subset, or a way to manage expected fails for
>>> the existing Karate tests that can be easily run against new
>>> implementations?
>>>
>>> David
>>>
>>>
>>>
>
> --
> Jesús Noland
>

-- 
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged, confidential 
and/or proprietary information. If you are not the intended recipient of 
this e-mail (or the person responsible for delivering this document to the 
intended recipient), please do not disseminate, distribute, print or copy 
this e-mail, or any attachment thereto. If you have received this e-mail in 
error, please respond to the individual sending the message, and 
permanently delete the email.

Received on Friday, 10 November 2023 11:14:01 UTC