- From: David Mason <vid_w3c@zooid.org>
- Date: Mon, 6 Nov 2023 09:52:03 -0500
- To: Aron Homberg <aron.homberg@mailbox.org>
- Cc: Melvin Carvalho <melvincarvalho@gmail.com>, Vivien Kraus <vivien@planete-kraus.eu>, public-solid <public-solid@w3.org>
Hi Aron, nice to meet you.

On Sun, Oct 29, 2023 at 01:07:38AM +0800, Aron Homberg wrote:
> I was also thinking about implementing the spec myself, aka "If you can
> build it, you truly understood it"..., but given the size of the spec,
> it seems like quite a lot of work. Having a defined set of features for an
> MVP-style Solid server would be much appreciated.
>
> The tech-set I'm thinking about is Astro + TypeScript + React for the
> frontend, and the backend implemented with Node.js + TypeScript in a more
> functional and "serverless" architecture (lambda functions, basically, and
> as horizontally scalable as possible, even though this is probably not
> necessary atm.; just as a fancy design goal). The implementation I imagine
> would be modern, less complex, and able to be deployed & hosted on Vercel,
> Netlify & co. with a single click (fork on GitHub, deploy via cloud-based
> CI/CD), and for free (for personal use at least).
>
> I think something like a Lite spec + simplest implementation could maybe
> also attract a wider developer community...
>
> However, I'd like to suggest that such a Lite spec should not diverge
> from the main spec too much but rather just pick the important parts
> (if that's even feasible), if it is intended to be compatible with
> existing implementations. "Diverging" would probably create chaos and
> effectively become a spec fork as soon as the diff grows too large. "Lite"
> implementations would then become non-interoperable with NSS, CSS, etc.
>
> The test suite is pretty amazing, I must say. If defining the "Lite"
> subset of the spec started with marking the necessary paragraphs with
> a tag and simply providing only the relevant subset of the tests as a
> "lite" test-suite subset, it would be a pretty straightforward and
> pragmatic approach that, I'm sure, would help developers like me
> navigate the most important parts.

I thought Melvin did a pretty good job of condensing it.

I am inching toward a back-burner, corner-of-the-desk implementation, so I wouldn't expect fast progress, but there might be some useful overlap. I would take the lead from Melvin's JavaScript implementation as much as possible. My focus is very specifically high-level, specification/test-driven development, adding functionality to a core through interfaces, so that the results are highly focused, reusable, and not bound to any environment (serverless is a planned target; right now it supports local and Azure storage, and my current work is re-using the test scripts for load tests). In this approach, implementations are bundled with BDD "steppers," which can be mapped to specification documents for accessible tests and functionality (a sketch follows below).

I work for a government, and am trying to create a way forward that is responsibly transparent, even educational (on the principle of full and informed consent), and does not bind to any environment (local, cloud, etc.).

What is interesting about this approach is that it is well suited to "AI" team members: a person writes the spec, which creates a basis for the AI to write the BDD tests, code, and unit tests, all of which people and AI can refine in a test-based iterative workflow. The result is versioned specifications, high-level tests, and environment-neutral code with its own unit tests. It still requires expertise to specify and evaluate the work, but contemporary AI can be leveraged in a responsible way that builds out the offering.

Still, there is a lot to work out in even a Solid Lite approach, especially strict data definition.
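To make the "steppers" idea concrete, here is a minimal sketch of what one could look like, assuming cucumber-js. The scenario text, the `Storage` interface, and names like `MemoryStorage` are hypothetical illustrations for this email, not part of any existing codebase:

```typescript
// A Gherkin scenario (normally in a .feature file) that a "stepper" backs.
// A tag such as @solid-lite could map it to a paragraph of the spec:
//
//   @solid-lite
//   Scenario: Create a resource
//     Given a storage backend
//     When I PUT "/notes/hello.ttl" with a Turtle body
//     Then the resource exists at "/notes/hello.ttl"

import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

// An interface keeps the stepper environment-neutral: local, Azure, or
// serverless storage implementations can all be plugged in behind it.
interface Storage {
  put(path: string, body: string): Promise<void>;
  get(path: string): Promise<string | undefined>;
}

// Hypothetical in-memory backend for local test runs.
class MemoryStorage implements Storage {
  private store = new Map<string, string>();
  async put(path: string, body: string): Promise<void> {
    this.store.set(path, body);
  }
  async get(path: string): Promise<string | undefined> {
    return this.store.get(path);
  }
}

let storage: Storage;

Given('a storage backend', function () {
  storage = new MemoryStorage();
});

When('I PUT {string} with a Turtle body', async function (path: string) {
  await storage.put(path, '<#it> a <#Note> .');
});

Then('the resource exists at {string}', async function (path: string) {
  assert.notStrictEqual(await storage.get(path), undefined);
});
```

The same feature file then doubles as accessible documentation against the spec, and the stepper can be re-bound to a different `Storage` implementation, which is what makes re-using the test scripts for load tests possible.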
I don't want to clog the list with side-ideas, so I will write to you separately. But I wanted to ask the community: rather than everyone creating their own front-end applications, which may create corner cases, is there any reference Solid client? It's nice to have something hands-on. As well, is there a subset of the existing Karate tests, or a way to manage expected failures for them, that can easily be run against new implementations? (A sketch of what I mean by managing expected failures follows below.)
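To illustrate that last question, here is the kind of mechanism I have in mind. This is not a feature of Karate itself, just a hypothetical wrapper over a suite's results; the `expected-fails.txt` file name and the `TestResult` shape are invented for the example:

```typescript
// Hypothetical: compare a suite's results against a maintained list of
// expected failures, so a new implementation can run the full suite
// and only fail CI on regressions, not on known gaps.
import { readFileSync } from 'node:fs';

interface TestResult {
  name: string; // e.g. a scenario name from the suite's report
  passed: boolean;
}

// expected-fails.txt: one test name per line.
const expectedFails = new Set(
  readFileSync('expected-fails.txt', 'utf8').split('\n').filter(Boolean)
);

export function evaluate(results: TestResult[]): boolean {
  let ok = true;
  for (const r of results) {
    if (!r.passed && !expectedFails.has(r.name)) {
      console.error(`unexpected FAIL: ${r.name}`); // a real regression
      ok = false;
    } else if (r.passed && expectedFails.has(r.name)) {
      // Not fatal, but the list should be pruned so it doesn't go stale.
      console.warn(`unexpected PASS, prune from list: ${r.name}`);
    }
  }
  return ok;
}
```

David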
Received on Monday, 6 November 2023 14:52:09 UTC