Re: Solid Skills - AI skills for building Solid apps

On 2026-03-11 03:06, Jesse Wright wrote:
> The ODI team is also in early stages of putting together skills in 
> https://github.com/solid/solid-llm-skills . I suggest aligning effort 
> and making https://github.com/solid/solid-llm-skills the 'canonical' 
> source for Solid skills.

I think the whole LLM skills initiative, as it currently stands, raises 
significant concerns both for the ecosystem and the CG.

The documentation produced so far by different parties seems subjective, 
providing skills or instructions on how LLMs should support users and 
developers. From reading some of the samples shared in this thread, 
there is a lot of bias baked in.

Anything short of using the specification URLs or copying their contents 
is potentially problematic and noisy. The CG work items and 
specifications, as listed at https://solidproject.org/TR/ , serve as the 
canonical source for the group's output and views. This is not a debate.

Some of these specifications are highly structured, machine-readable 
documents, which should help AI assistants and LLMs greatly by 
extracting the most significant units of information.

These documents do not need to be summarised by anyone with their own 
perspective on what the Solid specifications do or do not offer. Adding 
personal interpretations introduces additional layers of 
misinterpretation on top of the content's inherent complexity. That is 
at least two layers of interpretation before the information even 
reaches a human reader.

What follows is code with built-in biases regarding what 
interoperability in Solid is supposed to be. Anything other than the 
specifications, or the CG's consensus-driven documentation, as the key 
input is bound to create further fragmentation in the ecosystem. There 
is absolutely no need to repeat the requirements, advisements, 
concepts, or anything else from the specifications. They already 
exist. Don't Repeat Yourself.
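To make the point concrete: rather than restating (and re-interpreting) spec content, a skill could simply defer to the canonical sources. The following is a hypothetical sketch, not an existing tool or skill format; the URLs are taken from the TR listing cited above.

```python
# Hypothetical sketch: a skill prompt that cites the canonical
# specifications directly instead of paraphrasing them.

# Canonical sources, per the listing at https://solidproject.org/TR/
CANONICAL_SPECS = {
    "Solid Protocol": "https://solidproject.org/TR/protocol",
    "Web Access Control (WAC)": "https://solidproject.org/TR/wac",
    "Access Control Policy (ACP)": "https://solidproject.org/TR/acp",
}

def build_skill_prompt(specs: dict) -> str:
    """Return an instruction block that points at the specifications,
    leaving interpretation to the reader rather than baking it in."""
    lines = [
        "Answer questions about Solid by consulting the specifications",
        "listed below. Quote or link the relevant section; do not",
        "paraphrase normative requirements.",
        "",
    ]
    lines += [f"- {name}: {url}" for name, url in specs.items()]
    return "\n".join(lines)

print(build_skill_prompt(CANONICAL_SPECS))
```

The point of the sketch is that nothing in it asserts which specification is "active" or "preferred"; it only names the sources.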

Here are some examples to illustrate the main point on bias (and the 
general point holds for other skills publications). I fail to see how 
these serve as objective or useful information.

https://github.com/solid/solid-llm-skills/blob/main/solid/spec.md has a 
table comparing ACP and WAC, which states:

 >Spec status: (ACP) Active development - (WAC) Older, widely deployed

This is at best misleading. ACP is neither in active development nor 
widely supported. If this point is being challenged, we can examine 
public records together. ACP was published on 2022-05-18, nearly four 
years ago, and there has been no substantial implementation feedback 
or advancement in the specification since then. (Do folks really want 
to argue that ACP has been so flawless out in the wild that there was 
nothing to report? Or was information - implementation experience - 
withheld from the CG? Let's revisit that topic another time.) Yet the 
table tries to present ACP as new and shiny and WAC as old.

What exactly constitutes "old" or "new"? What defines active or 
inactive? How are "unused" or "widely deployed" measured? "Older" is 
largely irrelevant. Relatively speaking, WAC has been in "active 
development" and is far simpler to implement, not to mention that it 
has a longer and publicly verifiable track record.

How does this information accurately or fairly characterise the 
situation, other than reflecting what the authors wanted to feed their 
tool? We don't all need to adopt that perspective.

I understand there may be imperfections in the documentation for these 
skills, but that is precisely my point. If LLMs are so smart, maybe we 
don't need to regurgitate what the specifications are actually saying. 
We put a lot of effort into creating those specifications, and we should 
use them more directly.

In other areas, for example with 
https://github.com/solid/solid-llm-skills/blob/main/solid/integration-guide.md 
:

 >You are an expert on integrating [..] You know the [..]

Again, that seems biased in its choice of libraries, because the list 
is incomplete. The recommendations are based on preference, and if 
they are based on other criteria, those criteria should be stated and 
properly supported by evidence. If not, then this cuts into the job of 
developers, which is selecting preferred libraries based on whatever 
criteria they see fit. Why would one need to tell it what it is an 
"expert" on or what it "knows"? Does it need to get into character or 
be hypnotised to work properly?

So, I suggest we take care in how all this is communicated. If it 
represents the CG's work, there should be consensus on how that is 
communicated. If it is a personal or organisational effort, and not 
representative of the CG, perhaps it should be clearly marked as such.

-Sarven
https://csarven.ca/#i

Received on Wednesday, 11 March 2026 14:33:00 UTC