Re: CogAI vs. SPARQL or CogAI + SPARQL/GraphQL?

To decentralise and support human agency, I think part of what's required
is 'something' that can be 'owned' by the individual, like a vault /
private database;

https://devdocs.webizen.org/SocialFactors/SocioEconomicTechReqs/SupportForHumanAgency/

as distinct from other 'commons'.

Large ML models require a large number of machines designed for AI in
order to train,

https://devdocs.webizen.org/SocialFactors/CommercialModels/HardwareStrategy/

Once the models are trained, they're often still too big to operate easily
on a local machine - even if it's a monster (i.e. webizenPro).

It's more likely that there'll be an array of more targeted models with
specific applications; but this is still distinct from the broader
'permissive commons' and, in turn, from the personal AI / DB ('webizen')
that enables human-centric AI,

https://devdocs.webizen.org/SocialFactors/WebScience/SafetyProtocols/HumanCentricAI/

I think the concept of cognitive databases is very interesting...  I'll
follow up once I've done more work (the permissive commons material
requires a lot more work before it makes much sense).

Tim.H.


On Mon, 9 Jan 2023 at 19:50, Dave Raggett <dsr@w3.org> wrote:

> Defining precise queries on databases is demanding, as you need to
> understand the data models and query models. SQL and SPARQL are hard to
> work with and require the software skills associated with the ponytails in
> the IT department.
>
> My experiments with plausible reasoning go beyond the limitations of
> RDBMS and RDF to address imperfect and imprecise knowledge, by mimicking
> human argumentation, something that philosophers have worked on since the
> days of Ancient Greece, and which has been largely ignored by computer
> science. This points to the emergence of cognitive databases and new and
> more flexible business processes.
>
> However, if you are in business and are not a proficient computer
> programmer, then it would be helpful if the computer could serve as a
> cooperative collaborator for problem solving involving a process of
> iterative refinement. You describe what you are seeking to achieve in
> imprecise terms, perhaps using a few examples, and the computer provides
> results and suggestions in return, including asking you questions to
> clarify your needs.
>
> Large language models, including ChatGPT, are intriguing, but shallow in
> their ability to understand and reason. We still have plenty of fundamental
> research to do on memory, reasoning and learning, but already have a rich
> understanding of what’s needed to build cooperative intelligent agents.
>
> — Dave
>
> On 8 Jan 2023, at 21:07, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> Hi Adeel,
>
> Generally, yup, & understood.
>
> Yet the weird idea, which I haven't really got a proper grasp on yet, is
> whether there's an opportunity to define something that sits between
> natural language and SPARQL...
>
> Most don't use or support SPARQL. Generally, interfaces are in GraphQL.
>
> SPARQL is fairly old / mature, yet seemingly less desirable for many.
>
> Which led me to think: is there an opportunity to define an "AI query
> language" or "cognitive agent query language" as a type of upgrade path
> from SPARQL?
>
> More broadly, I think it's important to democratise AI. Thereafter, HTML
> is fairly easy to write; yet I suspect the barriers for many to reach a
> point of competency in managing SPARQL-related systems are fairly high.
> So, who's going to program a person's smart home?
>
> Or be the primary influence on how AI shapes a person's life?
>
> Thereafter, whilst SPARQL likely remains the precise query input/output,
> perhaps there's something in between coding a natural language interface
> and coding to SPARQL.
>
> Something that's got a command structure, yet equally doesn't require the
> precision otherwise necessary for traditional query languages... idk.
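>
> As a purely illustrative sketch of what I mean by that middle ground (the
> command grammar, the FOAF-based template and the example data are all
> invented here, not a proposal), a tool could expand a loose, keyword-style
> command into a precise SPARQL query behind the scenes:

# Hypothetical sketch only: a loose "command" expanded into precise SPARQL.
# The command grammar and the FOAF-based template are illustrative.

SPARQL_TEMPLATE = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE {{
  ?person foaf:knows ?friend ;
          foaf:name ?name .
  ?friend foaf:name "{friend}" .
}}
"""

def expand_command(command: str) -> str:
    """Expand a command like: find people knowing Alice"""
    words = command.split()
    if words[:3] == ["find", "people", "knowing"]:
        return SPARQL_TEMPLATE.format(friend=" ".join(words[3:]))
    raise ValueError("unrecognised command")

print(expand_command("find people knowing Alice"))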
>
> But I hope that illustrates the gist of the idea. (Without seeking to
> claim that it's either refined or necessarily relevant. As noted, idk.)
>
> I guess, in part, chat.openai.com publicly demonstrates an ability to use
> natural language to generate SPARQL queries, ontologies, etc.  It isn't
> precise and often provides code that has problems. Yet it illustrates an
> intuitive query interface that seems to sit at the opposite end of the
> spectrum from SPARQL.
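>
> For example (a rough sketch only - the Turtle file and the query below are
> placeholders I've made up, not something ChatGPT actually produced), a
> generated query still has to be checked and run against real data, e.g.
> with rdflib:

# Sketch: run a (hypothetically AI-generated) SPARQL query with rdflib
# against local RDF data. The file name and query are placeholders.
from rdflib import Graph

g = Graph()
g.parse("people.ttl", format="turtle")   # placeholder dataset

generated_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE { ?person foaf:name ?name . }
"""

for row in g.query(generated_query):
    print(row.name)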
>
> Tim.h.
>
> On Sun, 8 Jan 2023, 5:19 pm Adeel, <aahmad1811@gmail.com> wrote:
>
>> Hello,
>>
>> SPARQL is a semantic query standard for querying RDF. You would use it if
>> your representation is in RDF, so that you have a query plan + inference.
>>
>> CogAI could facilitate natural language to SPARQL, or even SQL (e.g.
>> text2sql). This would be an aspect of natural language generation. You
>> could use deep learning models for this, similar to seq2seq, transformers,
>> etc. An example of an NLG solution would be Arria NLG. In general, NLG
>> requires three core components - text planning, discourse planning, and
>> surface realization. You could refer to Ehud Reiter's publications on NLG.
>> The major component that would facilitate such processing would be the
>> representation embedded in an abstraction in cognitive modeling -
>> whatever form that is defined in, whether that be an ontology, a
>> vectorized one-hot encoding, etc.
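>>
>> As a rough sketch of the deep learning route (the checkpoint name below is
>> a placeholder - any seq2seq / transformer model fine-tuned for
>> text-to-SPARQL could slot in):

# Sketch: natural language -> SPARQL with a seq2seq transformer.
# "example-org/nl-to-sparql" is a placeholder, not a real checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "example-org/nl-to-sparql"   # hypothetical fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "List everyone who knows Alice"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))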
>>
>> Thanks,
>>
>> Adeel
>>
>> On Fri, 6 Jan 2023 at 17:28, Timothy Holborn <timothy.holborn@gmail.com>
>> wrote:
>>
>>> Happy new year all; I hope we're able to make significantly positive
>>> progress during 2023.
>>>
>>> I've been writing documentation for an ecosystem solution. The
>>> documentation is being split into two parts, and it's not really ready yet
>>> - indeed, the more notable area of work, on 'permissive commons
>>> technology', has only just been started.
>>>
>>> So, as I'm going through the process of defining this ecosystem's
>>> related methods, I was led to ponder what the future relationship might be
>>> between a query standard like SPARQL and CogAI.
>>>
>>> I think the easy answer is that 'they're different things', but I wasn't
>>> confident that the conclusion was 100% correct.  So, I thought I'd ask.
>>>
>>> - Might it be the case that CogAI could provide what is, in effect, an
>>> alternative (or major update) to SPARQL?
>>> - Or are they entirely different things that complement each other in a
>>> broader ecosystem?
>>>
>>> Therein - I'm not suggesting that it's feasible to create a useful
>>> system (at least not like the one I'm working to create) that doesn't
>>> support SPARQL (and GraphQL, fwiw); but rather that, when making entirely
>>> new systems (i.e. permissive commons), I wonder to what extent a
>>> SPARQL-like agent is required at all, if the work on CogAI is sufficiently
>>> supported to advance to some sort of future alternative, etc.
>>>
>>> The inference thereafter being: how might CogAI provide a request
>>> language for sending queries across a decentralised network of records
>>> stored in semweb-related formats, etc.?
>>>
>>> Perhaps alternatively also - how are CogAI-related queries performed via
>>> SPARQL-family interfaces, or, for that matter, GraphQL?
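>>>
>>> (For the sake of the question, the conventional baseline I have in mind
>>> is simply posting SPARQL over HTTP to each node's endpoint - the endpoint
>>> URL below is a placeholder for a node in such a network:)

# Sketch of the conventional baseline: send a SPARQL query over HTTP to a
# node's endpoint. The endpoint URL is a placeholder, not a real service.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://node.example.org/sparql")  # placeholder
endpoint.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["name"]["value"])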
>>>
>>> Apologies if my questions are foolish...
>>>
>>> some other notes below - might simply be tl;dr,
>>>
>>> Cheers,
>>>
>>> Timothy Holborn.
>>>
>>> A few other notes...
>>>
>>> _______________________________________________________________________
>>> ChatAI (chat.openai.com)
>>>
>>> I provided ChatAI with:
>>> https://github.com/w3c/cogai/blob/master/demos/decision-tree/rules.chk
>>>
>>> this was its response,
>>>
>>> It appears that this is an example of a chunk file written in the
>>> procedural knowledge representation language used by the CogAI system. This
>>> chunk file appears to contain a series of rules that describe how to make a
>>> decision about whether to play golf based on the weather conditions.
>>>
>>> The first rule states that if the outlook is sunny, the next state to
>>> consider is the humidity. The second and third rules describe how to make a
>>> decision based on the humidity: if the humidity is less than or equal to
>>> 75, the decision is to play golf, but if the humidity is greater than 75,
>>> the decision is not to play.
>>>
>>> The fourth rule states that if the outlook is cloudy, the decision is to
>>> play golf. The fifth rule states that if the outlook is rainy, the next
>>> state to consider is the wind. The final two rules describe how to make a
>>> decision based on the wind: if the wind is true, the decision is not to
>>> play, but if the wind is false, the decision is to play.
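>>>
>>> (For readers unfamiliar with the chunks syntax, the same decision logic
>>> as described above, re-expressed as a plain Python sketch:)

# The golf decision described above, re-expressed as ordinary Python.
def play_golf(outlook: str, humidity: float, windy: bool) -> bool:
    if outlook == "sunny":
        return humidity <= 75      # play only if humidity <= 75
    if outlook == "cloudy":
        return True                # always play
    if outlook == "rainy":
        return not windy           # play only when it is not windy
    raise ValueError("unknown outlook: " + outlook)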
>>>
>>> I then asked it: "Can you write me an example of a chunk file that
>>> describes the procedural knowledge representation for how to make a
>>> decision about whether or not an action breached the UDHR as described by
>>> an ontology"
>>>
>>> and it provided this response,
>>>
>>> udhr {state start} =>
>>>     udhr {state action}
>>>
>>> udhr {state action; action ?a} =>
>>>     udhr {state rights},
>>>     ontology {@query rights violated by ?a}
>>>
>>> udhr {state rights; rights ?r} =>
>>>     udhr {state done},
>>>     console {@do show; value "Action breached UDHR: ?r"}
>>>
>>> udhr {state rights; rights none} =>
>>>     udhr {state done},
>>>     console {@do show; value "Action did not breach UDHR"}
>>>
>>> This chunk file contains four rules. The first rule states that when the
>>> system is in the "start" state, it should move to the "action" state. The
>>> second rule states that when the system is in the "action" state, it should
>>> query the ontology to determine which rights are violated by the given
>>> action. The third rule states that if any rights are returned by the
>>> ontology, the decision is that the action breached the UDHR, and the fourth
>>> rule states that if no rights are returned, the decision is that the action
>>> did not breach the UDHR.
>>>
>>> _______________________________________________________________________
>>>
>>> Noting - it appears that whilst ChatAI knows about RWW / Solid (and is
>>> helpful when working on related software), it doesn't know about CogAI.
>>> Although maybe it's just an area of technology where it has rules that
>>> restrict its usefulness for this type of purpose. idk.
>>>
>>> I'm looking forward to doing more experimentation soon... and to finding
>>> alternatives (whilst seeking to build tooling for decentralised
>>> alternatives).
>>>
>>> FWIW: the website connected to DID:Health was an interesting find,
>>>
>>> https://github.com/w3c/did-spec-registries/commit/347608e6903fd2c9cc867e6949a86f615f1db266
>>>
>>> (FWIW: I'm still not sure whether DIDs will end up being useful for the
>>> ecosystem I've been working on, but that's a tangent...)
>>>
>>> cheers,
>>>
>>> Tim.h.
>>>
>>>
> Dave Raggett <dsr@w3.org>
>
>
>
>

Received on Monday, 9 January 2023 10:03:57 UTC