Re: CogAI vs. sparql or CogAI + Sparql/GraphQL?

Hello,

SPARQL is the W3C standard query language for RDF. You would use it if your
representation is in RDF, so that you get query planning and, with an
entailment regime, inference.
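To make that concrete, here is a minimal, stdlib-only sketch of the triple-pattern matching at the heart of a SPARQL basic graph pattern. This is a toy model for illustration, not a real engine (a deployment would use an actual SPARQL processor such as one behind a standard endpoint), and the ex:/foaf: terms are illustrative assumptions:

```python
# Toy illustration of what a SPARQL basic graph pattern does:
# match a pattern with variables against a set of RDF-style triples.
triples = {
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "foaf:knows", "ex:carol"),
    ("ex:alice", "foaf:name",  '"Alice"'),
}

def match(pattern, triples):
    """Return variable bindings for a single triple pattern.
    Variables are strings starting with '?' (assumed distinct here)."""
    results = []
    for triple in triples:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t       # bind the variable to this term
            elif p != t:
                break                # constant term does not match
        else:
            results.append(binding)  # whole pattern matched
    return results

# Rough equivalent of: SELECT ?who WHERE { ex:alice foaf:knows ?who }
rows = match(("ex:alice", "foaf:knows", "?who"), triples)
# rows -> [{"?who": "ex:bob"}]
```

A real engine adds joins across multiple patterns, filters, and query planning on top of exactly this kind of matching.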

CogAI could facilitate natural-language-to-SPARQL translation, or even
natural-language-to-SQL (e.g. text2sql). This would be an aspect of natural
language generation (NLG). You could use deep learning models for this,
similar to seq2seq, transformers, etc. Arria NLG is one example of a
commercial NLG solution. In general, an NLG pipeline requires three core
components: a text plan, a discourse plan, and surface realization. You
could refer to Ehud Reiter's publications on NLG.
The major component that would facilitate such processing would be the
representation embedded in an abstraction in cognitive modeling - in
whatever form that is defined, whether that be an ontology, vectorized
one-hot encodings, etc.
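To make the text2sql/text-to-SPARQL idea concrete, here is a minimal rule-based sketch. The question templates and the ex:/foaf: vocabulary are illustrative assumptions, not part of any particular system; a seq2seq or transformer model would learn such mappings from data rather than hard-coding them:

```python
import re

# Toy template-based natural-language-to-SPARQL translation: the
# rule-based end of the text2sql family. Each entry pairs a question
# pattern with a SPARQL query template ({{ }} are literal braces
# after str.format).
TEMPLATES = [
    (re.compile(r"who does (\w+) know\??", re.I),
     "SELECT ?who WHERE {{ ex:{0} foaf:knows ?who }}"),
    (re.compile(r"what is (\w+)'s name\??", re.I),
     "SELECT ?name WHERE {{ ex:{0} foaf:name ?name }}"),
]

def to_sparql(question):
    """Map a question to a SPARQL query, or None if no template matches."""
    for pattern, template in TEMPLATES:
        m = pattern.fullmatch(question.strip())
        if m:
            return template.format(m.group(1).lower())
    return None

query = to_sparql("Who does Alice know?")
# query -> 'SELECT ?who WHERE { ex:alice foaf:knows ?who }'
```

A learned model replaces the template table with a trained mapping, but the output contract is the same: text in, executable query out.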

Thanks,

Adeel

On Fri, 6 Jan 2023 at 17:28, Timothy Holborn <timothy.holborn@gmail.com>
wrote:

> Happy new year all; I hope we're able to make significantly positive
> progress during 2023.
>
> I've been writing documentation for an ecosystem solution. The
> documentation is being split into two parts and it's not really ready yet -
> indeed, the more notable area of work, on 'permissive commons technology'
> has only just been started.
>
> So, as I'm going through the process of defining this ecosystem's related
> methods, I was led to ponder what the future relationship might be between
> a query standard like SPARQL and CogAI.
>
> I think the easy answer is that 'they're different things', but I wasn't
> confident that the conclusion was 100% correct.  So, I thought I'd ask.
>
> - Might it be the case that CogAI could provide what is in-effect an
> alternative (or major update) to Sparql?
> - Or are they entirely different things that complement each other in a
> broader ecosystem?
>
> Therein - my intended meaning is not that it is feasible to create a
> useful system (at least not like what I'm working to create) that doesn't
> support SPARQL (and GraphQL, fwiw); but that, when making entirely new
> systems (i.e. permissive commons) I wonder to what extent a SPARQL-like
> agent is required at all, if the work on CogAI is sufficiently supported to
> advance to some sort of future alternative, etc.
>
> Thereafter the inference being; how might CogAI provide a request
> language, for sending queries across a decentralised network of records
> stored in semweb related formats, etc...?
>
> perhaps alternatively also - how are CogAI related queries performed via
> sparql-family interfaces? or for that matter also, graphql?
>
> apologies if my questions are foolish...
>
> some other notes below - might simply be tl;dr,
>
> Cheers,
>
> Timothy Holborn.
>
> Few other notes....
>
> _______________________________________________________________________
> ChatAI (chat.openai.com)
>
> I provided ChatAI:
> https://github.com/w3c/cogai/blob/master/demos/decision-tree/rules.chk
>
> this was its response,
>
> It appears that this is an example of a chunk file written in the
> procedural knowledge representation language used by the CogAI system. This
> chunk file appears to contain a series of rules that describe how to make a
> decision about whether to play golf based on the weather conditions.
>
> The first rule states that if the outlook is sunny, the next state to
> consider is the humidity. The second and third rules describe how to make a
> decision based on the humidity: if the humidity is less than or equal to
> 75, the decision is to play golf, but if the humidity is greater than 75,
> the decision is not to play.
>
> The fourth rule states that if the outlook is cloudy, the decision is to
> play golf. The fifth rule states that if the outlook is rainy, the next
> state to consider is the wind. The final two rules describe how to make a
> decision based on the wind: if the wind is true, the decision is not to
> play, but if the wind is false, the decision is to play.
>
> I then asked it: "Can you write me an example of a chunk file that
> describes the procedural knowledge representation for how to make a
> decision about whether or not an action breached the UDHR as described by
> an ontology"
>
> and it provided this response,
>
> udhr {state start} =>
>     udhr {state action}
>
> udhr {state action; action ?a} =>
>     udhr {state rights},
>     ontology {@query rights violated by ?a}
>
> udhr {state rights; rights ?r} =>
>     udhr {state done},
>     console {@do show; value "Action breached UDHR: ?r"}
>
> udhr {state rights; rights none} =>
>     udhr {state done},
>     console {@do show; value "Action did not breach UDHR"}
>
> This chunk file contains four rules. The first rule states that when the
> system is in the "start" state, it should move to the "action" state. The
> second rule states that when the system is in the "action" state, it should
> query the ontology to determine which rights are violated by the given
> action. The third rule states that if any rights are returned by the
> ontology, the decision is that the action breached the UDHR, and the fourth
> rule states that if no rights are returned, the decision is that the action
> did not breach the UDHR.
>
> _______________________________________________________________________
>
> noting - it appears that, whilst ChatAI knows about rww/solid (and is
> helpful when working on related software), it doesn't know about CogAI.
> Although maybe it's just an area of technology where it has rules that
> restrict its usefulness for this type of purpose. idk.
>
> I'm looking forward to doing more experimentation soon.... and finding
> alternatives (whilst seeking to build tooling for decentralised
> alternatives).
>
> FWIW: the website connected to DID:Health was an interesting find,
>
> https://github.com/w3c/did-spec-registries/commit/347608e6903fd2c9cc867e6949a86f615f1db266
>
> (FWIW: I'm still not sure whether DIDs will end-up being useful for the
> ecosystem I've been working on, but that's a tangent...  )
>
> cheers,
>
> Tim.h.
>
>

Received on Sunday, 8 January 2023 07:19:21 UTC