Re: ChatGPT and ontologies

A friend was recently doing some experiments asking ChatGPT to generate some 
code, with the kind of mixed results you might expect.  I suggested a strategy 
of asking it first to generate test cases, then asking it to generate code.  
This seemed to work, though some of the test cases it offered were blatantly 
wrong, in ways that were obvious to a human reader.
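
For concreteness, here's a rough sketch of how that two-step dialogue might 
be scripted (this assumes the openai Python client's pre-1.0 ChatCompletion 
API, with OPENAI_API_KEY set in the environment; the model name, prompts and 
task are purely illustrative, not a tested recipe):

    import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

    def ask(prompt):
        # One chat-completion round trip; returns the assistant's reply text.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    task = ("a Python function slugify(title) that lowercases a title "
            "and joins its words with hyphens")

    # Step 1: ask for test cases only, then review them by hand -- this is
    # where the blatantly wrong cases have to be caught by a human reader.
    tests = ask("Write pytest test cases for " + task +
                ". Tests only, no implementation.")
    print(tests)

    # Step 2: once the tests have been vetted, ask for code that passes them.
    code = ask("Now write " + task + " so that these tests pass:\n" + tests)
    print(code)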

I'm wondering if this kind of strategy might apply to ontology- and 
data-generation?
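
If it does, one version of the analogy might be: LLM-proposed SHACL shapes 
play the role of the test cases, get human review, and then gate the 
LLM-generated instance data (which also touches on the SHACL/ShEx question 
Dan raises below). A rough sketch, assuming rdflib and pySHACL, with purely 
illustrative ex: terms and hard-coded strings standing in for the model's 
replies:

    from rdflib import Graph
    from pyshacl import validate

    # "Test cases": SHACL shapes proposed by the model in a first turn,
    # then reviewed by a human before being trusted.
    shapes_ttl = """
    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix ex:  <http://example.org/> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:OrganizationShape a sh:NodeShape ;
        sh:targetClass ex:Organization ;
        sh:property [ sh:path ex:name ;
                      sh:datatype xsd:string ;
                      sh:minCount 1 ] .
    """

    # Instance data generated by the model in a later turn.
    data_ttl = """
    @prefix ex: <http://example.org/> .

    ex:acme a ex:Organization .   # no ex:name -- should fail validation
    """

    shapes = Graph().parse(data=shapes_ttl, format="turtle")
    data = Graph().parse(data=data_ttl, format="turtle")

    conforms, _, report = validate(data, shacl_graph=shapes)
    print(conforms)  # False: the vetted shapes catch the faulty generation
    print(report)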

#g


On 11/06/2023 21:40, Dan Brickley wrote:
> On Thu, 9 Feb 2023 at 11:44, Dave Reynolds <dave.e.reynolds@gmail.com> wrote:
>
>     There's already been some discussion here on ChatGPT and the extent to
>     which it can, or can't, do things like generate SPARQL queries and the
>     like; and people may be getting bored of the ChatGPT hype. However, in
>     case of interest, here's some notes on some lightweight playing with it
>     as an aid in writing simple ontologies:
>
>     https://www.epimorphics.com/writing-ontologies-with-chatgpt/
>
>     tl;dr You can generate simple, superficial examples with it but it's of
>     limited use for practical work atm, though tantalisingly close to being
>     useful. Certainly don't trust it to do any inference for you
>     (unsurprising). OTOH getting it to critique a trivial ontology (that it
>     generated) for coverage of a domain was much better - so as an aid to
>     generating checklists of features of a domain to consider during
>     modelling it _might_ be of more use, even as it stands.
>
>
> Thanks for sharing this. I know there is a tendency for people aligned with 
> the Semantic Web to reject these technologies but in my view they bear close 
> scrutiny and are worth very serious attention. This doesn't mean we must like 
> everything about them, or that they're the one road to [whatever]. As a 
> phenomenon, this is an extraordinary turning point.
>
> This makes the ontology-authoring experiment quite interesting, since the 
> ground is shifting under our feet. As a community we have longstanding 
> debates, instincts, styles and differences on the question of how much to pull 
> into an explicit model, versus leave in scruffy text-valued fields (Dublin 
> Core vs FRBR, for example). So alongside using these new tools to help us 
> continue what we were doing before, they also raise questions about whether 
> new modeling habits will arise. The LLMs are better than anything prior at 
> unpacking the intent behind human prose - but at what point do we find they're 
> good enough to actually affect how we model things? Can we make ontologies 
> simpler and easier to use, without letting bias and weirdnesses creep in?
>
> Has anyone been experimenting with fine-tuning in this context? SHACL/ShEx?
>
> > The step in the dialogues that really stands out, though, is when we asked 
> it to critique its own ontology. Its summary of features of organisations that 
> you might want to think about modelling was excellent.
>
> Very much agree on this point. Also wondering whether it could be useful as a 
> way to make the more formal aspects of SW/RDF technology accessible to 
> non-specialists (e.g. proofs, complex rules)...
>
> cheers,
>
> Dan
>
>
>     Dave
>
>
-- 
Graham Klyne
mailto:gk@ninebynine.org
http://www.ninebynine.org
Mastodon: @gklyne@indieweb.social
GitHub/Skype: @gklyne

Received on Wednesday, 14 June 2023 17:05:06 UTC