- From: Dave Reynolds <dave.e.reynolds@gmail.com>
- Date: Sun, 11 Jun 2023 18:56:22 +0100
- To: Michael Schneider <m_schnei@gmx.de>
- Cc: W3C SWIG Mailing-List <semantic-web@w3.org>
Hi Michael,

Thanks for the update. I indeed used a GPT-3.5-era version and hadn't yet tested ChatGPT-4, though it's not really surprising that you found the same results. A shame the Wolfram Alpha plugin didn't help; I'd have been more hopeful of that having an effect.

> Btw, I also tried with Google Bard. When I asked it to "generate an
> ontology to describe organisations...", it provided me with an output
> that looked strikingly familiar to me. At the end, it listed a single
> entry under "Sources" - which happened to be your blog post. So I left
> it at that. :-)

On the one hand that's very amusing :) On the other hand it's a neat example of one of the dangers many people have pointed out: as more content on the web is generated by chat bots, the next iteration of models is in danger of being contaminated by the content from the previous generation of models.

Cheers,
Dave

On 09/06/2023 18:32, Michael Schneider wrote:
> Hi Dave!
>
> On 09.02.2023 12:43, Dave Reynolds wrote:
> > There's already been some discussion here on ChatGPT and the extent to
> > which it can, or can't, do things like generate SPARQL queries and the
> > like; and people may be getting bored of the ChatGPT hype. However, in
> > case of interest, here are some notes on some lightweight playing with it
> > as an aid to writing simple ontologies:
> >
> > https://www.epimorphics.com/writing-ontologies-with-chatgpt/
>
> I'm a little late, but I found your blog post quite interesting and a
> good starting point for my own experiments. Given the date of your post,
> I believe you must have been using a version of ChatGPT that was based
> on GPT-3.5. So I was curious to see whether the current version of
> ChatGPT, based on GPT-4 (using the ChatGPT+ version of 24 May 2023), would
> produce better results than the older version.
>
> In sum, ChatGPT-4 produced pretty much the same output as in your
> experiments when I reapplied some of your original prompts. It generated
> a simple RDFS-style vocabulary, even when explicitly asked to create an
> OWL ontology (modulo the use of some OWL terms). It also created a
> "name" property with domains for different classes. And it was again
> unable to see the semantic side effects of the multiple domains for the
> "name" property. In fact, it insisted that there are none and that it
> was "good practice" to do it the way it did, to "keep the ontology
> clear and avoid any confusion". Here is the full transcript:
>
> https://pastebin.com/qZkzVzNa
>
> Apart from the offline version of ChatGPT-4, I also tried it with the
> Wolfram Alpha plugin enabled, to see whether it would make any attempt
> to use the plugin to apply correct reasoning. It didn't, and there were
> no significant differences from the results of the offline version.
> Here is the transcript:
>
> https://pastebin.com/ULySeZ2g
>
> Btw, I also tried with Google Bard. When I asked it to "generate an
> ontology to describe organisations...", it provided me with an output
> that looked strikingly familiar to me. At the end, it listed a single
> entry under "Sources" - which happened to be your blog post. So I left
> it at that. :-)
>
> Cheers,
> Michael
>
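For anyone skimming the thread, the "multiple domains" issue Michael refers to is the RDFS rule that rdfs:domain statements combine conjunctively: every subject of the property is entailed to be an instance of all of the declared domain classes. A minimal sketch in Turtle (prefix and term names are illustrative, not taken from the transcripts):

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/org#> .

ex:Organisation a rdfs:Class .
ex:Person       a rdfs:Class .

# A single "name" property reused for both classes, declared with two domains.
ex:name a rdf:Property ;
    rdfs:domain ex:Organisation ;
    rdfs:domain ex:Person .

ex:acme ex:name "ACME Ltd" .
# RDFS entailment:  ex:acme a ex:Organisation .   (intended)
# RDFS entailment:  ex:acme a ex:Person .         (the unintended side effect)

In OWL terms the effective domain is the intersection of ex:Organisation and ex:Person, which is rarely what the modeller intended; that is the side effect the models failed to notice.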
Received on Sunday, 11 June 2023 17:56:30 UTC