- From: Dave Raggett <dsr@w3.org>
- Date: Sun, 25 Dec 2022 12:09:26 +0000
- To: Adeel <aahmad1811@gmail.com>
- Cc: Owen Ambur <owen.ambur@verizon.net>, AIKR Community Group W3C <public-aikr@w3.org>, Naval Sarda <nsarda@epicomm.net>, "pradeep.jain@ictect.com" <pradeep.jain@ictect.com>, Gayanthika Udeshani <gayaudeshani@gmail.com>
- Message-Id: <F751C256-9E30-42F6-8354-A13988A22545@w3.org>
Not that dumb: it is able to create short and, for the most part, effective summaries from diverse sources of text, showing that it has constructed higher-level latent semantic models of meaning across a vast range of topics. For humans, distilling the salient points from many documents is hard work. Mimicry is an important part of human intelligence.

> On 25 Dec 2022, at 11:21, Adeel <aahmad1811@gmail.com> wrote:
>
> Hello,
>
> It is basically a dumb mimicry model trained on large amounts of data with a lot of marketing hype.
>
> Thanks,
> Adeel
>
> On Sun, 25 Dec 2022 at 10:16, Dave Raggett <dsr@w3.org> wrote:
>>
>>> On 23 Dec 2022, at 15:54, Owen Ambur <owen.ambur@verizon.net> wrote:
>>>
>>> Apparently, ChatGPT, which calls itself "Assistant"
>>
>> ChatGPT is a version of GPT-3 that has been further trained to provide more meaningful answers.
>>
>> "Training language models to follow instructions with human feedback"
>> https://arxiv.org/pdf/2203.02155.pdf
>>
>> The underlying technology is based upon training a neural network to generate text that is consistent with the prompt. Lower layers of the network deal with syntactic knowledge, whilst upper layers deal with semantic knowledge; see:
>>
>> Interpretable semantic representations from neural language models and computer vision, Steven Derby, 2022
>> https://pure.qub.ac.uk/en/studentTheses/interpretable-semantic-representations-from-neural-language-model
>>
>> Once the system is trained, its knowledge is thereafter fixed. Its working memory is the current activation levels of the artificial neurons in its network. This is all System 1, i.e. intuitive, opaque and lacking the ability to reason introspectively. The next evolutionary step will be to integrate support for System 2 by extending the network architecture to support rules that act on working memory (the latent semantics).
>> Related challenges include the need to overcome catastrophic forgetting, and how to enable continuous learning involving a mix of working memory, short-term memory and long-term memory, mimicking the hippocampus and cortex.
>>
>> Sadly, the media reports of ChatGPT, DALL-E, etc. lack depth and critical understanding of the technical challenges. On the plus side, there are plenty of opportunities for further work, and progress is possible with the modest hardware available to most researchers.
>>
>> Dave Raggett <dsr@w3.org>

Dave Raggett <dsr@w3.org>
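The autoregressive generation described in the thread (a trained network repeatedly extending the prompt with the most likely next token) can be sketched with a toy model. The bigram table and `generate` function below are illustrative assumptions for exposition only; real GPT-style systems use a deep transformer over subword tokens, not a lookup table.

```python
import random

# Toy "language model": conditional next-token probabilities, standing in
# for the learned weights of a neural network. Purely illustrative.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5, seed=0):
    """Autoregressive sampling: each step conditions on the text so far
    and appends one sampled token, mimicking how GPT-style models
    generate text consistent with the prompt."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

Once "training" is done the table is fixed, and the only mutable state during generation is the token sequence itself, which loosely parallels the point above that the deployed model's knowledge is frozen while its working memory is just transient activations.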
Received on Sunday, 25 December 2022 12:09:41 UTC