Re: Today's talk on defeasible reasoning and AGI

> On 8 Feb 2024, at 21:13, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
> 
> Comments/questions
> 
> 1. I know what chain of thought is, but what is type 2?

This is explained on slides 17 and 18.

> 2. Any thoughts on orchestration of all these agents

You may want to expand on what you mean by that.

To be good co-workers with humans, agents will need to be sociable, have a good grasp of theory of mind, and be able to learn and apply behavioural norms for interaction.  I helped lead a workshop on behavioural norms last year as part of Dagstuhl Seminar 23081; see also Dagstuhl Seminar 23151.

> 3. Minor: "More like alchemy than science – but early days yet!" this comment caught my eye.  I assume it was tongue in cheek, but would be intrigued if you were inclined to expand on that. 

Others have said this before me. We still don’t have a deep understanding of how large language models are able to represent and manipulate knowledge and provide the results they do.  The output of a large language model is entirely determined by the output from the encoding block. How can the richness of the semantics for a given response be represented in a single vector?
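
To make that point concrete, here is a minimal, purely illustrative sketch (not from the talk) of how a decoder-style model turns the single hidden-state vector for the last position into a next-token distribution via one linear projection over the vocabulary. The dimensions, names and toy vocabulary are assumptions chosen only for illustration.

# Illustrative sketch: a single hidden-state vector determines the
# model's entire next-token distribution via one linear projection.
# Sizes and the toy vocabulary are made up; real models are far larger.
import numpy as np

d_model = 8           # hidden size (real models: thousands)
vocab_size = 16       # vocabulary size (real models: tens of thousands)

rng = np.random.default_rng(0)
h_last = rng.normal(size=d_model)                    # final hidden state for the last token
W_unembed = rng.normal(size=(d_model, vocab_size))   # unembedding / output projection

logits = h_last @ W_unembed                          # one vector -> a score for every token
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                 # softmax: next-token probabilities

next_token = int(np.argmax(probs))
print(f"predicted token id: {next_token}, p = {probs[next_token]:.3f}")

At every step, everything the model is about to say is funnelled through one such d_model-sized vector, which is what makes the question about where the semantic richness lives so striking.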

Bill Gates chatted with Sam Altman (OpenAI's CEO) in a recent podcast, and they both agreed that a better (mathematical) understanding would enable smaller, more effective models. They didn't go into the details, though.


Dave Raggett <dsr@w3.org>

Received on Friday, 9 February 2024 09:09:47 UTC