- From: Melvin Carvalho <melvincarvalho@gmail.com>
- Date: Fri, 9 Feb 2024 10:42:07 +0100
- To: Dave Raggett <dsr@w3.org>
- Cc: public-cogai <public-cogai@w3.org>
- Message-ID: <CAKaEYhJPFDpaecuRK3R8yFtY21MmWBrovGShG9=RN7H=bi12iQ@mail.gmail.com>
On Fri, 9 Feb 2024 at 10:09, Dave Raggett <dsr@w3.org> wrote:

> On 8 Feb 2024, at 21:13, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
>
> Comments/questions
>
> 1. I know what chain of thought is, but what is type 2?
>
> This is explained on slides 17 and 18.

Ah, you mean Kahneman type-2 rather than chain-of-thought type-2. Got it,
thanks.

> 2. Any thoughts on orchestration of all these agents
>
> You may want to expand on what you mean by that.

Scaling agents to work together requires some kind of cooperation, or
command and control, once you start to deal with multiple agents. I think
that's a common challenge in multi-agent systems? Similar to the demo of
the ants finding food, but at web scale.

> To be good co-workers with humans, agents will need to be sociable, have
> a good grasp of the theory of mind, and the ability to learn and apply
> behavioural norms for interaction. I helped lead a workshop on
> behavioural norms last year at Dagstuhl Seminar 23081, and see also
> Dagstuhl Seminar 23151.
>
> 3. Minor: "More like alchemy than science – but early days yet!" this
> comment caught my eye. I assume it was tongue in cheek, but would be
> intrigued if you were inclined to expand on that.
>
> Others have said this before me. We still don't have a deep understanding
> of how large language models are able to represent and manipulate
> knowledge and provide the results they do. The output of a large language
> model is entirely determined by the output from the encoding block. How
> can the richness of the semantics for a given response be represented in
> a single vector?
>
> Bill Gates chatted with Sam Altman (OpenAI CEO) in a recent podcast, and
> they both agreed that a better (mathematical) understanding would enable
> smaller, more effective models. They didn't talk about the details,
> though.

Makes sense. LLMs are not an exact science.

> Dave Raggett <dsr@w3.org>
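On the orchestration point, the ants demo is an example of stigmergy: agents coordinate with no central controller, only by reading and writing signals in a shared environment. Below is a minimal, hedged sketch of that idea; all names and parameters are illustrative assumptions, not part of any real agent framework or of the demo itself.

```python
import random

# Stigmergic coordination in one dimension: ants shuttle between a nest
# (cell 0) and a food source, guided only by shared "pheromone" levels.
SIZE, FOOD, EVAPORATION = 20, 17, 0.95   # assumed toy parameters
pheromone = [1.0] * SIZE                 # shared trail between nest and food
ants = [0] * 5                           # five agents start at the nest

for step in range(500):
    for i, pos in enumerate(ants):
        if pos == FOOD:
            pheromone[pos] += 10.0       # strong reinforcement at the food
            ants[i] = 0                  # return to the nest and start again
            continue
        # Move probabilistically toward the neighbour with more pheromone.
        left, right = max(pos - 1, 0), min(pos + 1, SIZE - 1)
        ants[i] = random.choices(
            [left, right],
            weights=[pheromone[left], pheromone[right]])[0]
        pheromone[ants[i]] += 0.5        # lay pheromone while walking
    pheromone = [p * EVAPORATION for p in pheromone]  # trails decay over time

# After enough steps the trail tends to concentrate along the nest-to-food
# path, with no agent ever issuing commands to another.
print([round(p, 1) for p in pheromone])
```

The open question in the thread is whether this kind of indirect, environment-mediated coordination can replace explicit command and control at web scale.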
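To make the "single vector" point concrete: at each step, the final block of a decoder-only model emits one hidden vector, and the entire next-token distribution is just a linear map of that vector followed by a softmax. The sketch below illustrates this with GPT-2-small-like dimensions; the sizes are assumptions for illustration and the weights are random, not a trained model.

```python
import numpy as np

d_model, vocab_size = 768, 50257   # assumed, GPT-2-small-like dimensions
h = np.random.randn(d_model)       # the single hidden vector from the last block
W_out = np.random.randn(vocab_size, d_model) * 0.02  # unembedding matrix

logits = W_out @ h                 # one vector -> a score for every token
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # softmax over the vocabulary
print(int(probs.argmax()))         # greedy choice of the next token id
```

Everything the model "means" at that step has to survive the projection through this one d_model-sized vector, which is the puzzle being raised.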
Received on Friday, 9 February 2024 09:42:25 UTC