Re: Cognitive agents as collections of modules

On Tue, 12 Mar 2024 at 04:08, Dave Raggett <dsr@w3.org> wrote:

>
> On 11 Mar 2024, at 16:56, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> https://g.co/gemini/share/f5e773916b42
>
>
> A post on LLM model size:
>
> LLMs these days have hundreds of billions of parameters. There are
> techniques for reducing the computation cost for running the models.
> Pre-training involves truly vast amounts of data. Fine tuning for
> applications is less expensive, involving tens of thousands of examples.
>

yup.

the thought was more about the 'fine-tuning' files / data - whether they
could be stored separately from the underlying models...

noting also, https://time.com/6247678/openai-chatgpt-kenya-workers/

noting, obviously - if there's a way for that data to be kept private (and
still be able to work with the LLM) - perhaps via a Solid pod or some other
location / service - then it would also be nice to have a standard way of
storing that data (and then processing it for the LLM runtime)?

(nb: haven't thought a lot about it... might be a flawed notion, for some
reason)
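
to illustrate the separation (just a rough sketch, not a proposal - the
adapter path and model names below are placeholders): the fine-tuning data
could be a small adapter (e.g. LoRA) kept in a private location such as a
Solid pod, fetched to a local path, and only combined with the shared base
model at runtime:

    # rough sketch: public base model + privately stored fine-tuning adapter
    # assumes the adapter was already fetched from the pod to a local folder
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    BASE_MODEL = "mistralai/Mistral-7B-v0.1"            # shared/public weights
    PRIVATE_ADAPTER = "/private/solid-pod/my-adapter"   # placeholder local copy

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # the adapter folder holds only the small fine-tuning delta
    # (adapter_config.json + adapter weights), kept separate from the base model
    model = PeftModel.from_pretrained(base, PRIVATE_ADAPTER)

    prompt = "Summarise my private notes:"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))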


> LLMs have little in common with human cognition despite being trained to
> mimic our language and art.  The research challenge is how to close the
> gap, enabling smaller systems that can be widely deployed.  I am not sure
> we’re quite ready for the AGI toaster* in Red Dwarf, but there will be lots
> of valuable applications, just as there are lots of humans but few geniuses.
>

bblfish noted this long ago:
https://www.ted.com/talks/thomas_thwaites_how_i_built_a_toaster_from_scratch?language=en

NB / FWIW: https://lmstudio.ai/


> Dave Raggett <dsr@w3.org>
>
> * See: https://www.quotes.net/show-quote/67028 and
> https://www.youtube.com/watch?v=LRq_SAuQDec
>
>
>

Received on Monday, 11 March 2024 18:37:14 UTC