Re: Cognitive agents as collections of modules

> On 11 Mar 2024, at 18:36, Timothy Holborn <timothy.holborn@gmail.com> wrote:
> 
> 
> On Tue, 12 Mar 2024 at 04:08, Dave Raggett <dsr@w3.org <mailto:dsr@w3.org>> wrote:
>> LLMs these days have hundreds of billions of parameters. There are techniques for reducing the computation cost for running the models.  Pre-training involves truly vast amounts of data. Fine tuning for applications is less expensive, involving tens of thousands of examples.
> 
> yup.
> 
> thought was moreover about the 'fine tuning' files / data - whether they could be stored separately from the underlying models...

There is a lot of attention at present on federated deep learning, where training is delegated to devices at the edge of the network without needing to transfer sensitive data to the cloud. Each device trains on its own local data and uploads only the changes to the model’s parameters for aggregation in the cloud.
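
To make that loop concrete, here is a minimal sketch of one round of federated averaging in Python (NumPy only); the linear model, the toy gradient step and the simulated client data are illustrative placeholders rather than any particular framework’s API:

  import numpy as np

  def local_update(global_params, client_data, lr=0.01):
      # One round of local training on a single device, here a toy
      # gradient step for a linear model with squared-error loss.
      # Only the *change* in parameters is returned for upload.
      params = global_params.copy()
      for x, y in client_data:
          grad = 2 * (params @ x - y) * x
          params -= lr * grad
      return params - global_params

  def federated_round(global_params, clients):
      # The server aggregates the uploaded deltas by simple averaging;
      # the raw client data never leaves the devices.
      deltas = [local_update(global_params, data) for data in clients]
      return global_params + np.mean(deltas, axis=0)

  # Three simulated devices, each holding its own private (x, y) pairs.
  rng = np.random.default_rng(0)
  clients = [[(rng.normal(size=4), rng.normal()) for _ in range(20)]
             for _ in range(3)]
  params = np.zeros(4)
  for _ in range(10):
      params = federated_round(params, clients)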

When data is used to train generative AI, there is a risk that the data will be “memorised” and can be recovered with suitably crafted prompts. This necessitates careful data preparation to minimise the risk of inadvertently leaking sensitive information. This very much applies to federated deep learning!

A more robust approach is to keep sensitive data entirely separate from the AI model and to use retrieval augmented generation (RAG), where the user’s prompt is used to search a database and the results are used to construct a context prompt that is prepended to the user’s prompt before feeding it to the generative AI. In this approach, sensitive information is only used by the AI at run time, and only for the user it relates to. A further level of security may be possible if the AI model can be distilled down to a size that can run at the edge.
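
As a rough illustration of that flow, here is a self-contained Python sketch; the word-overlap retriever and the call_llm() stub are stand-ins for whatever embedding model, vector store and generative model a real deployment would use:

  def retrieve(documents, query, top_k=3):
      # Toy retriever: rank documents by naive word overlap with the query.
      q_words = set(query.lower().split())
      scored = sorted(documents,
                      key=lambda d: len(q_words & set(d.lower().split())),
                      reverse=True)
      return scored[:top_k]

  def call_llm(prompt):
      # Placeholder for the generative model endpoint.
      return f"[LLM response to a prompt of {len(prompt)} characters]"

  def rag_answer(documents, user_prompt):
      # 1. Search the separately held database with the user's prompt.
      passages = retrieve(documents, user_prompt)
      # 2. Construct a context prompt and prepend it to the user's prompt.
      context = "\n".join(passages)
      full_prompt = (f"Answer using only this context:\n{context}\n\n"
                     f"Question: {user_prompt}")
      # 3. The sensitive data is used only at run time and never becomes
      #    part of the model's weights.
      return call_llm(full_prompt)

  private_notes = ["Alice's contract renews in June.",
                   "Bob's salary review is in April."]
  print(rag_answer(private_notes, "When does Alice's contract renew?"))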

Sensitive, i.e. confidential, information is just one challenge. Another is bias, as witnessed by Google’s debacle with its latest image generator; see:

 https://time.com/6835975/google-gemini-backlash-bias/

Generative AI remains prone to problems with bias, distractions and hallucinations, along with being weak on logical reasoning and semantic consistency. These problems require a radical rethink that will take considerable time to work through, as there is so much we have yet to learn about cognition and learning.

> 
> noting also, https://time.com/6247678/openai-chatgpt-kenya-workers/ 
> 
> noting, obviously - if there's a way for that data to be kept private (and still then, be made able to work with the LLM model) - perhaps, via a solid pod or some other location / service...  then also, would be nice to have a standard way of storing that data... (then processing it for the LLM runtime)?  
> 
> (nb: haven't thought a lot about it... might be a flawed notion, for some reason)

Dave Raggett <dsr@w3.org>

Received on Tuesday, 12 March 2024 09:05:23 UTC