Re: Different kinds of memory

https://github.com/SynaLinks/HybridAGI

From:

https://www.linkedin.com/posts/year-of-the-graph_knowledgegraph-ai-llm-activity-7216756297533186049-vX2e




On Fri, 12 July 2024, 12:58 am Timothy Holborn, <timothy.holborn@gmail.com>
wrote:

> perhaps check out https://cohere.com/  &  https://lmql.ai/
>
> IMHO there are some bigger issues, at what TimBL would call the
> "social" layers, which in turn tie back to nuanced considerations
> related to the question....
>
> NB:
> https://docs.google.com/spreadsheets/d/1jDLieMm-KroKY6nKv40amukfFGAGaQU8tFfZBM7iF_U/edit?usp=drivesdk
>
> I'll add more to it over the next few days, then perhaps flag it with
> you...  The langchain stuff is useful, but it's still overall complicated
> to set up. Some of the 'agents' examples are useful too, but the approach
> is different from my historical approach, which I haven't been able to
> advance very well...  I think the desire is for a pervasive
> surveillance ecosystem, tracking every keystroke, then censoring the bad
> stuff certain 'castes' of society engage in doing: harming others,
> whilst benefiting from doing so...  so, bit depressed atm.
>
> I've been a bit miserable...  here's a music playlist:
> https://www.youtube.com/watch?v=NucJk8TxyRg&list=PLCbmz0VSZ_vponyiYMLdoJ_gGmA-6iwG_
>
>
> I've been doing some work in the area, but I need an LLM machine,
> so I'm waiting on that really...  Thinking I might change my life and
> focus on art creation or something that leads to income; I've done a lot
> for human-rights support, anyhow... imho, it hasn't worked out; and I
> don't want to go into it now.
>
> imho, one of the purposes of DIDs &
> https://docs.google.com/document/d/1Fwx3-YYyKgeigaScoMVoTFc3V2p-0jVwOg0IvMr8TZs/edit#heading=h.9mam9vryntlt
> (note the use of HDF5 containers), amongst other things, was to
> decentralise commons infrastructure from a technical perspective, using
> various different DLTs (depending on the characteristics needed,
> different protocols suit; also, nothing was 'standard' like http,
> certainly not then, and blockchains can be centralised in ways different
> to, say, CDNs...).  Therein, a 'commons' could be merely between two
> people (ie: the lifecycle of a relationship) or far broader (ie: laws in
> jurisdictions); so the software agent for the natural agent(s) needs to
> take into account the n-dimensionality of the status of knowledge of the
> natural agents involved as observers, temporally, in experiences.
>
> I probably haven't been as clear as I could have been. The w3c work
> was thought of as getting royalty-free, patent-pool-supported
> 'thoughtware' tooling components, to ensure people could own the software
> prosthetic of self, rather than companies, governments, or whoever else
> wants to have their hooks into it, as if it's a new form of slavery
> that'll help them make money long before anyone knows what to do about
> it; at which point they'd be unlikely to be penalised, which has overall
> been shown to be true.  I've made attempts to produce some basic initial
> tooling, as a web-extension, basically, to support social-web
> foundations, but am finding it too hard...  but that could be a way
> forward, perhaps with the Solid CG or indeed also the RWW CG; yet, it
> seems to be entirely discouraged...
>
> so, given that's seemingly the case, I've been looking at what exists
> and how it works.
>
> LLMs don't appear to understand time, so I've been doing some
> experiments using characters from films, as LLMs know much more about
> the worlds described by films / TV, whether it be Contagion or Star
> Trek. This enables a means to, in effect, engage a sort of 'pointed
> graph' within the LLM with relatively short prompts; understanding that
> the models have been defined in a way that seeks to ensure their makers
> aren't sued for copyright infringement, etc., which means outcomes to
> prompts might look like something out of the script of media like
> Contagion, but with enough differences to make it look 'new'...
> Thereafter, the need to do more research on local systems so as to get a
> better grasp of the science of it.
>
> I've tried prompting systems using RDF with directives. Sometimes it
> works, sometimes not; seemingly, they prefer JSON. I can provide the
> outputs if desired..
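For comparison, here is one way the same single fact can be serialised as RDF (Turtle) versus the JSON that models seem happier with. All identifiers below are illustrative, not taken from any actual prompting experiment:

```python
import json

# One fact, two serialisations (illustrative names and URIs only).
turtle = (
    "@prefix schema: <https://schema.org/> .\n"
    "<https://example.org/alice> schema:knows <https://example.org/bob> ."
)

fact = {"subject": "alice", "predicate": "knows", "object": "bob"}
as_json = json.dumps(fact)

# Either string can be pasted into a prompt as grounding data; the
# Turtle form keeps globally unique, dereferenceable identifiers,
# while the JSON form relies on whatever schema the prompt implies.
```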
>
> but, in a thin-client world, where people are defined via a shared
> private key, in effect helping to pay for compute by purchasing the
> machinery needed to contribute towards the systems then used by others;
>
> https://www.youtube.com/watch?v=qZiThp3CTyw
>
> towards a world where 'AI takes all the jobs', requiring these people to
> look forward to universal basic income, as the definition of work
> changes, or at least the terms associated with the notion...
>
> What are the political requirements for 'memory'?
>
> When considering the social factors, there's a lot that the general
> public (the people who vote, as distinct from other cohorts) are
> expected to forget. If standards are sought to be defined in this area,
> what would their characteristics be?  The natural considerations about
> consciousness don't appear to be treated very respectfully; the focus
> seems to be on social influences: a limitless volume of parallel
> universes, computationally engineered to be applied upon people to live
> in; with the opportunity, perhaps, for people to make their own
> artificial realms that might be happier than those offered by others for
> profit, power or immunity to the consequences of wrongs...  which is
> hard to consider a new problem:
> https://www.youtube.com/watch?v=UkjyCPuTKPw
>
> in any case;
>
> From 2016:
> https://docs.google.com/presentation/d/1RzczQPfygLuowu-WPvaYyKQB0PsSF2COKldj1mjktTs/edit?usp=sharing
>
> video is: https://www.youtube.com/watch?v=k61nJkx5aDQ
>
> Numenta https://github.com/numenta/htmresearch-old  has been doing what
> I consider to be great work in the area, re: sparsity.
>
> But I don't see how it can work in a dishonest environment, where the
> spatio-temporal n-dimensional identifiers are aggregated and relabelled
> by IP harvesters.
>
> Bit more complex than the characterisation of the problems associated
> with httpRange-14 or Cool URIs...
>
> therefore; in consideration,
>
> Rather than 'digital twins' or similar, the functions that appear
> desirable are for 'artificial colleagues': effectively, software-defined
> 'robots' that have different functions, whether it be the community DJ,
> a researcher, or a financial / administrative assistant, etc.  Therein,
> the process is similar to HR: defining the characteristics and qualities
> of these 'colleagues', their access privileges, etc.
>
> older example is:
> https://docs.google.com/spreadsheets/d/1VixKXjZL31bZRXQS9J1FmvPyDzdkgE8B2-3fzPmRYNc/edit?usp=sharing
>
>
> but that list was produced to try to get people to think about the
> characteristics of the 'AI assistant' / AI agents that they're
> developing; whereas, more recently, I think telling an LLM to go into
> 'Monty Python' mode works. Similarly Boston Legal, or other examples
> that have a lot of information in the existing models (far more than can
> easily be provided by a prompt)...  and perhaps also more direct?
> Perhaps that's part of how the scenario-response frameworks actually
> function, as noted earlier.
>
> but what's likely to happen is that the means to define personal
> assistants for VIPs / PEPs, etc. will end up requiring access to their
> diaries, health information, etc.  But perhaps then it'll be easier to
> understand the importance of broader ecosystem works that define natural
> agents in terms broader than shared private / public keys in a
> wallet...  idk...  also, the question of whose asset it is - the asset
> rather than the principal?
>
> The commodification methodologies are highly evolved; the alternatives,
> not so much. Not sure if there's much interest; indeed, it seems as
> though there isn't really...  at least, not at the moment.
>
> The other aspect was, in langchain-like methods, to have another bot, a
> supervisor bot, that checks the output of the worker bot's process, so
> as to instigate corrections where required.
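That supervisor pattern can be sketched with stand-in functions. Neither function below is a real LLM call, and the validation check is a placeholder; this only illustrates the retry loop:

```python
# Supervisor / worker sketch: a second "bot" checks the first bot's
# output and requests a retry with feedback when validation fails.

def worker(task, feedback=None):
    # placeholder for an LLM call: the first draft is sloppy,
    # the retry (guided by feedback) is well-formed
    return {"task": task, "answer": "forty-two" if feedback is None else "42"}

def supervisor(result):
    # placeholder validation; a real supervisor might check format,
    # facts, or policy compliance
    return result["answer"].isdigit()

def run(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        result = worker(task, feedback)
        if supervisor(result):
            return result
        feedback = "answer must be numeric; try again"
    raise RuntimeError("no acceptable answer after retries")
```

Here the correction signal is just a fixed string; in a langchain-style setup the supervisor's critique would itself be generated and fed back into the worker's prompt.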
>
> So, overall,
>
> https://github.com/Mintplex-Labs/anything-llm
>
> https://medium.com/openlink-software-blog/introducing-the-openlink-personal-assistant-e74a76eb2bed
>
>
> https://community.openlinksw.com/t/llamaindex-based-retrieval-augmented-generation-rag-using-a-virtuoso-backend-via-sparql/4117
>
>
> and, I'll update the spreadsheet provided above with the other links I've
> got, but haven't put into a public resource somewhere yet...
>
> Yet, I hope to learn more about how these sorts of things fit into the
> generation of artificial realms, whether it be generating game-like
> experiences, say, from a book or series of books; or creating linear
> media, again from a book or similar. But I don't think a language
> taxonomy exists across different 'large language model' (LLM) fields to
> standardise the command structures, in effect..
>
> I think it's important to also consider how to ensure people are not
> defined by others without any ability to do anything about it, when the
> characterisation or purpose of any such definitions is wrong, whether in
> association with STEM (ie physics or the life-sciences) or morally,
> otherwise....  and particularly in a world where it's assumed that
> people will be defined only by some app associated with a phone-device
> identifier.
>
> another idea, fwiw, in consideration of the 'social issues', was whether
> these LLMs should understand RDF, and thereby also decentralised
> namespaces, etc...  there's a variety of good technical reasons why this
> might benefit the technology stack, as well as potentially meaningful,
> positive attributes that could act to protect against various forms of
> potential disaster, by decentralising the namespace in ways JSON can't.
>
> but idk.   There's a lot missing from the stack required for what I
> intended to produce re: "human centric" (AI)..  so much stuff that's
> just not free to do...
>
> I hope something in my ramblings helps.
>
> tim.
>
> On Thu, 11 Jul 2024 at 00:16, Dave Raggett <dsr@w3.org> wrote:
>
>> Unfortunately our current AI technology doesn’t support continual
>> learning, limiting large language models to the datasets they were trained
>> with. An LLM trained back in 2023 won’t know what’s happened in 2024, and
>> retraining is very expensive. There are workarounds, e.g. retrieval
>> augmented generation (RAG) where the LLM is prompted using information
>> retrieved from a database that matches the user’s request. However, this
>> mechanism has its limitations.
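As a minimal sketch of the retrieval step, with bag-of-words cosine similarity standing in for a learned embedding model and a formatted string standing in for the actual LLM call (documents and names are illustrative):

```python
import numpy as np

# Toy document store (a real RAG system would use a learned
# embedding model and a vector database).
docs = [
    "The 2024 Olympics were held in Paris.",
    "LLMs are trained on a fixed snapshot of text.",
]

vocabulary = sorted({w for d in docs for w in d.lower().split()})

def embed(text):
    # bag-of-words vector over the store's vocabulary, normalised
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocabulary], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vecs = [embed(d) for d in docs]

def retrieve(query):
    # pick the stored document most similar to the query
    q = embed(query)
    return docs[int(np.argmax([q @ d for d in doc_vecs]))]

def augmented_prompt(query):
    # the retrieved passage is prepended so the model can answer
    # from information that post-dates its training data
    return f"Context: {retrieve(query)}\nQuestion: {query}"
```

The limitation mentioned above shows up in `retrieve`: the answer can only be as good as whatever the similarity search happens to surface.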
>>
>> For the next generation of AI we would like to support continual
>> learning, so that AI systems can remain up to date, and moreover, learn new
>> skills as needed for different applications through a process of
>> observation, instruction and experience. To better understand what’s needed
>> it is worth looking at the different kinds of human memory.
>>
>> Sensory memory is short lived, e.g. the phonological loop is limited to
>> about one to two seconds. This is what allows us to replay in our heads
>> what someone just said to us. Short term memory is said to be up to around
>> 30 seconds with limited capacity. Long term memory is indefinite in
>> duration and capacity. Humans are also good at learning from single
>> observations / episodes. How can all this be realised as artificial neural
>> networks?
>>
>> Generative AI relies on back propagation for gradient descent, but this
>> is slow, as can be seen from the typical learning rate parameters. It
>> certainly won’t be effective for single-shot learning. Moreover, it
>> doesn’t apply to sparse spiking neural networks, which aren’t
>> differentiable.
>> Alternative approaches use local learning rules, e.g. variations on Hebbian
>> learning where the synaptic weights are updated based upon correlations
>> between the neuron’s inputs and output.
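A minimal sketch of such a local rule, using a single linear neuron and a plain Hebbian update (dimensions, learning rate, and iteration count are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(16)
x /= np.linalg.norm(x)               # a fixed input pattern
w = 0.01 * rng.standard_normal(16)   # small random initial weights

# Plain Hebbian rule: the weight change depends only on the
# correlation of pre-synaptic input and post-synaptic output,
# i.e. information local to the synapse -- no backpropagated error.
for _ in range(100):
    y = w @ x                        # post-synaptic activity (linear)
    w = w + 0.1 * y * x              # delta-w proportional to x * y

# after repeated presentation, w aligns with the input pattern
alignment = abs(w @ x) / np.linalg.norm(w)
```

Note that the plain rule is unstable (weights grow without bound); variants such as Oja's rule add a normalising decay term to keep the weights bounded.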
>>
>> One approach to implementing a model of the phonological loop is as a
>> shared vector space where items from a given vocabulary are encoded with
>> their temporal position, which can also be used as a cue for recall.
>> Memory traces fade with time unless reinforced by replay. In essence, this
>> treats memory as a sum over traces where each trace is a circular
>> convolution of the item and its temporal position.  The vectors for
>> temporal positions should be orthogonal.  Trace retrieval will be noisy,
>> but that can be addressed through selecting the strongest matching
>> vocabulary item.  This could be considered in terms of vectors representing
>> a probability distribution over vocabulary items.
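A sketch of that encoding, with circular convolution as the binding operation and circular correlation as its approximate inverse; random high-dimensional vectors serve as the near-orthogonal temporal positions, and the vocabulary and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # high dimensionality keeps random vectors nearly orthogonal

def rand_vec():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def cconv(a, b):
    # circular convolution: binds an item to its temporal position
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # circular correlation: approximate unbinding, used for recall
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

sequence = ["the", "cat", "sat"]
vocab = {w: rand_vec() for w in sequence}
positions = [rand_vec() for _ in sequence]

# memory is the sum over traces: item bound to its temporal position
memory = sum(cconv(vocab[w], p) for w, p in zip(sequence, positions))

def recall(pos):
    noisy = ccorr(pos, memory)                         # cue with position
    return max(vocab, key=lambda w: vocab[w] @ noisy)  # clean-up step
```

`[recall(p) for p in positions]` reconstructs the sequence; the clean-up step in `recall` implements the "strongest matching vocabulary item" selection described above.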
>>
>> A modified Hebbian learning rule can be used to update the synaptic
>> weights so that the updated weight on each cycle pays more attention to
>> new information than to old information. Over successive
>> cycles, old traces become weaker and harder to recall, unless boosted by
>> replay. This requires a means to generate an orthogonal sequence of
>> temporal position vectors. The sequence would repeat at an interval much
>> longer than the duration of the phonological loop.
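The fading behaviour can be sketched as an exponential moving average over the memory vector (the decay constant and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
D, decay = 512, 0.8  # decay < 1: older traces fade on every cycle

def rand_vec():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

old_trace, new_trace = rand_vec(), rand_vec()
memory = np.zeros(D)

def update(memory, trace):
    # each cycle weights new information over old
    return decay * memory + (1 - decay) * trace

memory = update(memory, old_trace)
memory = update(memory, new_trace)
recent_wins = memory @ new_trace > memory @ old_trace  # old has faded

memory = update(memory, old_trace)  # replay boosts the old trace again
replay_wins = memory @ old_trace > memory @ new_trace
```

In the full scheme each `trace` would itself be an item bound to a temporal position vector, and the position sequence would repeat only over an interval much longer than the loop's duration.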
>>
>> The next challenge is to generalise this to short and long term memory
>> stores. A key difference to the phonological loop is that we can remember
>> many sequences. This implies a combination of context and temporal
>> sequence.  Transferring a sequence from sensory memory (the phonological
>> loop) to short and long term memory will involve re-encoding memory traces
>> with the context and a local time sequence.
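That re-encoding might be sketched by binding each trace with a context vector as well as a local position, then unbinding with both cues at recall. All vectors here are random, and a clean-up memory against the vocabulary would normally follow the unbinding step:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 1024

def rand_vec():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def cconv(a, b):  # circular convolution (binding)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):  # circular correlation (approximate unbinding)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

item, position, context = rand_vec(), rand_vec(), rand_vec()

# long-term trace: item bound to both its local position and a context,
# so many sequences can share one store without colliding
trace = cconv(context, cconv(position, item))

# recall: unbind the context, then the position within the sequence
recovered = ccorr(position, ccorr(context, trace))
similarity = recovered @ item  # well above chance for the right cues
```

Cueing with the wrong context would leave `recovered` uncorrelated with the item, which is what lets one store hold many sequences.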
>>
>> This leaves many questions. What determines the context?  How can
>> memories be recalled? How are sequences bounded? How can sequences be
>> compressed in terms of sub-sequences?  How can sequences be generalised to
>> support language processing?  How does this relate more generally to
>> episodic memory as the memory of everyday events?
>>
>> I now hope to get a concrete feel for some of these challenges, starting
>> with implementing a simple model of the phonological loop. If anyone wants
>> to help please get in touch. I am hoping to develop this as a web-based
>> demo that runs in the browser.
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>
>>
>>
>>
>>

Received on Thursday, 11 July 2024 15:15:09 UTC