- From: Timothy Holborn <timothy.holborn@gmail.com>
- Date: Fri, 5 Nov 2021 03:39:34 +1000
- To: Dave Raggett <dsr@w3.org>
- Cc: public-cogai <public-cogai@w3.org>
- Message-ID: <CAM1Sok0eEjM0OhOngZiXaBY3XQTDtHVmsWTw82C8iJSkpiWDXg@mail.gmail.com>
The most amazing thing about time is that it collapses all the different dreams we have about potential opportunities into one interwoven stream we call reality. #WaveFunction #Causality

It's kinda important to reduce distortion of "the wave function" by external computational actors acting in bad ways. The right to self-determination and "common sense" are among the issues at play.

IMO there are design issues that produce different sorts of outcomes / qualities in how different types of issues in the natural world can be addressed (through the use of our tools). But I do wonder how well W3C is equipped to address them. Some of the implications are broader than simply desiring royalty-free standards; freedom of thought has further needs from an ethics point of view.

There are also considerations around temporal support, so as to enable investigation of causality maps or knowledge clouds. That in turn links to some of my older considerations about non-HTTP-based sources, as is somewhat addressed by the DID work, although I have concerns about the direction that work has taken recently.

Does there need to be some consideration of whether designs are intended to support and enshrine human agency? How is computational stewardship intended to be organised? Are there different optimal models? Are there dangerous models?

I wrote this in 2015; maybe it's helpful: "What is the Definition of an 'AI Weapon'?"
https://www.linkedin.com/pulse/definition-ai-weapon-timothy-holborn

There still doesn't appear to have been enough structured work in that area. How do we ensure we're not sacrificing our time contributing to something that may end up having impacts on the world that are the opposite of our good intentions?

Timothy Holborn.

On Wed, 3 Nov 2021, 9:41 pm Dave Raggett, <dsr@w3.org> wrote:

> I am interested in your comments on the following overview article on deep
> learning for AI by Yoshua Bengio, Yann LeCun, and Geoffrey Hinton,
> Communications of the ACM, July 2021, Vol. 64 No. 7.
>
> Comparing human learning abilities with current AI suggests several
> directions for improvement:
>
> • Supervised learning requires too much labeled data, and model-free
> reinforcement learning requires far too many trials. Humans seem to be
> able to generalize well with far less experience.
> • Current systems are not as robust to changes in distribution as humans,
> who can quickly adapt to such changes with very few examples.
> • Current deep learning is most successful at perception tasks and,
> generally, what are called System 1 tasks. Using deep learning for System
> 2 tasks that require a deliberate sequence of steps is an exciting area
> that is still in its infancy.
>
> https://cacm.acm.org/magazines/2021/7/253464-deep-learning-for-ai/fulltext
>
> It provides an insider's perspective on progress and trends, but doesn't
> say much about the flaws as seen by outsiders, nor about ethical
> challenges such as dealing with bias and explainability.
>
> It also fails to cite existing work on combining symbolic and sub-symbolic
> approaches, including work on System 2, e.g. ACT-R. In my opinion, there
> is a lot of potential for relating symbolic representations to vector
> representations, and this could provide valuable insights for richer
> neural network architectures, especially in respect to System 2.
>
> Some points that caught my eye:
>
> How can we design future machine learning systems with the ability to
> generalize better or adapt faster to out-of-distribution data?
>
> Evidence from neuroscience suggests that groups of nearby neurons (forming
> what is called a hyper-column) are tightly connected and might represent a
> kind of higher-level vector-valued unit able to send not just a scalar
> quantity but rather a set of coordinated values.
>
> Most neural nets only have two timescales: the weights adapt slowly over
> many examples, and the activities adapt rapidly, changing with each new
> input. Adding an overlay of rapidly adapting and rapidly decaying "fast
> weights" introduces interesting new computational abilities. … Multiple
> time scales of adaption also arise in learning to learn, or meta-learning.
>
> When thinking about a new challenge, such as driving in a city with
> unusual traffic rules, or even imagining driving a vehicle on the moon, we
> can take advantage of pieces of knowledge and generic skills we have
> already mastered and recombine them dynamically in new ways. This form of
> systematic generalization allows humans to generalize fairly well in
> contexts that are very unlikely under their training distribution. We can
> then further improve with practice, fine-tuning and compiling these new
> skills so they do not need conscious attention anymore. How could we endow
> neural networks with the ability to adapt quickly to new settings by
> mostly reusing already known pieces of knowledge, thus avoiding
> interference with known skills?
>
> The ability of young children to perform causal discovery suggests this
> may be a basic property of the human brain, and recent work suggests that
> optimizing out-of-distribution generalization under interventional changes
> can be used to train neural networks to discover causal dependencies or
> causal variables. How should we structure and train neural nets so they
> can capture these underlying causal properties of the world?
>
> What do you think?
>
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things
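To make the "fast weights" passage above concrete, here is a minimal sketch, assuming a plain recurrent update with a Hebbian fast-weight overlay loosely in the spirit of Ba et al. (2016), "Using Fast Weights to Attend to the Recent Past". The names, sizes, and constants are illustrative assumptions, not taken from the article.

```python
# Sketch of a recurrent net with a "fast weights" overlay.
# All names, sizes, and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

dim = 16        # hidden-state size (assumed)
decay = 0.95    # lambda: fast weights decay rapidly
lr_fast = 0.5   # eta: fast weights also adapt rapidly

# Slow weights: in a real system these change only slowly, via gradient
# descent over many examples. Fixed here for the demo.
W_slow = rng.normal(scale=0.1, size=(dim, dim))

# Fast weights: a second, rapidly changing matrix overlaid on the slow one.
A_fast = np.zeros((dim, dim))

h = np.zeros(dim)
for t in range(10):
    x = rng.normal(size=dim)  # stand-in for an input embedding
    # Ordinary slow-weight recurrence, plus the fast-weight contribution,
    # which acts as an associative memory of recent hidden states.
    h = np.tanh(W_slow @ h + A_fast @ h + x)
    # Hebbian-style update: decay old associations, imprint the new state.
    A_fast = decay * A_fast + lr_fast * np.outer(h, h)

print(h)
```

The fast matrix gives the network a third timescale between the activities (which change every step) and the slow weights (which change over many examples): it is a short-lived memory of the recent past, which is the "interesting new computational ability" the quoted passage points to.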
Received on Thursday, 4 November 2021 17:39:57 UTC