- From: Owen Ambur <Owen.Ambur@verizon.net>
- Date: Mon, 25 Jan 2021 11:03:57 -0500
- To: W3C AIKR CG <public-aikr@w3.org>
"In the quest to capture ... social intelligence in machines, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Department of Brain and Cognitive Sciences created an algorithm capable of inferring goals and plans, even when those plans might fail."

"... ability to account for mistakes could be crucial for building machines that robustly infer and act in our interests ... Otherwise, AI systems might wrongly infer that, since we failed to achieve our higher-order goals, those goals weren’t desired after all. We’ve seen what happens when algorithms feed on our reflexive and unplanned usage of social media, leading us down paths of dependency and polarization. Ideally, the algorithms of the future will recognize our mistakes, bad habits, and irrationalities and help us avoid, rather than reinforce, them."

https://scitechdaily.com/new-mit-social-intelligence-algorithm-helps-build-machines-that-better-understand-human-goals/

Wouldn't it be nice if AI-assisted business networking services helped us avoid polarization and needless dependencies on The Politics Industry as we strive to achieve public objectives documented in an open, standard, machine-readable format?

https://www.linkedin.com/pulse/politics-industry-v-we-people-magic-formula-owen-ambur/

Owen
Received on Monday, 25 January 2021 16:04:15 UTC