Re: AI for Understanding Human Goals

Yes, Paola, it would be great to see what AI/ML algorithms might be able 
to do with the existing StratML collection, which now comprises more 
than 5,000 files ... but even more so if, and hopefully when, public 
agencies begin publishing their performance reports in an open, 
standard, machine-readable format, as U.S. federal agencies are 
ostensibly required by law to do.
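
For anyone inclined to experiment, here is a minimal sketch of what 
"machine-readable" buys us. It assumes, per StratML Part 1 (ISO 
17469-1), that goals appear as Goal elements with Name and Description 
children, and it matches on local names so the namespace URI doesn't 
matter; the file name plan.xml is just a placeholder:

    # Minimal sketch: pull goal names and descriptions out of a StratML
    # XML file so they can be fed to an AI/ML pipeline.
    # Uses only the Python standard library.
    import xml.etree.ElementTree as ET

    def extract_goals(path):
        """Return (name, description) pairs for each Goal element."""
        goals = []
        for el in ET.parse(path).iter():
            if el.tag.split('}')[-1] == 'Goal':  # ignore any namespace
                name = desc = ''
                for child in el:
                    local = child.tag.split('}')[-1]
                    if local == 'Name':
                        name = (child.text or '').strip()
                    elif local == 'Description':
                        desc = (child.text or '').strip()
                goals.append((name, desc))
        return goals

    for name, desc in extract_goals('plan.xml'):  # placeholder file
        print(name, '-', desc[:80])

Run across the >5K files in the collection, a few lines like these 
would yield a corpus of goal statements ready for clustering, 
summarization, or goal-inference experiments.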

I'm always on the lookout for partners who might be willing and able to 
demonstrate such capabilities.

While the initial benefit of enabling taxpayers to see what they are 
getting for their money will be great, imagine how AI agents can help 
agencies learn from failure and thus improve their performance over time.

It is painful to watch agency leaders continue failing to capitalize on 
that potential.

Indeed, recent direction from the Trump administration's OMB director, 
issued on the way out the door 
<https://www.linkedin.com/feed/update/urn:li:ugcPost:6701562085794492416?commentUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A6701562085794492416%2C6757844681394032640%29>, 
goes so far as to imply that agency leaders have no accountability for 
most of the objectives with which they are entrusted, as if those 
objectives were merely jokes being played on taxpayers.  Unfortunately,
all that seems to matter is what suits The Politics Industry.  The 
question is how long voters and taxpayers will put up with such 
behavior.  Hopefully, not indefinitely.

Owen


On 1/25/2021 7:19 PM, Paola Di Maio wrote:
> Thank you Owen
> wouldn't it be great to try the algorithm on some StratML resources
>
>
> On Tue, Jan 26, 2021 at 12:04 AM Owen Ambur <Owen.Ambur@verizon.net> wrote:
>
>     "In the quest to capture ... social intelligence in machines,
>     researchers from MIT’s Computer Science and Artificial Intelligence
>     Laboratory (CSAIL) and the Department of Brain and Cognitive Sciences
>     created an algorithm capable of inferring goals and plans, even when
>     those plans might fail."
>
>     "... ability to account for mistakes could be crucial for building
>     machines that robustly infer and act in our interests ...
>     Otherwise, AI
>     systems might wrongly infer that, since we failed to achieve our
>     higher-order goals, those goals weren’t desired after all. We’ve seen
>     what happens when algorithms feed on our reflexive and unplanned
>     usage
>     of social media, leading us down paths of dependency and
>     polarization.
>     Ideally, the algorithms of the future will recognize our mistakes,
>     bad
>     habits, and irrationalities and help us avoid, rather than
>     reinforce, them."
>
>     https://scitechdaily.com/new-mit-social-intelligence-algorithm-helps-build-machines-that-better-understand-human-goals/
>
>     Wouldn't it be nice if AI-assisted business networking services
>     helped us avoid polarization and needless dependencies on The
>     Politics Industry as we strive to achieve public objectives
>     documented in an open, standard, machine-readable format?
>
>     https://www.linkedin.com/pulse/politics-industry-v-we-people-magic-formula-owen-ambur/
>
>     Owen

Received on Tuesday, 26 January 2021 02:12:30 UTC