- From: Owen Ambur <owen.ambur@verizon.net>
- Date: Sat, 6 Jul 2019 19:45:59 +0000 (UTC)
- To: dsr@w3.org
- Cc: public-aikr@w3.org
- Message-ID: <1988914964.2392303.1562442359636@mail.yahoo.com>
Hey, Dave, I see nothing with which to disagree in your response. Here's how I'm trying to reduce the needless cost of #gofpau while improving the practice of self-governance: https://www.linkedin.com/pulse/transforming-governance-reducing-cost-gofpau-owen-ambur/

See also these StratML use cases:

Goal 7: Individuals - Publish on the Web in open, standard, machine-readable format the plans of individuals.

Goal 8: Political Parties - Publish political party platforms on the Web in open, standard, machine-readable format.

Goal 9: Candidates for Elective Office - Publish the issue statements of candidates for elective office as performance plans on the Web in open, standard, machine-readable format.

Goal 10: Elected Representatives - Upon election, flesh out the candidates' plans to document more explicit stakeholder roles and performance indicators for their performance in office.

I'll look forward to learning about your plan ... and perhaps rendering it in open, standard, machine-readable StratML format.

Owen Ambur
https://www.linkedin.com/in/owenambur/

-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <owen.ambur@verizon.net>
Cc: public-aikr <public-aikr@w3.org>
Sent: Sat, Jul 6, 2019 3:30 pm
Subject: Re: disentangled representation?

Hmm, a simpler interpretation is that feelings and emotions are computations that guide our behaviour with respect to our goals and our social interactions with others. Some of this further relates to the fast vs. slow modes of thinking popularised by Daniel Kahneman:

"System 1 and System 2 are two distinct modes of decision making: System 1 is an automatic, fast and often unconscious way of thinking. It is autonomous and efficient, requiring little energy or attention, but is prone to biases and systematic errors. System 2 is an effortful, slow and controlled way of thinking."
This is all too evident in how people think about politics, and for me it suggests that as we work on developing strong AI, we need to ensure that AI systems have feelings along with empathy and compassion, and avoid the lazy ways of thinking that far too many humans apply to politics and society. If anyone is actually interested in working on the practical aspects of this, please contact me directly.

On 6 Jul 2019, at 18:50, Owen Ambur <owen.ambur@verizon.net> wrote:

In Incognito: The Secret Lives of the Brain, David Eagleman downplays the role of consciousness in determining our behavior, most of which is on autopilot. https://www.linkedin.com/pulse/consciously-connected-communities-owen-ambur/

In Against Empathy: The Case for Rational Compassion, Paul Bloom says, "When some people think about empathy, they think about kindness. I think about war." (p. 188)

While the math eludes me, the broader logic seems clear: Do we want to use our powers of reasoning merely to justify our emotions, after the fact, as seems to be natural for us? And should we use AI to augment (accentuate) the expression of our emotions ... as "social" networking services tend to do? (It seems like mind-altering drugs might be more efficiently and effectively applied for that purpose.) Or might we prefer to apply logic (math) to improve the outcomes of our actions? Which of those two alternatives might make us "feel" better (be more satisfied) in the long run?

Dave Raggett <dsr@w3.org>
http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things
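[Editor's note: for readers unfamiliar with what "open, standard, machine-readable format" means concretely for the StratML use cases above, here is a minimal sketch in Python of serializing a plan's goals as StratML-style XML. The element names (StrategicPlan, Goal, Name, Description) approximate the StratML vocabulary but are illustrative assumptions; consult the official StratML schema for the normative structure and namespace.]

```python
# Sketch: serialize a plan's goals as StratML-style XML.
# Element names approximate the StratML vocabulary; they are
# assumptions for illustration, not the normative schema.
import xml.etree.ElementTree as ET


def build_plan(plan_name, goals):
    """Build a minimal machine-readable plan document.

    goals is a list of (name, description) pairs.
    """
    root = ET.Element("StrategicPlan")
    ET.SubElement(root, "Name").text = plan_name
    for goal_name, description in goals:
        goal = ET.SubElement(root, "Goal")
        ET.SubElement(goal, "Name").text = goal_name
        ET.SubElement(goal, "Description").text = description
    return ET.tostring(root, encoding="unicode")


doc = build_plan(
    "Individual Performance Plan",
    [("Publish Plans",
      "Publish the plans of individuals on the Web in open, "
      "standard, machine-readable format.")],
)
print(doc)
```

Because the output is plain XML, any standards-based tool can parse, validate, or transform it, which is the point of the use cases above: plans become data, not just prose.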
Received on Saturday, 6 July 2019 19:46:27 UTC