Re: disentangled representation?

Hi Owen,

We seem to have very different goals. I want to find people interested in discussing the technical research challenges, e.g. identifying representative use cases that illustrate the requirements for computational models, and the corresponding implications for declarative and procedural knowledge.

> On 6 Jul 2019, at 20:45, Owen Ambur <owen.ambur@verizon.net> wrote:
> 
> Hey, Dave, I see nothing with which to disagree in your response.  
> 
> Here's how I'm trying to reduce the needless cost of #gofpau while improving the practice of self-governance: https://www.linkedin.com/pulse/transforming-governance-reducing-cost-gofpau-owen-ambur/
> 
> See also StratML use cases:
> 
> Goal 7: Individuals - Publish on the Web in open, standard, machine-readable format the plans of individuals. <http://stratml.us/carmel/iso/UC4SwStyle.xml#_0fc1dbae-08a5-11e6-b06f-a2fa45c7ae33>
> Goal 8: Political Parties - Publish political party platforms on the Web in open, standard, machine-readable format. <http://stratml.us/carmel/iso/UC4SwStyle.xml#_0fc1db9a-08a5-11e6-b06f-a2fa45c7ae33>
> Goal 9: Candidates for Elective Office - Publish the issue statements of candidates for elective office as performance plans on the Web in open, standard, machine-readable format. <http://stratml.us/carmel/iso/UC4SwStyle.xml#_0fc1db9c-08a5-11e6-b06f-a2fa45c7ae33>
> Goal 10: Elected Representatives - Upon election, flesh out the candidates' plans to document more explicit stakeholder roles and performance indicators for their performance in office. <http://stratml.us/carmel/iso/UC4SwStyle.xml#_654e441c-0969-11e6-97e7-059645c7ae33>
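> 
> For illustration, a plan like the use case document linked above might be consumed programmatically along these lines. This is a minimal sketch in Python using only the standard library; the Goal, Name and Description element names are assumptions about typical StratML structure, not a definitive mapping:
> 
>     # Hedged sketch: fetch a StratML plan and list the goals it declares.
>     import urllib.request
>     import xml.etree.ElementTree as ET
> 
>     URL = "http://stratml.us/carmel/iso/UC4SwStyle.xml"  # use case document cited above
> 
>     with urllib.request.urlopen(URL) as resp:
>         root = ET.parse(resp).getroot()
> 
>     # Take the namespace from the document itself rather than hard-coding it
>     # (assumes the root element is namespace-qualified).
>     ns_uri = root.tag.split("}")[0].lstrip("{")
> 
>     def q(tag):
>         """Qualify a local tag name with the document's namespace."""
>         return "{%s}%s" % (ns_uri, tag)
> 
>     # Assumed structure: Goal elements carrying Name and Description children.
>     for goal in root.iter(q("Goal")):
>         name = goal.findtext(q("Name"), default="")
>         desc = " ".join(goal.findtext(q("Description"), default="").split())
>         print("%s - %s" % (name, desc[:80]))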
> 
> I'll look forward to learning about your plan ... and perhaps rendering it in open, standard, machine-readable StratML format.
> 
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
> 
> 
> -----Original Message-----
> From: Dave Raggett <dsr@w3.org>
> To: Owen Ambur <owen.ambur@verizon.net>
> Cc: public-aikr <public-aikr@w3.org>
> Sent: Sat, Jul 6, 2019 3:30 pm
> Subject: Re: disentangled representation?
> 
> Hmm, a simpler interpretation is that feelings and emotions are computations that guide our behaviour with respect to our goals and our social interactions with others. Some of this further relates to fast vs slow modes of thinking, as popularised by Daniel Kahneman:
> 
> "System 1 and System 2 are two distinct modes of decision making: System 1 is an automatic, fast and often unconscious way of thinking. It is autonomous and efficient, requiring little energy or attention, but is prone to biases and systematic errors. System 2 is an effortful, slow and controlled way of thinking."
> 
> This is all too evident in how people think about politics, and for me it suggests that as we work on developing strong AI, we need to ensure that AI systems have feelings along with empathy and compassion, and avoid the lazy ways of thinking that far too many humans apply to politics and society.
> 
> If anyone is actually interested in working on the practical aspects of this, please contact me directly.
> 
>> On 6 Jul 2019, at 18:50, Owen Ambur <owen.ambur@verizon.net> wrote:
>> 
>> In Incognito: The Secret Lives of the Brain <http://www.eagleman.com/incognito>, David Eagleman downplays the role of consciousness in determining our behavior, most of which is on autopilot. https://www.linkedin.com/pulse/consciously-connected-communities-owen-ambur/
>> 
>> In Against Empathy: The Case for Rational Compassion, Paul Bloom says, "When some people think about empathy, they think about kindness.  I think about war." (p. 188)
>> 
>> While the math eludes me, the broader logic seems clear:
>> 
>> Do we want to use our powers of reasoning merely to justify our emotions, after the fact, as seems to be natural for us?  And should we use AI to augment (accentuate) the expression of our emotions ... as "social" networking services tend to do?  (It seems mind-altering drugs might be more efficiently and effectively applied for that purpose.)
>> 
>> Or might we prefer to apply logic (math) to improve the outcomes of our actions?  
>> 
>> Which of those two alternatives might make us "feel" better (be more satisfied) in the long run?
> 
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Sunday, 7 July 2019 08:50:53 UTC