Re: Turtle & StratML

Dave, I finally found time to give some thought to what I might be able to do with the information provided in your message.  
You can see the results at http://stratml.us/drybridge/index.htm#RHDD or, more specifically, http://stratml.us/carmel/iso/RHDDwStyle.xml
I'd love to see what Drools might be able to do with StratML.

With respect to Objective 4.1: Goals - Model business goals by describing the steps to achieve them, the StratML value chain seems most applicable: http://stratml.us/references/oxygen/PerformancePlanOrReport20160216_xsd.htm#ValueChainStageType 
Owen Ambur
https://www.linkedin.com/in/owenambur/


-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <ambur@verizon.net>
Cc: public-aikr <public-aikr@w3.org>; jeanpa <jeanpa@docugami.com>; alan <alan@docugami.com>
Sent: Sat, Jul 20, 2019 9:41 am
Subject: Re: Turtle & StratML

Hi Owen,

On 19 Jul 2019, at 05:12, Owen Ambur <ambur@verizon.net> wrote:


I'm intrigued by your observation that "rules are goal directed and actions can declare sub-goals" (what StratML calls "objectives") but I must confess that I don't have a clue as to what that means.  Can you show me?

My own work on rules focuses on forward chained condition-action rules where the conditions are evaluated against the current state of the graph and if there is a match, the actions are executed. If multiple rules match the current state, then a conflict resolution mechanism is needed to select which rule to execute. This approach is widely used in industry to implement business logic, see e.g. Drools.
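The cycle described above (match conditions against current state, resolve conflicts, execute one rule's actions, repeat) can be sketched in a few lines. This is a purely illustrative toy, not Drools' actual API; all names are made up for the example.

```python
# Minimal forward-chaining rule engine: conditions are matched against the
# current facts; a conflict-resolution step picks one matching rule to fire.
# Illustrative only -- not Drools' API.

class Rule:
    def __init__(self, name, condition, action, priority=0):
        self.name = name
        self.condition = condition  # facts -> bool
        self.action = action        # facts -> None (mutates facts)
        self.priority = priority    # used for conflict resolution

def run(rules, facts, max_cycles=100):
    for _ in range(max_cycles):
        matching = [r for r in rules if r.condition(facts)]
        if not matching:
            break
        # Conflict resolution: highest-priority matching rule wins.
        winner = max(matching, key=lambda r: r.priority)
        winner.action(facts)
    return facts

# Example: a single business rule over a simple fact base.
rules = [
    Rule("approve-small-order",
         condition=lambda f: f.get("order_total", 0) < 100 and "approved" not in f,
         action=lambda f: f.update(approved=True)),
]
print(run(rules, {"order_total": 42}))  # {'order_total': 42, 'approved': True}
```

Production systems like Drools use far more sophisticated matching (e.g. the Rete algorithm) and richer conflict-resolution strategies (salience, recency), but the control loop is the same shape.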
An important question is how to pass information from one rule to the next, and how to organise rules into sets for solving more complex problems.  Goals provide a solution where each rule is designed to fulfil a given goal. The goal is described as a chunk with the goal's name and additional named property values.  Rule actions can then add sub goals to guide further work with subsequent rules.  The rule engine can keep track of which goals are in progress or are waiting to be started, etc. The graph is thus used as a blackboard for different rule sets to work on different aspects of a problem specification.  Blackboard systems are a well known approach for AI, e.g. Wikipedia says:
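As a rough illustration of that bookkeeping (goals as named chunks, actions that push sub-goals, the graph as a shared blackboard), here is a sketch; all names and the decomposition are assumptions for the example, not Dave's actual design.

```python
# Goals as chunks on a shared blackboard: each goal is a dict ("chunk")
# with a name and named properties. A rule fulfils a goal and may return
# sub-goals; the engine tracks which goals are pending or done.
# Illustrative assumptions throughout.

from collections import deque

def solve(initial_goal, rules_by_goal, blackboard):
    pending = deque([initial_goal])
    while pending:
        goal = pending.popleft()
        handler = rules_by_goal.get(goal["name"])
        if handler is None:
            continue  # no rule fulfils this goal
        subgoals = handler(goal, blackboard)  # may read/write the blackboard
        pending.extend(subgoals or [])
        goal["status"] = "done"
    return blackboard

# Example: "plan-trip" decomposes into two sub-goals, each fulfilled by
# a rule that writes a partial solution onto the blackboard.
def plan_trip(goal, bb):
    return [{"name": "book-flight", "to": goal["to"]},
            {"name": "book-hotel", "city": goal["to"]}]

def book_flight(goal, bb):
    bb["flight"] = f"flight to {goal['to']}"

def book_hotel(goal, bb):
    bb["hotel"] = f"hotel in {goal['city']}"

rules = {"plan-trip": plan_trip,
         "book-flight": book_flight,
         "book-hotel": book_hotel}
print(solve({"name": "plan-trip", "to": "Oslo"}, rules, {}))
# {'flight': 'flight to Oslo', 'hotel': 'hotel in Oslo'}
```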

A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.

An explicit treatment of goals lends itself to the use of heuristics to propose new rules, and reinforcement learning to  rank which rules are effective for any given goal. This relates to ideas for combining graphs with statistical approaches, including stochastic memory queries rather than deterministic queries. Machine learning can provide the means to learn graphs and rules that reflect prior knowledge and past experience. This requires a treatment of statistics as pure logic alone won’t suffice.

-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <owen.ambur@verizon.net>
Cc: public-aikr <public-aikr@w3.org>
Sent: Sun, Jul 14, 2019 7:04 am
Subject: Re: Turtle & StratML

Hi Owen,
The rule language uses graph traversal for conditions and graph mutation for actions. Rules are goal directed and actions can declare sub-goals along with their relationship to the parent goal. Using graphs for data, rules and goals gives a great deal of flexibility. The graph model is very close to Olaf Hartig’s RDF*. Graphs are layered on top of an object model that supports link annotations and differentiates links as chunk properties from links as relationships between chunks. This is essentially an amalgam of RDF and Property Graphs with inspiration from Psychology.
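One way to picture that object model in miniature: a chunk has a type and named properties; a property value may link to another chunk, while a relationship between chunks can itself be a chunk, which is what allows link annotations (much as in RDF*). This is an assumed sketch, not the actual design.

```python
# Sketch of a chunk store: links as properties ("subject"/"object" below)
# vs. relationships reified as chunks that can carry annotations.
# Illustrative only.

chunks = {
    "g1": {"type": "goal", "name": "model business goals"},
    "g2": {"type": "goal", "name": "describe steps"},
    # The relationship is itself a chunk, so it can be annotated:
    "r1": {"type": "sub-goal-of", "subject": "g2", "object": "g1",
           "certainty": 0.9},  # a link annotation, cf. RDF*
}

def subgoals_of(goal_id):
    """Traverse 'sub-goal-of' relationship chunks pointing at goal_id."""
    return [c["subject"] for c in chunks.values()
            if c["type"] == "sub-goal-of" and c["object"] == goal_id]

print(subgoals_of("g1"))  # ['g2']
```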
According to wikipedia: 

A chunk is a collection of basic familiar units that have been grouped together and stored in a person's memory. These chunks are able to be retrieved more easily due to their coherent familiarity. It is believed that people create higher order cognitive representations of the items within the chunk. The items are more easily remembered as a group than as the individual items themselves.

Best regards,    Dave


On 13 Jul 2019, at 16:37, Owen Ambur <owen.ambur@verizon.net> wrote:
Dave, your turtle/diagram optional display feature is pretty cool, at https://www.w3.org/WoT/demos/shrl/test.html.  
Seems like it might be worth documenting it in the AIKR CG's performance report, but I'm not sure either how I might make use of the capability you have provided or how to report it as an indicator of the CG's performance.
I look forward to learning.  I'd be especially interested to see how your capability might be applied to the StratML vocabulary and schemas:  http://stratml.us/#Part1 | http://stratml.us/#Part2 | http://stratml.us/#Part3 (Note: The latter has not been updated to harmonize with some relatively minor changes made to Parts 1 & 2 in the ANSI and ISO processes.)

Owen Ambur
https://www.linkedin.com/in/owenambur/


-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <ambur@verizon.net>
Sent: Thu, Jul 11, 2019 4:37 am
Subject: Re: disentangled representation?

I should have added that SHRL was an exploration I did several years ago on RDF shape constraints based upon ATNs. It makes use of JavaScript libraries including a model for GraphViz for the visualisation as graph diagrams.  Graph shapes are promising for rule conditions. Rule actions either update the graphs or invoke external actions, e.g. to display query results.


On 11 Jul 2019, at 09:21, Dave Raggett <dsr@w3.org> wrote:

Hi Owen,
I plan to provide a Web demonstrator along with GitHub documentation and an issue tracker.  The work would occur in phases: the first phase is to explore the design space for condition-action rules operating over a generalised form of graphs. The next phase is to integrate the stochastic memory retrieval model from ACT-R, and to explore mechanisms for reinforcement learning of rule sets. The third phase applies this to machine learning of ontologies, and to demonstrating a broad range of different kinds of reasoning.  In parallel with all of these phases, there is a need to gather representative use cases. Rules will be modelled as graphs, as a basis for compiling declarative knowledge to procedural knowledge. You can get a feel for this from:
 https://www.w3.org/WoT/demos/shrl/test.html
The rightmost drop down allows you to switch between text based and diagram based representations.
Best regards,    Dave


On 11 Jul 2019, at 02:50, Owen Ambur <ambur@verizon.net> wrote:
Dave, if your plan comes together, I'd like to render it in StratML format.

Owen Ambur
https://www.linkedin.com/in/owenambur/


-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <owen.ambur@verizon.net>
Cc: public-aikr <public-aikr@w3.org>
Sent: Sun, Jul 7, 2019 4:51 am
Subject: Re: disentangled representation?

Hi Owen,
We seem to have very different goals. I want to find people interested in discussing the technical research challenges, e.g. to identify representative use cases that illustrate the requirements for computational models, and the corresponding implications for declarative and procedural knowledge.


On 6 Jul 2019, at 20:45, Owen Ambur <owen.ambur@verizon.net> wrote:
Hey, Dave, I see nothing with which to disagree in your response.  
Here's how I'm trying to reduce the needless cost of #gofpau while improving the practice of self-governance: https://www.linkedin.com/pulse/transforming-governance-reducing-cost-gofpau-owen-ambur/ 
See also StratML use cases:

Goal 7: Individuals - Publish on the Web in open, standard, machine-readable format the plans of individuals.
Goal 8: Political Parties - Publish political party platforms on the Web in open, standard, machine-readable format.
Goal 9: Candidates for Elective Office - Publish the issue statements of candidates for elective office as performance plans on the Web in open, standard, machine-readable format.
Goal 10: Elected Representatives - Upon election, flesh out the candidates' plans to document more explicit stakeholder roles and performance indicators for their performance in office.

I'll look forward to learning about your plan ... and perhaps rendering it in open, standard, machine-readable StratML format.
Owen Ambur
https://www.linkedin.com/in/owenambur/


-----Original Message-----
From: Dave Raggett <dsr@w3.org>
To: Owen Ambur <owen.ambur@verizon.net>
Cc: public-aikr <public-aikr@w3.org>
Sent: Sat, Jul 6, 2019 3:30 pm
Subject: Re: disentangled representation?

Hmm, a simpler interpretation is that feelings and emotions are computations that guide our behaviour in respect to our goals and our social interactions with others. Some of this further relates to fast vs slow modes of thinking as popularised by Daniel Kahneman:
"System 1 and System 2 are two distinct modes of decision making: System 1 is an automatic, fast and often unconscious way of thinking. It is autonomous and efficient, requiring little energy or attention, but is prone to biases and systematic errors. System 2 is an effortful, slow and controlled way of thinking."
This is all too evident in how people think about politics, and for me, suggests that as we work on developing strong AI, we need to ensure that AI systems have feelings along with empathy and compassion, and avoid the lazy ways of thinking that far too many humans use in respect to politics and society.
If anyone is actually interested in working on the practical aspects of this, please contact me directly.


On 6 Jul 2019, at 18:50, Owen Ambur <owen.ambur@verizon.net> wrote:
In Incognito: The Secret Lives of the Brain, David Eagleman downplays the role of consciousness in determining our behavior, most of which is on autopilot.  https://www.linkedin.com/pulse/consciously-connected-communities-owen-ambur/ 
In Against Empathy: The Case for Rational Compassion, Paul Bloom says, "When some people think about empathy, they think about kindness.  I think about war." (p. 188)
While the math eludes me, the broader logic seems clear:

Do we want to use our powers of reasoning merely to justify our emotions, after the fact, as seems to be natural for us?  And should we use AI to augment (accentuate) the expression of our emotions ... as "social" networking services tend to do?  (It seems like mind-altering drugs might be more efficiently and effectively applied for that purpose.)  
Or might we prefer to apply logic (math) to improve the outcomes of our actions?  

Which of those two alternatives might make us "feel" better (be more satisfied) in the long run?

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 









Received on Wednesday, 14 August 2019 21:43:50 UTC