Re: [StratNavApp] Recent changes in your projects

The W3C AI-KR CG being what it is, a community group under the umbrella of the W3C, I recommend we try to find a common-denominator definition which covers the UN/UNESCO, EU and US frameworks for (the ethical use of) AI.
In all the discussions I have seen so far, the underlying tacit assumption is that all AI is DIRECTLY subject to human control dictated by an AI Strategist.
I find this a very limiting definition, and it is contradicted by current practice. In the financial markets, for example, AI applications are used that do NOT share their data and insights with human operators, because we as humans DO NOT HAVE the mental capacity to grasp the modeling and number crunching being applied.
It is PRECISELY in complex adaptive systems, systems of complex adaptive systems and sets of systems of complex adaptive systems that we encounter this scenario.
Most human technical infrastructures fall in this category.
As I indicated with regard to, e.g., my work on a smart city framework for disease control, there are levels of complexity where we must focus on the interaction processes; but because there are so many components involved, all interacting, we must program our AI systems to achieve certain objectives and have them autonomously decide how to achieve them.
In such a scenario we have to define boundaries and ethical use restrictions, much like the Three Laws of Robotics popularized by the science fiction writer Isaac Asimov.
This translates into two distinct strategists being required for the use of AI: the AI Technical Strategist and the AI Ethical Strategist.
Three articles, all from The Guardian:

- "Rise of the machines: has technology evolved beyond our control?" by James Bridle. Technology is starting to behave in intelligent and unpredictable ways that even its creators don't understand. ...
- "Can we stop AI outsmarting humanity?" by Mara Hvistendahl. The long read: The spectre of superintelligent machines doing us harm is not just science fiction, technologists...
- "For all its sophistication, AI isn't fit to make life-or-death decisions" by Kenan Malik.


The role of the AI Ethical Strategist (or Ethical Strategist) is far more important than that of the AI Technical Strategist. What COVID-19 has made abundantly clear is that, particularly in times of crisis, when medical and other professionals face ethical dilemmas, e.g. in triage situations, the decision-making CANNOT be left to machines. Even in modern warfare, which is very much technology driven and where military protocols are in place, situations may still arise where a human operator must take a decision, weighing the options and consequences.
If we want to get it right we must talk about both types of AI Strategists, and the corresponding two categories of AI.


Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Thursday, May 28, 2020, 10:18:19 AM ADT, carl mattocks <carlmattocks@gmail.com> wrote:  
 
 
Regarding the role of an AI Strategist, I suggest we adopt as-is the GOALS from https://www.stratnavapp.com/StratML/Part1/d95875f4-04b2-46d3-a953-c441e16f428a/Styled
Goal: Ethics
Navigate through the potential ethical and legal issues of AI technology, while driving forward the execution of smarter, more intelligent products, services and processes
Goal: Strategy
Develop and drive corporate strategy to lead assessments of new product ideas, develop business cases and lead prototype creation to reach a formal business recommendation on new opportunities geared towards enhancing service and member experience
Goal: Problems
Articulate, in solving a business problem, how much will be done by the AI system and how much by the human system

AND add them to Strategy (previously discussed) https://www.stratnavapp.com/StratML/Part2/861566c8-e9be-4642-b52f-f673fa499f4e/Styled

Vision
For all AI systems to have clearly and transparently documented goals and performance data showing that they are being achieved.
Mission
The mission of an AI Strategist is to define the purpose and goals of AI systems, as well as the KPIs by which we can determine if the system is meeting its goals.
Goal: Ethical
Ensure AI Systems adhere to pivotal principles such as confidentiality, autonomy, accountability and veracity
Goal: Machine Learning Evaluation
Evaluate machine learning models
Stakeholders:

Artificial Intelligence Knowledge Representation Community Group (AIKR CG)

Role: Community of Interest



Objectives:

Objective: Trustworthy
Provide the foundation for a trustworthy AIKR
Other Information: Evaluation metrics are tied to machine learning tasks. Perhaps the easiest metric to interpret is the percent of estimates that differ from the true value by no more than X%.
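
A minimal Python sketch of that "percent within X%" metric, as I understand it (the function name, the 10% default tolerance and the sample arrays are illustrative assumptions):

    import numpy as np

    def pct_within_tolerance(y_true, y_pred, x_pct=10.0):
        # Percent of estimates whose relative error is at most x_pct percent.
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
        return 100.0 * np.mean(rel_err <= x_pct / 100.0)

    # Example: 3 of the 4 estimates fall within 10% of the true value.
    print(pct_within_tolerance([100, 200, 50, 10], [105, 190, 80, 10]))  # 75.0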

Objective: Track
Track Classification Performance Indicators
Other Information: Ontological Statement: Classification Accuracy is the ratio of the number of correct class label predictions to the total number of input samples. Ontological Statement: F1 Score measures the Harmonic Mean between precision and recall. The range for the F1 Score is [0, 1]. It tells you how precise your classifier is (how many instances it classifies correctly), as well as how robust it is (it does not miss a significant number of instances).

Performance Indicator: Precision Recall
Quantitative in Outcome
Other Information: Ontological Statement: Precision is the number of correct positive results divided by the number of positive results predicted by the classifier. Ontological Statement: Recall is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive).
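
A minimal Python sketch of those two definitions, assuming binary 0/1 labels (the example arrays are illustrative):

    import numpy as np

    def precision_recall(y_true, y_pred):
        # Precision = TP / (TP + FP); Recall = TP / (TP + FN).
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return tp / (tp + fp), tp / (tp + fn)

    print(precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # (0.667, 0.667)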

Performance Indicator: Accuracy
Quantitative in Outcome
Other Information: Ontological Statement: Classification Rate or Accuracy is given by the relation: Accuracy = (True Positives + True Negatives) / All Instances, where All Instances = True & False Positives + True & False Negatives
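
The same relation as a minimal Python sketch (the 0/1 labels and example arrays are illustrative assumptions):

    import numpy as np

    def accuracy(y_true, y_pred):
        # (TP + TN) / all instances = the fraction of predictions that match.
        return np.mean(np.asarray(y_true) == np.asarray(y_pred))

    print(accuracy([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # 0.6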

Performance Indicator: Confusion Matrix
Quantitative in Outcome
Other Information: Ontological Statement: A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values and broken down by each class (the types of errors being made). Types:
* True Positives: the cases in which we predicted YES and the actual output was also YES.
* True Negatives: the cases in which we predicted NO and the actual output was NO.
* False Positives: the cases in which we predicted YES and the actual output was NO.
* False Negatives: the cases in which we predicted NO and the actual output was YES.
Accuracy for the matrix can be calculated by taking the average of the values lying across the "main diagonal".

Type   | StartDate | EndDate | Description
Target |           |         | Number of True Positives
Target |           |         | Number of False Positives
Target |           |         | Number of True Negatives
Target |           |         | Number of False Negatives
Actual |           |         | [To be determined]
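
A minimal sketch in Python of building such a matrix and reading accuracy off the main diagonal (the helper name and example labels are illustrative assumptions):

    import numpy as np

    def confusion_matrix(y_true, y_pred, n_classes):
        # Cell [i, j] counts samples of actual class i predicted as class j.
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1
        return cm

    cm = confusion_matrix([1, 0, 1, 1, 0], [1, 1, 1, 0, 0], 2)
    print(cm)                       # [[1 1]
                                    #  [1 2]]
    print(np.trace(cm) / cm.sum())  # accuracy from the main diagonal: 0.6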

Performance Indicator: Per-class accuracy
Quantitative in Outcome
Performance Indicator: Log-Loss
Quantitative in Outcome
Other Information: Ontological Statement: Logarithmic loss (related to cross-entropy) measures the performance of a classification model where the prediction input is a probability value between 0 and 1; Log Loss increases as the predicted probability diverges from the actual label. Logarithmic Loss, or Log Loss, works by penalising false classifications. It works well for multi-class classification. When working with Log Loss, the classifier must assign a probability to each class for all the samples:

    Log Loss = -(1/N) * sum over samples i and classes j of y_ij * log(p_ij)

where y_ij indicates whether sample i belongs to class j or not, and p_ij indicates the probability of sample i belonging to class j. Log Loss has no upper bound and exists on the range [0, ∞). Log Loss nearer to 0 indicates higher accuracy, whereas Log Loss further from 0 indicates lower accuracy. In general, minimising Log Loss gives greater accuracy for the classifier.
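
A minimal Python sketch of that formula (the epsilon clipping to avoid log(0) is my own addition; the one-hot labels and probabilities are illustrative):

    import numpy as np

    def log_loss(y_onehot, p_pred, eps=1e-15):
        # -(1/N) * sum_i sum_j y_ij * log(p_ij)
        p = np.clip(np.asarray(p_pred, dtype=float), eps, 1.0)
        return -np.mean(np.sum(np.asarray(y_onehot) * np.log(p), axis=1))

    # Two samples, three classes; both predictions put most mass on the true class.
    y = [[1, 0, 0], [0, 1, 0]]
    p = [[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]]
    print(log_loss(y, p))  # about 0.231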

Performance Indicator: AUC-ROC Curve
Quantitative in Outcome
Other Information: Ontological Statement: Check the performance of multi-class classification with the AUROC (Area Under the Receiver Operating Characteristics) curve. Ontological Statement: Area Under Curve (AUC) is one of the most widely used metrics for evaluation. It is used for binary classification problems. The AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example. True Positive Rate (Sensitivity): True Positive Rate is defined as TP / (FN + TP). It corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points. False Positive Rate (1 - Specificity): False Positive Rate is defined as FP / (FP + TN). It corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points.
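
A minimal Python sketch of the rank-comparison reading of AUC (the function name and example scores are illustrative assumptions):

    import numpy as np

    def auc_rank(y_true, scores):
        # Probability that a random positive is scored above a random negative
        # (ties count as half a win).
        scores = np.asarray(scores, dtype=float)
        pos = scores[np.asarray(y_true) == 1]
        neg = scores[np.asarray(y_true) == 0]
        wins = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (wins + 0.5 * ties) / (len(pos) * len(neg))

    print(auc_rank([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75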

Performance Indicator: F-measure
Quantitative in Outcome
Other Information: F1 Score is the Harmonic Mean between precision and recall. Ontological Statement: F-measure represents both Precision and Recall; it helps to have a measurement that represents both of them. F-measure is calculated using the Harmonic Mean (in place of the Arithmetic Mean). Ontological Statement: Mean Absolute Error is the average of the differences between the Original Values and the Predicted Values. It gives us a measure of how far the predictions were from the actual output. Ontological Statement: Mean Squared Error (MSE) takes the average of the squares of the differences between the original values and the predicted values.
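
Minimal Python sketches of the three formulas just mentioned (the inputs are illustrative):

    import numpy as np

    def f1(precision, recall):
        # Harmonic mean of precision and recall.
        return 2 * precision * recall / (precision + recall)

    def mae(y_true, y_pred):
        # Average of |original - predicted|.
        return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

    def mse(y_true, y_pred):
        # Average of (original - predicted)^2.
        return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

    print(f1(2/3, 2/3))                              # 0.667
    print(mae([3, 5], [2, 7]), mse([3, 5], [2, 7]))  # 1.5 2.5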

Performance Indicator: NDCG
Quantitative in Outcome
Other Information: Ontological Statement: Normalized Discounted Cumulative Gain (NDCG) is a measure of ranking quality. In information retrieval, DCG measures the usefulness, or gain, of a document based on its position in the result list; NDCG normalizes this by the DCG of the ideal ordering.
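
A minimal Python sketch, assuming graded relevance scores and the standard log2 position discount (the example relevances are illustrative):

    import numpy as np

    def dcg(relevances):
        # Gain of each result discounted by log2(rank + 1).
        rel = np.asarray(relevances, dtype=float)
        ranks = np.arange(1, len(rel) + 1)
        return np.sum(rel / np.log2(ranks + 1))

    def ndcg(relevances):
        # DCG normalised by the DCG of the ideal (descending) ordering.
        return dcg(relevances) / dcg(sorted(relevances, reverse=True))

    print(ndcg([3, 2, 3, 0, 1]))  # about 0.97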

Performance Indicator: Regression Analysis
Quantitative in Outcome
Other Information: Root Mean Square Error (RMSE). Ontological Statement: Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). Residuals are a measure of how far the data points are from the regression line; RMSE is a measure of how spread out these residuals are.
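
A minimal Python sketch of RMSE (the example arrays are illustrative):

    import numpy as np

    def rmse(y_true, y_pred):
        # Standard deviation of the residuals (prediction errors).
        residuals = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
        return np.sqrt(np.mean(residuals ** 2))

    print(rmse([3, 5, 7], [2, 5, 9]))  # about 1.29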

Performance Indicator: Quantiles of Errors
Quantitative in Outcome
Other Information: Quantiles (or percentiles) generalise the median, which is the element of a set that is larger than half of the set and smaller than the other half; the p-th quantile is the element larger than a fraction p of the set.
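
A minimal Python sketch using NumPy's percentile function on the absolute errors (the arrays are illustrative):

    import numpy as np

    # Median (50th percentile) and 90th percentile of the absolute errors:
    # half the errors fall below the first value, 90% below the second.
    errors = np.abs(np.array([3, 5, 7, 10]) - np.array([2, 5, 9, 4]))  # [1 0 2 6]
    print(np.percentile(errors, 50), np.percentile(errors, 90))        # 1.5 4.8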

Performance Indicator: "Almost correct" predictions
Quantitative in Outcome
Goal: Lawful
Ensure AI Systems comply with all applicable laws and regulations, such as by provisioning the audit data defined by a governance operating model
Goal: Ontological Statements
Employ ontological statements when explaining AIKR object audit data, veracity facts and (human, social and technology) risk mitigation factors
Goal: Track
Track AIKR object performance outcomes via KPIs (Key Performance Indicators) based on supervised learning model measurements
Goal: Document
Document the vision, values, goals and objectives for one or more AIKR objects
Goal: Robust
Ensure AI Systems are designed to handle uncertainty and tolerate perturbation from a likely-threat perspective, such as design considerations that incorporate human, social and technology risk factors
Carl
It was a pleasure to clarify.


On Thu, May 28, 2020 at 1:22 AM StratNavApp.com <mail@stratnavapp.com> wrote:


Here is an update on your projects on StratNavApp:

To view, update, comment on or respond to any of these, please click on the View link next to it, and edit the item or add a note of your own - not by replying to this email.
 
 For stakeholder: Paul Alagna  view/contribute 
 
In project: StratML for AIKR
 
On 27/05/2020 at 20:25, Paul Alagna
   - added the phone number: 7323225641
   - changed the email address: PJAlagna@Gmail.com
   - added the mobile number: 7323225641

 
 For goal: AI Strategists  view/contribute 
 
In project: StratML for AIKR
 
On 27/05/2020 at 06:47, Chris Fox wrote:

We now have two attempts to define the role of the AI Strategist in StratML format:
   - https://www.stratnavapp.com/StratML/Part1/d95875f4-04b2-46d3-a953-c441e16f428a/Styled
   - this one.

I think if we are to say we have achieved this goal, we need to work towards a single one.

 
 For project: Roles of AI Strategists  view/contribute 
 
In project: Roles of AI Strategists
 
On 27/05/2020 at 06:38, chris@chriscfox.com wrote:

We now have two StratML plans for the role of the AI Strategist: this one, and the one we previously discussed at https://www.stratnavapp.com/StratML/Part2/861566c8-e9be-4642-b52f-f673fa499f4e/Styled

I think that the objective should be to merge the two so that we only have one.


You are receiving this message because you are subscribed to StratNavApp.com. To stop receiving messages from this service, simply click unsubscribe or change your subscription options.


 Copyright Chris C Fox Consulting Limited. Chris C Fox Consulting Limited is registered in England and Wales as a Private Limited Company: Company Number 6939359. Registered Office: Unit 4 Vista Place, Coy Pond Business Park, Ingworth Road, Poole BH12 1JY

  

Received on Thursday, 28 May 2020 15:44:23 UTC