Re: Can AI be trustworthy

Hi Paolo,

One of the reasons I like what Charles Sanders Peirce has to say about
matters related to knowledge representation is his view of
'fallibility'. It is our natural circumstance. The scientific method
helps us move toward improved understanding, approaching the limit of
'infallibility', but we will never reach it. I like the message: both
the call to question and strive, and the humility of knowing that our
investigations will always be incomplete.
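
In symbols (my own gloss, not Peirce's notation): if e_n is the
residual error of our best account after n rounds of inquiry,
fallibilism says the error never vanishes at any finite stage, even as
the method drives it toward zero:

    e_n > 0 for all finite n, yet \lim_{n \to \infty} e_n = 0

Infallibility is the limit of the process, not a state we ever occupy.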

Mike

On 6/16/2020 8:34 PM, Paola Di Maio wrote:
> Thanks for sharing. The bottom line, perhaps, is: can humans, or
> anything, be trustworthy? Especially when most of the tricks we play
> are upon ourselves - in the sense that I have never seen a model of
> infallibility in the human realm.
>
> On Wed, Jun 17, 2020 at 12:30 AM ProjectParadigm-ICT-Program 
> <metadataportals@yahoo.com> wrote:
>
>     An excerpt from EDRi, the EU digital rights watchdog organisation.
>
>     EDRi-gram
>
>     fortnightly newsletter about digital civil rights in Europe
>
>     EDRi-gram 18.11, 10 June 2020
>     Read online: https://edri.org/edri-gram/18-11/
>
>     =======================================================================
>     Contents
>     =======================================================================
>
>     1. Can the EU make AI “trustworthy”? No – but they can make it just
>     2. COVID-Tech: the sinister consequences of immunity passports
>     3. UK: Stop social media monitoring by local authorities
>     4. Cryptocurrency scammers flood Facebook users with manipulative ads
>     5. SHARE’s campaign bears fruit: Google appoints Serbian
>     representatives
>     6. Recommended Action
>     7. Recommended Reading
>     8. Agenda
>     9. About
>
>     =======================================================================
>     1. Can the EU make AI “trustworthy”? No – but they can make it just
>     =======================================================================
>
>     Today, 4 June 2020, European Digital Rights (EDRi) submitted its
>     answer to the European Commission’s consultation on the AI White
>     Paper. On top of our response, in our additional paper we outline
>     recommendations to the European Commission for a fundamental
>     rights-based AI regulation. You can find our consultation response,
>     recommendations paper, and answering guide for the public here.
>
>     How to ensure a “trustworthy AI” has been hotly debated since the
>     European Commission launched its White Paper on AI in February this
>     year. Policymakers and industry have hosted numerous conversations
>     about “innovation”, “Europe becoming a leader in AI”, and promoting
>     a “Fair AI”.
>
>     Yet a “fair” or “trustworthy” artificial intelligence seems a long
>     way off. As governments, institutions and industry swiftly move to
>     incorporate AI into their systems and decision-making processes,
>     grave concerns remain as to how these changes will impact people,
>     democracy and society as a whole.
>
>     EDRi’s response outlines the main risks AI poses for people,
>     communities and society, and sets out recommendations for an
>     improved, truly ‘human-centric’ legislative proposal on AI. We argue
>     that the EU must reinforce the protections already embedded in the
>     General Data Protection Regulation (GDPR), draw clear legal limits
>     for AI by focusing on impermissible use, and foreground principles
>     of collective impact, democratic oversight, accountability, and
>     fundamental rights. Here’s a summary of our main points.
>
>     Put people before industrial policy
>
>     A ‘human-centric’ approach to AI requires that considerations of
>     safety, equality, privacy, and fundamental rights are the primary
>     factors underpinning decisions as to whether to promote or invest
>     in AI.
>
>     However, the European Commission’s White Paper proposal takes as
>     its point of departure the inherent economic benefits of promoting
>     AI, particularly in the public sector. Promoting AI across the
>     public sector as a whole, without requiring scientific evidence to
>     justify the need for or purpose of such applications in potentially
>     harmful situations, is likely to have the most direct consequences
>     on people’s everyday lives, particularly those of marginalised
>     groups.
>
>     Despite wide-ranging applications that could advance our societies
>     (such as some uses in the field of health), we have also seen the
>     vast negative impacts of automated systems at play at the border, in
>     predictive policing systems which exacerbate the over-policing of
>     racialised communities, in ‘fraud detection’ systems which target
>     poor, working-class and migrant areas, and in countless more cases.
>     All such examples highlight the potentially devastating consequences
>     AI systems can have in the public sector, undermining the case for
>     ‘promoting the uptake of AI’ and underlining the need for AI
>     regulation rooted in a human-centric approach.
>
>     "The development of artificial intelligence technology offers huge
>     potential opportunities for improving our economies and societies, but
>     also extreme risks. Poorly-designed and governed AI will exacerbate
>     power imbalances and inequality, increase discrimination, invade
>     privacy
>     and undermine a whole host of other rights. EU legislation must ensure
>     that cannot happen. Nobody's rights should be sacrificed on the
>     altar of innovation," said Chris Jones, Statewatch.
>
>     Address collective harms of AI
>
>     The vast potential scale and impact of AI systems challenge
>     existing conceptions of harm. Whilst in many ways we can view the
>     challenges posed by AI as fundamental rights issues, often the harms
>     perpetrated are much broader, disadvantaging communities, economies,
>     democracy and entire societies. From the impending threat of mass
>     surveillance as a result of biometric processing in
>     publicly-accessible spaces, to the use of automated systems or
>     ‘upload filters’ to moderate content on social media, to severe
>     disruptions to the democratic process, we see that the impact goes
>     far beyond the level of the individual. One specificity of
>     regulating AI is the need to address such societal-level harms.
>
>     Prevent harms by focusing on impermissible use
>
>     Just as the problems with AI are collective and structural, so must
>     be the solutions. The European Commission’s White Paper outlines
>     some safeguards to address ‘high-risk’ AI, such as correcting
>     training data for bias and ensuring human oversight. Whilst these
>     safeguards are crucial, they will not address the irreparable harms
>     which will result from a number of uses of AI.
>
>     “The EU must move beyond technical fixes for the complex problems
>     posed by AI. Instead, the upcoming AI regulation must determine the
>     legal limits, impermissible uses or ‘red-lines’ for AI applications.
>     This is a necessary step for a people-centered, fundamental
>     rights-based AI,” says Sarah Chander, Senior Policy Adviser, EDRi.
>
>     The EDRi network lists some of the impermissible uses of AI:
>
>     - indiscriminate biometric surveillance and biometric capture and
>     processing in public spaces [1]
>     - use of AI to solely determine access to or delivery of essential
>     public services (such as social security, policing, migration control)
>     - uses of AI which purport to identify, analyse and assess emotion,
>     mood, behaviour, and sensitive identity traits (such as race,
>     disability) in the delivery of essential services
>     - predictive policing
>     - autonomous lethal weapons and other uses which identify targets for
>     lethal force (such as law and immigration enforcement)
>
>     "The EU must ensure that states and companies meet their
>     obligations and
>     responsibilities to respect and promote human rights in the context of
>     automated decision-making systems. EU institutions and national
>     policymakers must explicitly recognise that there are legal limits to
>     the use and impact of automation. No safeguard or remedy would make
>     indiscriminate biometric surveillance or predictive policing
>     acceptable, justified or compatible with human rights," said Fanny
>     Hidvegi, Europe Policy Manager at Access Now.
>
>     Require democratic oversight for AI in the public sphere
>
>     The rapidly increasing deployment of AI systems presents a major
>     governance issue. Due to the (designed) opacity of the systems, the
>     complete lack of transparency from governments when such systems are
>     deployed for use in public, essential functions, and the systematic
>     lack of democratic oversight and engagement, AI is furthering the
>     ‘power asymmetry between those who develop and employ AI
>     technologies, and those who interact with and are subject to
>     them’ [2].
>
>     As a result, decisions impacting public services will be more
>     opaque, increasingly privately owned, and even less subject to
>     democratic oversight. It is vital that the EU’s regulatory proposal
>     on AI addresses this by implementing mandatory measures of
>     democratic oversight for the procurement and deployment of AI in the
>     public sector and essential services. Moreover, the EU must explore
>     methods of direct public engagement on AI systems. In this regard,
>     authorities should be required to specifically consult marginalised
>     groups likely to be disproportionately impacted by automated
>     systems.
>
>     Implement the strongest possible fundamental rights protections
>
>     Regulation on AI must reinforce, rather than replace, the
>     protections already embedded in the General Data Protection
>     Regulation (GDPR). The European Commission has the opportunity to
>     complement these protections with safeguards for AI. To put people
>     first and provide the strongest possible protections, all systems
>     should undergo mandatory human rights impact assessments. These
>     assessments should evaluate the collective, societal, institutional
>     and governance implications a system poses, and outline adequate
>     steps to mitigate them.
>
>     "The deployment of such systems for predictive purposes comes with
>     high risks of human rights violations. Introducing ethical
>     guidelines & standards for the design and deployment of these tools
>     is welcome, but not enough. Instead, we need the European Union and
>     Member States to ensure compliance with the applicable regulatory
>     frameworks, and draw clear legal limits to ensure AI is always
>     compatible with fundamental rights," says Eleftherios Chelioudakis
>     from Homo Digitalis.
>
>     EDRi’s position calls for fundamental rights to be prioritised in
>     the regulatory proposal for all AI systems, not only those
>     categorised as ‘high-risk’. We argue AI regulation should avoid
>     creating loopholes or exemptions based on sector, size of
>     enterprise, or whether or not the system is deployed in the public
>     sector.
>
>     "It is crucial for the EU to recognize that the adoption of AI
>     applications is not inevitable. The design, development and deployment
>     of systems must be tested against human rights standards in order to
>     establish their appropriate and acceptable use. Red lines are thus an
>     important piece of the AI governance puzzle. Recognizing impermissible
>     use at the outset is particularly important because of the
>     disproportionate, unequal and sometimes irreversible ways in which
>     automated decision-making systems impact societies," said Vidushi
>     Marda, Senior Programme Officer at ARTICLE 19.
>
>     The rapid uptake of AI will fundamentally change our society. From
>     a human rights perspective, AI systems have the ability to
>     exacerbate surveillance and intrusion into our personal lives,
>     fundamentally alter the delivery of public and essential services,
>     vastly undermine vital data protection legislation, and disrupt the
>     democratic process.
>
>     For some, AI will mean reinforced, deeper harms as such systems
>     feed and embed existing processes of marginalisation. For all, the
>     route to remedies, accountability, and justice will be ever more
>     unclear, as this power asymmetry shifts further toward private
>     actors and public goods and services become not only automated but
>     privately owned.
>
>     There is no “trustworthy AI” without clear red-lines for
>     impermissible use, democratic oversight, and a truly fundamental
>     rights-based approach to AI regulation. The European Union’s
>     upcoming legislative proposal on artificial intelligence is a major
>     opportunity to change this: to protect people and democracy from the
>     escalating economic, political and social issues posed by AI.
>
>     Footnotes:
>
>     [1] EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of
>     fundamental
>     rights demands for the European Commission and Member States’
>     https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf
>
>
>     [2] Council of Europe (2019). ‘Responsibility and AI’, DGI(2019)05,
>     Rapporteur: Karen Yeung.
>     https://rm.coe.int/responsability-and-ai-en/168097d9c5
>
>     Read more:
>
>     EDRi Consultation Response: European Commission Consultation on the
>     White Paper on Artificial Intelligence
>     https://edri.org/wp-content/uploads/2020/06/AI_EDRiConsultationResponse.pdf
>
>     EDRi Recommendations for a Fundamental Rights-based Artificial
>     Intelligence Regulation: Addressing collective harms, democratic
>     oversight and impermissible use
>     https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf
>
>     Access Now Consultation Response: European Commission Consultation on
>     the White Paper on Artificial Intelligence
>     https://www.accessnow.org/EU-white-paper-consultation
>
>     Bits of Freedom (2020). ‘Facial recognition: A convenient and
>     efficient
>     solution, looking for a problem?’
>     https://www.bitsoffreedom.nl/2020/01/29/facial-recognition-a-convenient-and-efficient-solution-looking-for-a-problem/
>
>
>     EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental
>     rights demands for the European Commission and Member States’
>     https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf
>
>
>     Privacy International and Article 19 (2018). ‘Privacy and Freedom of
>     Expression in the Age of Artificial Intelligence’
>     https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf
>
>
>
>     Milton Ponson
>     GSM: +297 747 8280
>     PO Box 1154, Oranjestad
>     Aruba, Dutch Caribbean
>     Project Paradigm: Bringing the ICT tools for sustainable
>     development to all stakeholders worldwide through collaborative
>     research on applied mathematics, advanced modeling, software and
>     standards development
>
-- 
__________________________________________

Michael K. Bergman
Cognonto Corporation
319.621.5225
skype:michaelkbergman
http://cognonto.com
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________

Received on Wednesday, 17 June 2020 01:50:53 UTC