The Cultural Life of Machine Learning

For those of us who enjoy and learn from discussions, an interesting workshop has a CFP which may be worth reading. Too late to submit, but still relevant to AI KR, imho.

https://aoir.org/aoir2018/preconfwrkshop/

*The Cultural Life of Machine Learning: An Incursion into Critical AI
Studies*

Jonathan Roberge, Michael Castelle, and Thomas Crosbie

Machine learning (ML), deep neural networks, differentiable programming, and related contemporary novelties in artificial intelligence (AI) are all contributing to an ambiguous yet effective narrative promoting the dominance of a scientific field as well as of a ubiquitous business model. Indeed, AI is very much in full hype mode. For its
advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’
(Sejnowski, 2018)—terms indicative of a very performative and promotional,
if not self-fulfilling, discourse. The question, then, is: how are the
social sciences and humanities to dissect such a discourse and make sense
of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political, and cultural repercussions and the ways they relate to different disciplines, from sociology to communication and
Internet studies. The crucial task ahead is understanding the specific ways
by which the new challenges raised by ML and AI technologies affect this
wider framework. This would imply not only closer collaboration among disciplines, including STS, but also the development of new critical insights and perspectives. A helpful and precise question for this pre-conference workshop, then, could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality, each of which generates new challenges as well as new directions for inquiry.

In terms of epistemology, it is important to recognize that ML and AI are
situated forms of knowledge production, and thus worthy of empirical
examination (Pinch and Bijker, 1987). At present, we only have internal
accounts of the historical development of the machine learning field, which
increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert, 1969; Vapnik, 1998) and rise (Hinton, 2006),
concluding with the diverse if as-yet unproven applications of deep
learning. Especially problematic in this regard is our understanding of how
these techniques are increasingly hybridized with large-scale training
datasets, specialized graphics-processing hardware, and algorithmic
calculus. The rationale behind contemporary ML finds its expression in a
very specific laboratory culture (Forsythe 1993), with a specific ethos or
model of “open science”. Models trained on the largest datasets of private
corporations are thus made freely available, and subsequently détourned for
the new AI’s semiotic environs of image, speech, and text—promising to make
the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data
newly “manageable”.

As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, they create a distinctly new form of agency.
Unlike the static, rule-based systems critiqued in a previous generation by
Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of
decontextualized classifications. What then does agency mean for machine
learners (Mackenzie, 2017)? Performance in this particular case relates to
the power of inferring and predicting outcomes (Burrell, 2016); new kinds
of algorithmic control thus emerge at the junction of meaning-making and
decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to impact numerous aspects of daily life. Social media, for instance, are undergoing
radical change, as insightful new actants come to populate the world: Echo
translates your desires into Amazon purchases, and Facebook is now able to
detect suicidal behaviours. In the domain of work, too, these actants leave permanent traces, not only on repetitive tasks but on broader intellectual responsibilities.

Last but not least, the final regime to explore in this preconference
workshop is governmentality. The politics of ML and AI have yet to be outlined, and the question of the power these techniques wield remains largely unexplored. Governmentality refers specifically to how a field is
organised—by whom, for what purposes, and through which means and
discourses (Foucault, 1991). As stated above, ML and AI are based on a
model of open science and innovation, in which public actors—such as
governments and universities—are deeply implicated (Etzkowitz and
Leydesdorff, 2000). One problem, however, is that while the algorithms
themselves may be openly available, the datasets on which they rely for
implementation are not; hence the massive advantages for private actors such as Google or Facebook, who control both the data and the economic resources to attract the brightest students in the field. But there is
more: this same open innovation model makes possible the manufacture of
military AI with little regulatory oversight, as is the case for China,
whose government is currently helping to fuel an AI arms race (Simonite, 2017). What alternatives or counter-powers could be imagined in these
circumstances? Could ethical considerations stand alone without a proper
and fully developed critical approach to ML and AI? This workshop will try
to address these pressing and interconnected issues.

We welcome all submissions that might profitably connect with one or more of these three categories of epistemology, agency, and governmentality, but we also welcome other theoretically and/or empirically rich contributions.

We invite interested scholars to submit proposal abstracts, of
approximately 250 words, by 11:59pm on June 30, 2018 to CriticalAI2018 [at]
gmail [dot] com. Proposals may represent works in progress, short position
papers, or more developed research. The format of the workshop will focus
on paper presentations and a keynote, with additional opportunities for
group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture
Société Research Centre of INRS (Institut national de la recherche
scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC,
and is about a 20-minute métro ride from the Centre Sheraton on the STM Orange Line (enter at Bonaventure station, exit at Sherbrooke), or about a
30-minute walk along Rue Sherbrooke.

Received on Wednesday, 25 July 2018 14:09:29 UTC