- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Sat, 28 Jul 2018 10:57:21 +0530
- To: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Cc: "public-aikr@w3.org" <public-aikr@w3.org>
- Message-ID: <CAMXe=SqxcHUfVT6104JScFfGX54s6MwWzc9h5seg8sH0kBtRjA@mail.gmail.com>
Well, I don't have a problem with adding a form; in fact, you can go ahead and create it, or dump the form fields here and I'll create them.

On the other hand, are these other aspects beyond computer science and machine learning 'separate' from AI? From an integrated socio-technical perspective, they are not (assuming you are referring to legislation, policy, ethics and all that).

Please share with us what categories you are working on and how you picked them, for discussion, and in what format you have your category theory material.

Thanks

<https://www.w3.org/community/aikr/>
*A bit about me <https://about.me/paoladimaio>*

On Fri, Jul 27, 2018 at 11:27 PM, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:

> Why don't we do the following: one form for AI, one for KR, and one for
> all frameworks for looking at AI and KR in ways that incorporate aspects
> beyond computer science, machine learning, NLP, reasoning and problem
> solving.
>
> In the end we can then select from form three a set upon which we can
> all agree to work.
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
> On Thursday, July 26, 2018 11:59 AM, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> Milton, thank you.
>
> Provided nobody disagrees - is the proposed category theory approach OK
> for everybody?
>
> Can you explain what you mean by a category theory approach: for the
> purpose of our proposed process and data collection form, or for any
> other aspect of the proposed work?
>
> Do we need more categories in the forms? (I'll give editing access to
> anyone who is interested in editing the forms.)
>
> For me it is OK to include everything that members think is relevant,
> provided the data/info we collect and analyse is sorted/modelled in
> some logical way.
>
> I have just been pondering splitting the forms, or even creating more
> forms if necessary, to help us achieve that.
>
> Dr Paola Di Maio
> Center For Technology Ethics
> ISTCS.org and Chair: W3C AIKR <https://www.w3.org/community/aikr/>
>
> *A bit about me <https://about.me/paoladimaio>*
>
> On Thu, Jul 26, 2018 at 9:08 PM, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
>
> Again, let me emphasize why we need to use a category theory approach at
> the meta level to deal with AI and KR.
>
> It makes possible a unified approach incorporating all disciplines that
> have a bearing on both subjects.
>
> In the end, all disciplines use domains of discourse at the lowest
> levels, which results in the use of ontologies.
>
> It would be very useful to make a list of all disciplines that have a
> body of work, in terms of publications, dealing with AI and KR, so we
> can assess all the relevant aspects investigated so far.
>
> Review and overview studies would be useful and would normally be found
> in the science assessment and technology assessment literature.
>
> Maybe a good general starting point to also include social impacts and
> ethics?
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
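To make the category-theoretic framing above a little more concrete: one common reading of "category theory at the meta level" treats each discipline's ontology as a small category (concepts as objects, directed relations as morphisms) and a cross-discipline alignment as a functor between such categories. The Python sketch below only illustrates that reading; every name in it (Relation, Ontology, is_structure_preserving, the toy concepts) is invented for this example and is not taken from the thread.

```python
# Illustrative sketch only (all names invented for this example): each
# discipline's domain of discourse is modelled as a tiny category, with
# concepts as objects and directed relations as morphisms. An alignment
# between two disciplines' ontologies is then checked for structure
# preservation, i.e. whether it behaves like a functor on the relations.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Relation:
    name: str
    source: str  # concept the relation starts from
    target: str  # concept the relation points to

@dataclass
class Ontology:
    concepts: set = field(default_factory=set)        # objects
    relations: list = field(default_factory=list)     # morphisms

def is_structure_preserving(obj_map: dict, rel_map: dict,
                            src: Ontology, dst: Ontology) -> bool:
    """Check that every source relation is mapped to a relation of the
    target ontology whose endpoints are the images of the original
    endpoints. (A full functor check would also verify identities and
    composition; this sketch checks endpoint preservation only.)"""
    for r in src.relations:
        image = rel_map.get(r.name)
        if image is None or image not in dst.relations:
            return False
        if (image.source, image.target) != (obj_map[r.source], obj_map[r.target]):
            return False
    return True

# Toy example: a computer-science view and an ethics/policy view of AI.
cs = Ontology({"Engineer", "Model"},
              [Relation("trains", "Engineer", "Model")])
ethics = Ontology({"Actor", "Artifact"},
                  [Relation("isAccountableFor", "Actor", "Artifact")])

obj_map = {"Engineer": "Actor", "Model": "Artifact"}
rel_map = {"trains": Relation("isAccountableFor", "Actor", "Artifact")}

print(is_structure_preserving(obj_map, rel_map, cs, ethics))  # -> True
```

The point of the check is that an alignment which scrambles a relation's endpoints is rejected, which is the structure preservation a functor requires, and which is what would let one ontology's claims be carried over into another discipline's vocabulary without distortion.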
> On Wednesday, July 25, 2018 10:09 AM, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> For those of us who enjoy and learn from discussions, an interesting
> workshop has a CFP which may be worth reading. Too late to submit, but
> still relevant to AI KR imho.
>
> https://aoir.org/aoir2018/preconfwrkshop/
>
> *The Cultural Life of Machine Learning: An Incursion into Critical AI Studies*
> Jonathan Roberge, Michael Castelle, and Thomas Crosbie
>
> Machine learning (ML), deep neural networks, differentiable programming
> and related contemporary novelties in artificial intelligence (AI) are
> all leading to the development of an ambiguous yet efficient narrative
> promoting the dominance of a scientific field, as well as a ubiquitous
> business model. Indeed, AI is very much in full hype mode. For its
> advocates, it represents a 'tsunami' (Manning, 2015) or 'revolution'
> (Sejnowski, 2018), terms indicative of a very performative and
> promotional, if not self-fulfilling, discourse. The question, then, is:
> how are the social sciences and humanities to dissect such a discourse
> and make sense of all its practical implications?
>
> So far, the literature on algorithms and algorithmic cultures has been
> keen to explore both their broad socio-economic, political and cultural
> repercussions, and the ways they relate to different disciplines, from
> sociology to communication and Internet studies. The crucial task ahead
> is understanding the specific ways in which the new challenges raised
> by ML and AI technologies affect this wider framework. This would imply
> not only closer collaboration among disciplines, including STS for
> instance, but also the development of new critical insights and
> perspectives. Thus a helpful and precise pre-conference workshop
> question could be: what is the best way to develop a fine-grained yet
> encompassing field under the name of Critical AI Studies? We propose to
> explore three regimes in which ML and 21st-century AI crystallize and
> come to justify their existence: (1) epistemology, (2) agency, and (3)
> governmentality, each of which generates new challenges as well as new
> directions for inquiry.
>
> In terms of epistemology, it is important to recognize that ML and AI
> are situated forms of knowledge production, and thus worthy of
> empirical examination (Pinch and Bijker, 1987). At present, we only
> have internal accounts of the historical development of the machine
> learning field, which increasingly reproduce a teleological story of
> its rise (Rosenblatt, 1958), fall (Minsky and Papert, 1968; Vapnik,
> 1998) and rise again (Hinton, 2006), concluding with the diverse if
> as-yet unproven applications of deep learning. Especially problematic
> in this regard is our understanding of how these techniques are
> increasingly hybridized with large-scale training datasets, specialized
> graphics-processing hardware, and algorithmic calculus. The rationale
> behind contemporary ML finds its expression in a very specific
> laboratory culture (Forsythe, 1993), with a specific ethos or model of
> "open science".
> Models trained on the largest datasets of private corporations are thus
> made freely available, and subsequently détourned for the new AI's
> semiotic environs of image, speech, and text, promising to make the
> epistemically recalcitrant landscapes of unruly and 'unstructured' data
> newly "manageable".
>
> As the knowledge-production techniques of ML and AI move further into
> the fabric of everyday life, they create a distinctly new form of
> agency. Unlike the static, rule-based systems critiqued in a previous
> generation by Dreyfus (1972), modern AI models pragmatically unfold as
> a temporal flow of decontextualized classifications. What, then, does
> agency mean for machine learners (Mackenzie, 2017)? Performance in this
> particular case relates to the power of inferring and predicting
> outcomes (Burrell, 2016); new kinds of algorithmic control thus emerge
> at the junction of meaning-making and decision-making. The implications
> of this question are tangible, particularly as ML becomes more
> unsupervised and begins to impact numerous aspects of daily life.
> Social media, for instance, are undergoing radical change, as
> insightful new actants come to populate the world: Echo translates your
> desires into Amazon purchases, and Facebook is now able to detect
> suicidal behaviours. In the general domain of work, too, these actants
> leave permanent traces, not only on repetitive tasks but also on
> broader intellectual responsibility.
>
> Last but not least, the final regime to explore in this pre-conference
> workshop is governmentality. The politics of ML and AI are still
> largely to be outlined, and the question of power for these techniques
> remains largely unexplored. Governmentality refers specifically to how
> a field is organised: by whom, for what purposes, and through which
> means and discourses (Foucault, 1991). As stated above, ML and AI are
> based on a model of open science and innovation, in which public
> actors, such as governments and universities, are deeply implicated
> (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while
> the algorithms themselves may be openly available, the datasets on
> which they rely for implementation are not; hence the massive
> advantages for private actors such as Google or Facebook, who control
> the data as well as the economic resources to attract the brightest
> students in the field. But there is more: this same open innovation
> model makes possible the manufacture of military AI with little
> regulatory oversight, as is the case for China, whose government is
> currently helping to fuel an AI arms race (Simonite, 2017). What
> alternatives or counter-powers could be imagined in these
> circumstances? Could ethical considerations stand alone without a
> proper and fully developed critical approach to ML and AI? This
> workshop will try to address these pressing and interconnected issues.
>
> We welcome all submissions which might profitably connect with one or
> more of these three categories of epistemology, agency, and
> governmentality, but we also welcome other theoretically and/or
> empirically rich contributions.
>
> We invite interested scholars to submit proposal abstracts of
> approximately 250 words, by 11:59pm on June 30, 2018, to CriticalAI2018
> [at] gmail [dot] com. Proposals may represent works in progress, short
> position papers, or more developed research. The format of the workshop
> will focus on paper presentations and a keynote, with additional
> opportunities for group discussion and reflection.
> This pre-conference workshop will be held at the Urbanisation Culture
> Société Research Centre of INRS (Institut national de la recherche
> scientifique). The Centre is located at 385 Sherbrooke St E, Montreal,
> QC, and is about a 20-minute train ride from the Centre Sheraton on the
> STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or
> about a 30-minute walk along Rue Sherbrooke.
Received on Saturday, 28 July 2018 05:27:47 UTC