- From: Janina Sajka <janina@rednote.net>
- Date: Wed, 27 Oct 2021 09:11:11 -0400
- To: W3C WAI Accessible Platform Architectures <public-apa@w3.org>, W3C WAI ARIA <public-aria@w3.org>, public-pronunciation@w3.org
Colleagues:
During our second session I referenced a book that discussed how biases
enter into artificial intelligence systems, often unintentionally, and
how design decisions can entrench tradeoffs. I have found the title and
follow up with that info here:
Title: The Alignment Problem: Machine Learning and Human Values
Author: Christian, Brian
ISBN-13: 978-0393635829
ISBN-10: 0393635821
Link at Amazon is:
https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/0393635821
It's available on BookShare and from NLS for those with access to those
resources.
Best,
Janina
--
Janina Sajka
https://linkedin.com/in/jsajka
Linux Foundation Fellow
Executive Chair, Accessibility Workgroup: http://a11y.org
The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Co-Chair, Accessible Platform Architectures http://www.w3.org/wai/apa
Received on Wednesday, 27 October 2021 13:11:28 UTC