Re: TED talk on algorithm bias

Hi Alastair,

Yes, there was a focus on AI there, but the main point on bias was
illustrative. I think the issue boils down to this: the distinction between
the implication of the talk (and its breath-taking title - *Weapons of Math
Destruction*) that ALL algorithms are biased, versus simply being aware of
bias as we develop an algorithm, was, if not lost, quite convoluted.

There is bias everywhere, whether human or AI in origin. We, as accessibility
experts, are biased from our perspective. Advocates for any particular
disability group are biased from *their* perspective. In the first video,
even the "listeners of the orchestra auditions" were biased - and in the
video, in two ways: initially they were biased by seeing the audition
candidate, and then, when that candidate was placed behind the curtain,
they were biased by what they heard (how they heard it, their judgement of
what they heard, etc. - if you are a fan of any of the reality singing
shows, we also know that even song selection matters!).

My frustration stems from the fact that the work we are engaged in -
improving access for people with various disabilities - will also be biased,
based on perspective. Rather than ignoring that bias, we need to be aware
of it, and then hopefully act accordingly. We aren't going to be perfect,
individual users will still be negatively impacted (to some degree or
another), and I struggle with that. But, rather than ignore bias, we have
to work with it, understand it, and try and address it. And we also have to
accept that Silver will not miraculously address all needs of all users
equitably, as much as that sucks or isn't fair. (What's the common
expression? "Don't let perfect be the enemy of good"?)

In the end, the scoring of Silver (the topic which I suspect prompted the
sharing of these videos at this time) will require an algorithm - I cannot
see any way out of that. Rather than posting videos that suggest that
algorithms are *Weapons of Math Destruction*, we should perhaps focus on
ensuring we address as many needs of as many different types of disability
as we can - in part by recognizing bias, but also by using that bias in a
positive and instructive way.
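
(To make that concrete: below is a purely hypothetical sketch, in Python, of
what an explicit and transparent scoring calculation might look like. The
user groups, weights, and pass rates are placeholders I've invented for
illustration - they are not anything proposed for Silver. The point is only
that when the calculation is written out explicitly, the bias it embeds -
which groups are counted, and how each one is weighted - is at least visible
and open to review.)

    # Hypothetical, illustrative scoring sketch - not a Silver proposal.
    # Per-user-group results for an imagined set of accessibility tests,
    # expressed as pass rates between 0.0 and 1.0 (made-up numbers).
    group_results = {
        "blind / low-vision": 0.80,
        "deaf / hard-of-hearing": 0.95,
        "motor": 0.70,
        "cognitive": 0.60,
    }

    # Equal weighting is itself a bias choice; so is any other weighting.
    weights = {group: 1.0 for group in group_results}

    total_weight = sum(weights.values())
    score = sum(weights[g] * r for g, r in group_results.items()) / total_weight

    # Because the calculation is explicit, anyone can see which groups are
    # included and how much each one counts toward the overall score.
    print(f"Overall score: {score:.2f}")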

I would rather account for the needs of different disability user-groups
(and adjust our algorithm to do so) than ignore them. Yet I've been told
that when we look at the needs of different user-groups (as part of any
algorithm calculation), my (our?) bias will still leave some users behind,
and that therefore the scoring mechanism should not use those needs as
variables in the algorithm at all. That's like me telling the orchestra
audition folks that putting up the screen introduces a new kind of bias
(it did, BTW), so we shouldn't be doing that either, because it
disadvantages male applicants...

JF

On Thu, Jul 11, 2019 at 4:49 AM Alastair Campbell <acampbell@nomensa.com>
wrote:

> Hi John,
>
>
>
> I’m missing a bit of context on this but I read the book a while ago and I
> think it’s worth drawing out the difference between:
>
>    1. A human-written algorithm, and
>    2. A machine-learning (ML) generated algorithm.
>
>
>
> Both can be biased, but in current implementations the ML versions are
> black boxes. There is no transparency: after the learning phase you ask it
> a question and get an answer. If it is biased you have no means of knowing
> in what way (apart from analysing the answers separately).
>
>
>
> The book is a polemic, but the core problem is real because it is based on
> a logical extension of garbage-in garbage out. If the data (e.g. a real
> life situation) is biased, the input the ML uses is biased and it will
> continue that bias. That’s how it works.
>
> For example, if the current data shows that people with disabilities are
> less likely to have a job (due to discrimination or otherwise), an ML-based
> assessment of job applicants would embed that bias unless some action is
> taken to prevent it.
>
>
>
> I’m very interested in the work Judy mentioned, I’ll read up on that. A
> couple of days ago I sketched a normal distribution to indicate UX work,
> and then flattened it to indicate accessibility work, so that metaphor has
> been bouncing around my brain for a while!
>
>
>
> What I’m not sure about is how this applies to accessibility guidelines. I
> have assumed so far we’d be talking about explicit & transparent
> calculations / algorithms, rather than ML?
>
>
>
> Cheers,
>
>
>
> -Alastair
>
>
>
>
>
> *From: *John Foliot
>
>
>
> Thanks, Jeanne, for sharing these. I've not spent the requisite time with
> the longer video, but did review the shorter one.
>
>
>
> I have to say that I am personally concerned by the seemingly definitive
> declaration that "...blind faith in big data must end..." as nobody
> (certainly not me) has suggested that we put blind faith in anything. But
> data is what we are working on and with; it is in many ways our stock in
> trade, and is what "measurement" is all about. Measurement is data (whether
> big or small).
>
>
>


-- 
*​John Foliot* | Principal Accessibility Strategist | W3C AC Representative
Deque Systems - Accessibility for Good
deque.com
