Despite its potential for beneficial use, AI creates important risks for Australians. AI, big data, and AI-informed decision-making can cause exclusion, discrimination, skill loss, and economic harm, and can affect privacy, the security of critical infrastructure, and social well-being. What types of technology raise particular human rights concerns? Which human rights are particularly implicated?
3. Overview of key AI capabilities
Automation of routine cognitive tasks
Issues: (1) companies may oversimplify a “routine” task, ignoring important human judgements, or (2) go well beyond “routine” to advanced cognition; this is where the problems lie.
6. “From clicks to constructs”
AIDA tools must map from the low-level data signals that computers log to higher-level categories that are meaningful to humans.
Issues: this mapping always makes assumptions, which may be accidental or intentional, and can be challenged.
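A minimal sketch of that mapping, using hypothetical event names and weights: low-level logged events are aggregated into a score for a higher-level construct (here “persistence”), and every interpretive choice is an assumption a designer has baked in.

```python
# Hypothetical sketch: mapping low-level logged "clicks" to a higher-level
# construct. The event names, the interpretations, and the weights below are
# all illustrative design choices, and each one can be challenged.

def persistence_score(events):
    """Estimate 'persistence' from a raw event log.

    Assumption 1: retrying after a failure is evidence of persistence.
    Assumption 2: quitting a task is evidence of its absence.
    Both are contestable readings of the same raw signals.
    """
    retries = sum(1 for e in events if e == "retry_after_failure")
    quits = sum(1 for e in events if e == "quit_task")
    return retries - 2 * quits  # the weight 2 is yet another design choice


log = ["retry_after_failure", "retry_after_failure", "quit_task"]
print(persistence_score(log))  # 0: two retries cancelled by one quit
```

Changing any assumption (what counts as a retry, how heavily quitting is penalised) changes the “construct” the tool reports, which is exactly why such mappings are open to challenge.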
7. “From clicks to constructs”: educational example
What can a computer game tell us about a student’s “Conscientiousness”?
10. Shute, V. J. and Ventura, M. (2013). Stealth Assessment: Measuring and Supporting Learning in Video Games. Cambridge, MA: MIT Press.
Figure 5 from report to The John D. and Catherine T. MacArthur Foundation Reports on Digital Media and Learning.
http://myweb.fsu.edu/vshute/pdf/Stealth_Assessment.pdf
11. “From clicks to constructs”
Substitute “student” with “employee”: “Employee is competent”, “Responsive”, “Expert”, “Team player”
12. Making streams of data meaningful:
Substitute “student” with “employee”: “Employee is competent”, “Responsive”, “Expert”, “Team player”
These are like normal performance KPIs: a set of behaviours set as approximate indicators that you have the qualities the job requires. (We can debate whether they’re sensible, of course.)
14. Flawed metrics of teacher quality
“Value Added Models” (VAMs) are automated assessments of teacher quality, causing huge controversy in the US.
http://vamboozled.com
https://www.edutopia.org/blog/vams-instructor-effectiveness-unpacking-debate-david-stroupe
15. Natural Language Processing (text analytics)
NLP enables computers to understand human speech and writing (e.g. Siri; language translation).
Issues: • risk of bias in the data on which it was trained • how effectively it feeds back to the user • whether it is clear if you’re talking to a human or an AI (e.g. over the phone)
17. Shortlisting for interviews: resumé analysis
Examples are now being documented of systematic gender and racial bias in automated systems, because the training data of ‘desirable employees’ described a white, male world.
https://www.weforum.org/agenda/2019/05/ai-assisted-recruitment-is-biased-heres-how-to-beat-it/
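The mechanism can be illustrated in a few lines with entirely made-up data: a screening rule estimated from a biased hiring history learns that a proxy feature correlated with gender predicts rejection, and then applies that rule to new applicants.

```python
# Hypothetical sketch of biased resumé screening. The "history" below is
# invented for illustration: past hiring decisions penalised a CV keyword
# that acts as a proxy for gender, so a model fitted to that history does too.
from collections import Counter

def train_hire_rate(history, feature):
    """Estimate P(hired | feature value) from past decisions."""
    counts = Counter((row[feature], row["hired"]) for row in history)
    rates = {}
    for value in {row[feature] for row in history}:
        hired = counts[(value, True)]
        total = hired + counts[(value, False)]
        rates[value] = hired / total
    return rates


# Biased past: the same activity, described with a gendered keyword,
# correlates with rejection in the historical decisions.
history = (
    [{"keyword": "chess club", "hired": True}] * 8
    + [{"keyword": "chess club", "hired": False}] * 2
    + [{"keyword": "women's chess club", "hired": True}] * 2
    + [{"keyword": "women's chess club", "hired": False}] * 8
)
print(train_hire_rate(history, "keyword"))
# {'chess club': 0.8, "women's chess club": 0.2} (key order may vary)
```

Nothing in the fitting step is malicious; the model faithfully reproduces the pattern in its training data, which is the documented failure mode.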
18. Personal assistant example: booking a table
Impressive but controversial Google demo in which an AI system booked a table over the phone.
https://youtu.be/-RHG5DFAjp8
19. Predictive modelling
“Based on the past, statistically speaking, we expect the future to look like this…”
Issues: what if there are good reasons to think that the past is not, or should not be, a good guide to the future? If data from the past has biases that we now recognise, we should not perpetuate those.
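The core logic of “the future will look like the past” can be reduced to a toy predictor: whatever outcome dominated the historical record is what gets predicted next. The data here is invented; the point is that a statistically sound rule preserves any bias the record contains.

```python
# Sketch of a purely frequency-based predictor. If the historical record
# embeds a bias, the "statistically correct" prediction perpetuates it.
from collections import Counter

def predict_next(history):
    """Predict the most frequent past outcome as the future outcome."""
    return Counter(history).most_common(1)[0][0]


# Hypothetical record of past decisions, skewed towards denial.
past_decisions = ["deny", "deny", "deny", "approve"]
print(predict_next(past_decisions))  # 'deny'
```

Real predictive models are far more sophisticated, but the objection on the slide applies to them all the same: they optimise for agreement with the past, not for whether the past was right.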
20. Predicting future criminality
A statistical model powering a product used in US courts to judge the likelihood of reoffending has been demonstrated to be systematically biased against blacks.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
21. Image and video analysis
Machine learning enables computers to classify images or videos.
Issues: as with any machine learning system, like a child, it learns based on what examples it’s shown. If the examples are a distorted view of the world, it will learn and perpetuate those categories.
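The “learns what it’s shown” point can be demonstrated with a deliberately tiny learner and invented data: if every “cat” example shares an incidental property (here, a single made-up “background” feature), the learner ties the label to that property rather than to anything about cats.

```python
# Sketch: a 1-nearest-neighbour classifier on toy, invented examples.
# The feature and labels are hypothetical; the lesson is that a skewed
# training set teaches the model a spurious shortcut.

def nearest_label(sample, training):
    """Return the label of the training example closest to the sample."""
    return min(training, key=lambda ex: abs(ex[0] - sample))[1]


# Single feature: 0 = photo taken indoors, 1 = outdoors.
# Skew: every "cat" example is indoors, every "dog" example is outdoors.
training = [(0, "cat"), (0, "cat"), (1, "dog"), (1, "dog")]
print(nearest_label(0, training))  # 'cat': any indoor photo is a "cat"
```

A production image classifier replaces the single feature with millions of pixels, but the failure mode is identical: a distorted sample of the world becomes the model’s definition of the categories.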
22. Facial recognition is becoming a commodity service
https://aws.amazon.com/rekognition/
https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced
23. So, in the future, we’ll be able to track all racial features at equally high definition (fairness); nobody will be left out of the surveillance (ethics)
https://blogs.microsoft.com/ai/gender-skin-tone-facial-recognition-improvement/
25. Profiling & recommendation
A user is guided towards a choice based upon a profile of similar users and/or past behaviour.
Issues: profiles can be poor and the recommended service or product undesired. Sometimes the user cannot overrule the recommendation; sometimes doing so is annoying or difficult.
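A stripped-down sketch of profile-based recommendation, with invented profiles: the user is steered towards whatever similar users chose, so a crude notion of “similar” directly produces the poor recommendations the slide warns about.

```python
# Hypothetical sketch of "users like you also chose..." recommendation.
# Profiles are sets of liked items; similarity is deliberately crude
# (any shared item), which is exactly how profiles come to be poor.

def recommend(user_likes, other_profiles):
    """Recommend the item most popular among users who share a liked item."""
    scores = {}
    for profile in other_profiles:
        if user_likes & profile:  # crude similarity: any overlap at all
            for item in profile - user_likes:
                scores[item] = scores.get(item, 0) + 1
    return max(scores, key=scores.get) if scores else None


others = [{"a", "b"}, {"a", "b"}, {"a", "c"}]
print(recommend({"a"}, others))  # 'b': two "similar" users chose it
```

The user has no lever inside this function to overrule the output; whether the surrounding product offers one, and how easy it is to use, is precisely the design issue raised above.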
28. Trade unions: both resisting and engaging with AI
Issues: we need trust-building conversations for an informed dialogue, and reskilling pathways.
29. Teaching union concerns about AI-powered learning platforms
https://twitter.com/AGavrielatos/status/1121704316069236739
https://twitter.com/hashtag/TellPearson
30. https://www.weforum.org/reports/the-future-of-jobs-report-2018
“Insufficient reskilling and upskilling: Employers indicate that they are set to prioritize and focus their re- and upskilling efforts on employees currently performing high-value roles as a way of strengthening their enterprise’s strategic capacity [...] In other words, those most in need of reskilling and upskilling are least likely to receive such training.” (p. ix)
31. Excellent resources at sites such as the RSA Future Work Centre
https://www.thersa.org/action-and-research/rsa-projects/economy-enterprise-manufacturing-folder/the-future-of-work
32. AI Now Institute; Data & Society Institute
https://ainowinstitute.org
https://datasociety.net
33. There are ways to empower employees if they are given a genuine voice in the AIDA design process