I started working with user experience (UX) long before the term was even known. Over the past 40 years, I’ve encountered many issues that have disturbed me – from the creation of purposely addictive programs, sites, and apps, to the current fashion for design trends that come at the expense of basic usability. I have seen research that is faked, ignored, or twisted by internal company politics and by the cognitive bias of the design team. And I have seen countless dark patterns that suppress accessibility and diversity by promoting false beliefs and false security.
Whenever we say, “That’s not my problem,” or, “My company won’t let me do that,” we are handing over our ethical responsibility to someone else – for better or for worse. Can innocent decisions evolve into racism or gender discrimination through inadvertent cognitive bias or unwitting apathy? Far too often they do.
We, as technologists, hold incredible power to shape the things to come. I would like to share my thoughts with you so you can use this power to truly build a better world for those who come after us!
The Ethics of AI – dealing with difficult choices in a non-binary world
1. The Ethics of AI
Dealing with difficult choices
in a non-binary world
Eric Reiss
@elreiss
ML Conference
June 18, 2019
Munich, Germany
2. Disclaimer
Absolutely no attempt has been made
to make this presentation politically correct.
No animals were harmed during the production
of this PowerPoint (even though I tried).
Made entirely of recycled electrons.
3. Disclaimer #2
If you’ve heard this all before, my apologies.
If you’ve ignored these issues, step up and take a stand.
18. • It’s not just “1” and “0”
• Or right and wrong
• Or “yes” and “no”
• Or “black” and “white”
The world is grey and difficult. Learn to live with it.
The world isn’t binary
39. • Privacy
• Security
• Intellectual property rights
plus
• Diversity
• Inclusion
• Harassment
Key ethical issues today
40. • Is this right?
• Is this respectful?
• Is this responsible?
• Is this fair?
• Is this legal?
Questions we need to ask
41. 1. Manipulating the research
2. Faking the content
3. Promoting addiction
4. Dark patterns
5. Faking communities of thought
6. Offensive AI
7. AI theatre
Seven deadly sins
45. • Monitor your database (more in a moment)
• Watch out for personal or political agendas
• Think about your own moral and ethical responsibility
• Call bullshit when you see it!
What you can do
46. • Being principled is challenging
• There are consequences to your actions
• Be gentle if you can
• The greater the ethical violation, the harder you need to push
• Sometimes, it’s good to get fired
Some thoughts on “calling bullshit”
50. • Validate your assumptions
• Test your prototypes, apps, and existing sites with real users
• Mine the existing data for genuine insights
• Check for cultural bias
– Racial, religious, and gender discrimination
• Train your algorithm with unbiased data
• Monitor your AI bot regularly
What you can do
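The “check for cultural bias” and “train your algorithm with unbiased data” points above can be sketched as a minimal representation audit. This is an illustrative example, not part of the original talk; the function name, record format, and threshold are all hypothetical:

```python
from collections import Counter

def audit_representation(records, attribute, threshold=0.10):
    """Flag values of a sensitive attribute that fall below a minimum
    share of the training data.

    records   -- list of dicts, one per training example (hypothetical format)
    attribute -- key of a sensitive attribute, e.g. "gender"
    threshold -- minimum share each group should hold (assumed cutoff)
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    # Return only the under-represented groups and their actual share.
    return {value: count / total
            for value, count in counts.items()
            if count / total < threshold}

# Hypothetical training set: one group is barely represented.
data = [{"gender": "female"}] * 2 + [{"gender": "male"}] * 98
print(audit_representation(data, "gender"))  # flags "female" at 0.02
```

A check like this is only a starting point – balanced counts do not guarantee unbiased labels or features – but it makes the “monitor your database” habit concrete and repeatable.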
51. Clive K. Lavery | “Being a Digital Do-gooder” | 27 September 2016
"There is no reason for any individual to have a computer in his home."
Ken Olsen
Founder, Digital Equipment Corp.
1977
But that was then…