Detecting Algorithmic
Discrimination
15th Dutch-Belgian Information Retrieval Workshop, November 2016
Carlos Castillo @chatox
Based on joint KDD 2016 tutorial
with Sara Hajian and Francesco Bonchi
Part I
Part I Algorithmic Discrimination Concepts
Part II Algorithmic Discrimination Discovery
Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
A dangerous reasoning
To discriminate is to treat someone differently
(Unfair) discrimination is based on group membership, not individual merit
People's decisions include objective and subjective elements
Hence, they can discriminate
Algorithmic inputs include only objective elements
Hence, they cannot discriminate?
Judiciary use of COMPAS scores
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions):
137-question questionnaire and predictive model for "risk of recidivism"
Prediction accuracy of recidivism for blacks and whites is about 60%, but ...
● Blacks who did not reoffend
were classified as high risk twice as often as whites who did not reoffend
● Whites who did reoffend
were classified as low risk twice as often as blacks who did reoffend
ProPublica, May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
On the web: race and gender stereotypes reinforced
● Results for "CEO" in Google Images: 11% female, US 27% female CEOs
○ Also in Google Images, "doctors" are mostly male, "nurses" are mostly female
● Google search results for professional vs. unprofessional hairstyles for work
[Image results for "professional hair for work" vs. "unprofessional hair for work"]
M. Kay, C. Matuszek, S. Munson (2015): Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. CHI'15.
The dark side of Google Ads
AdFisher: tool to automate the creation of behavioral
and demographic profiles.
Used to demonstrate that setting gender = female
results in fewer ads for high-paying jobs.
A. Datta, M. C. Tschantz, and A. Datta (2015). Automated experiments on ad privacy settings.
Proceedings on Privacy Enhancing Technologies, 2015(1):92–112.
Self-perpetuating algorithmic biases
Ranking algorithm for finding candidates puts Maria at the bottom of the list
Hence, Maria gets fewer job offers
Hence, Maria gets less job experience
Hence, Maria becomes less employable
The same spiral happens in credit scoring, and in
the stop-and-frisk of minorities, which increases incarceration rates
To make things worse ...
Algorithms are "black boxes" protected by
Industrial secrecy
Legal protections
Intentional obfuscation
Discrimination becomes invisible
Mitigation becomes impossible
F. Pasquale (2015): The Black Box Society. Harvard University Press.
Weapons of Math Destruction (WMDs)
"[We] treat mathematical models as a neutral and
inevitable force, like the weather or the tides, we
abdicate our responsibility" - Cathy O'Neil
Three main ingredients of a "WMD":
● Opacity
● Scale
● Damage
C. O'Neil (2016): Weapons of Math Destruction. Crown Random House.
Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
Two areas of concern: data and algorithms
Data inputs:
● Poorly selected (e.g., observe only car trips, not bicycle trips)
● Incomplete, incorrect, or outdated
● Selected with bias (e.g., smartphone users)
● Perpetuating and promoting historical biases (e.g., hiring people that "fit the culture")
Algorithmic processing:
● Poorly designed matching systems
● Personalization and recommendation services that narrow instead of expand user options
● Decision making systems that assume correlation implies causation
● Algorithms that do not compensate for datasets that disproportionately represent certain populations
● Output models that are hard to understand or explain hinder detection and mitigation of bias
Executive Office of the US President (May 2016): "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights"
Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
Legal concepts
Anti-discrimination legislation typically seeks equal access to employment,
working conditions, education, social protection, goods, and services
Anti-discrimination legislation is very diverse and includes many legal concepts:
Genuine occupational requirement (male actor to portray male character)
Disparate impact and disparate treatment
Burden of proof and situation testing
Group under-representation principle
Discrimination: treatment vs impact
Modern legal frameworks offer various levels of protection against discrimination
based on membership in a protected class: gender, age, ethnicity, nationality, disability,
religious beliefs, and/or sexual orientation
Disparate treatment:
Treatment depends on class membership
Disparate impact:
Outcome depends on class membership
Even if (apparently?) people are treated the same way
Proving discrimination is hard
The burden of proof can be shared in discrimination cases:
the accuser must produce evidence of the consequences,
the defendant must produce evidence that the process was fair
(This is the criterion of the European Court of Justice)
Migration Policy Group and Swedish Centre for Equal Rights (2009): Proving discrimination cases - the role of situational testing.
Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
Principles for quantifying discrimination
Two basic frameworks for measuring discrimination:
Discrimination at the individual level: consistency or individual fairness
Discrimination at the group level: statistical parity
I. Žliobaitė (2015): A survey on measuring indirect discrimination in machine learning. arXiv pre-print.
Individual fairness is about consistency
Consistency score:
C = 1 - ∑_i ∑_{y_j ∈ knn(y_i)} |y_i - y_j|
where knn(y_i) = the k nearest neighbors of y_i
A consistent or individually fair algorithm is one in which similar people experience
similar outcomes … but note that perhaps they are all treated equally badly
Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013. Learning Fair Representations. In Proc. of the 30th Int.
Conf. on Machine Learning. 325–333.
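As a concrete illustration, here is a minimal Python sketch of this consistency score (the function name is ours; following Zemel et al., we normalize by the number of neighbor pairs so that C ∈ [0, 1]):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_score(X, y_pred, k=5):
    """C = 1 - average |y_i - y_j| over each point's k nearest neighbors.

    X: (n, d) numpy feature matrix; y_pred: (n,) numpy array of binary decisions.
    C = 1.0 means every individual receives the same outcome as all
    of their k most similar individuals.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    neighbor_preds = y_pred[idx[:, 1:]]  # (n, k) outcomes of the neighbors
    return 1.0 - np.abs(y_pred[:, None] - neighbor_preds).mean()
```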
Group fairness is about statistical parity
Example:
"Protected group" ~ "people with disabilities"
"Benefit granted" ~ "getting a scholarship"
Intuitively, if a/n1, the fraction of people with disabilities that do not get a scholarship,
is much larger than c/n2, the fraction of people without disabilities that do not get a scholarship,
then people with disabilities could claim they are being discriminated against.
(Here a and c count denied scholarships in the protected and unprotected groups, and n1 and n2 are the respective group sizes.)
D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
Simple discrimination measures
These measures compare the protected group
against the unprotected group:
● Risk difference: RD = p1 - p2
● Risk ratio or relative risk: RR = p1 / p2
● Relative chance: RC = (1 - p1) / (1 - p2)
● Odds ratio: OR = RR / RC
where p1 = a/n1 and p2 = c/n2 are the proportions denied the benefit in each group
D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
In legal practice:
● Risk difference (RD) is mentioned in UK law
● Risk ratio (RR) is mentioned by the EU Court of Justice
● US courts focus on selection rates: (1 - p1) and (1 - p2)
There are many other measures of discrimination!
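To make these measures concrete, here is a small Python sketch (function and variable names are ours) computing them from the 2×2 contingency table of the scholarship example:

```python
def discrimination_measures(a, n1, c, n2):
    """Group discrimination measures from a 2x2 contingency table.

    a: protected-group members denied the benefit;   n1: protected group size
    c: unprotected-group members denied the benefit; n2: unprotected group size
    """
    p1, p2 = a / n1, c / n2            # denial rates per group
    return {
        "RD": p1 - p2,                 # risk difference
        "RR": p1 / p2,                 # risk ratio (relative risk)
        "RC": (1 - p1) / (1 - p2),     # relative chance (ratio of selection rates)
        "OR": (p1 / p2) / ((1 - p1) / (1 - p2)),  # odds ratio = RR / RC
    }

# E.g., 30 of 100 protected applicants denied vs. 10 of 100 unprotected:
print(discrimination_measures(a=30, n1=100, c=10, n2=100))
# RD = 0.2, RR = 3.0, RC ≈ 0.78, OR ≈ 3.86
```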
Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
A connection between privacy and discrimination
Determining whether people with attribute X were discriminated against is like inferring attribute X
from a database in which:
the attribute X was removed
a new attribute (the decision), which is based on X, was added
This is similar to trying to reconstruct a column from a privacy-scrubbed dataset
Part II
Part I Algorithmic Discrimination Concepts
Part II Algorithmic Discrimination Discovery
Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings
The discrimination discovery task at a glance
Given a large database of historical decision records,
find discriminatory situations and practices.
S. Ruggieri, D. Pedreschi and F. Turini (2010). DCUBE: Discrimination discovery in databases. In SIGMOD, pp. 1127-1130.
Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings

Data mining approaches, and the type of discrimination each targets:
● Classification rule mining: group discrimination
● k-NN classification: individual discrimination
● Bayesian networks: individual discrimination
● Probabilistic causation: individual/group discrimination
● Privacy attack strategies: group discrimination
● Predictability approach: group discrimination

This talk covers the first two, classification rule mining and k-NN classification;
for the others, see the KDD 2016 tutorial by S. Hajian, F. Bonchi, C. Castillo.
D. Pedreschi, S. Ruggieri and F. Turini (2008). Discrimination-aware data mining. In KDD'08.
D. Pedreschi, S. Ruggieri, and F. Turini (2009). Measuring discrimination in socially-sensitive decision records. In SDM'09.
S. Ruggieri, D. Pedreschi, and F. Turini (2010). Data mining for discrimination discovery. In TKDD 4(2).
Defining potentially discriminated (PD) groups
A subset of attribute values are perceived as potentially discriminatory based on
background knowledge. Potentially discriminated groups are people with those
attribute values.
Examples:
Female gender
Ethnic minority (racism) or minority language
Specific age range (ageism)
Specific sexual orientation (homophobia)
Direct discrimination
Direct discrimination implies rules or procedures
that impose ‘disproportionate burdens’ on minorities
PD rules are classification rules of the form:
A, B → C
where A is a PD group (B is called a "context")
Example:
gender="female", saving_status="no known savings"
→ credit=no
Indirect discrimination
Indirect discrimination implies rules or procedures
that impose ‘disproportionate burdens’ on minorities,
though not explicitly using discriminatory attributes
Potentially non-discriminatory (PND) rules may
unveil discrimination, and are of the form:
D, B → C where D is a PND group
Example:
neighborhood="10451", city="NYC"
→ credit=no
Indirect discrimination example
Suppose we know that with high confidence:
(a) neighborhood=10451, city=NYC → benefit=deny
But we also know that with high confidence:
(b) neighborhood=10451, city=NYC → race=black
Hence:
(c) race=black, neighborhood=10451, city=NYC → benefit=deny
Rule (b) is background knowledge that allows us to infer (c), which shows that
rule (a) is indirectly discriminating against blacks
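As a rough sketch of how this inference can be checked on decision records, here is a toy pandas example (the dataset and the confidence helper are hypothetical):

```python
import pandas as pd

def confidence(df, antecedent, consequent):
    """conf(A -> C): fraction of records matching A that also match C."""
    matching = df[(df[list(antecedent)] == pd.Series(antecedent)).all(axis=1)]
    if len(matching) == 0:
        return 0.0
    return (matching[list(consequent)] == pd.Series(consequent)).all(axis=1).mean()

# Toy decision records: neighborhood 10451 is mostly black and mostly denied
df = pd.DataFrame({
    "neighborhood": ["10451"] * 8 + ["10001"] * 8,
    "city":         ["NYC"] * 16,
    "race":         ["black"] * 7 + ["white"] * 9,
    "benefit":      ["deny"] * 7 + ["grant"] * 2 + ["deny"] + ["grant"] * 6,
})

conf_a = confidence(df, {"neighborhood": "10451", "city": "NYC"}, {"benefit": "deny"})
conf_b = confidence(df, {"neighborhood": "10451", "city": "NYC"}, {"race": "black"})
conf_c = confidence(df, {"race": "black", "neighborhood": "10451", "city": "NYC"},
                    {"benefit": "deny"})
print(conf_a, conf_b, conf_c)  # 0.875 0.875 1.0: rules (a) and (b) unveil (c)
```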
Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings
B. T. Luong, S. Ruggieri, and F. Turini (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention. KDD'11
Situation testing
● Legal approach for creating controlled experiments
● Matched pairs undergo the same situation, e.g., apply for a job
○ Same characteristics apart from the discrimination ground
k-NN as situation testing (algorithm)
For r ∈ P(R), look at its k closest neighbors
● ... in the protected set: define p1 = proportion with the same decision as r
● ... in the unprotected set: define p2 = proportion with the same decision as r
● measure the degree of discrimination of the decision for r: define diff(r) = p1 - p2
(think of it as expressed in percentage points of difference)
Example: knnP(r, k) yields p1 = 0.75; knnU(r, k) yields p2 = 0.25; diff(r) = p1 - p2 = 0.50
k-NN as situation testing (results)
● If decision = deny-benefit and diff(r) ≥ t, then we have found discrimination around r
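A minimal Python sketch of this procedure, assuming numeric feature vectors as numpy arrays and a single record r (names such as situation_testing_diff are ours):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def situation_testing_diff(r, X_prot, y_prot, X_unprot, y_unprot, decision, k=7):
    """diff(r) = p1 - p2 for a record r that received outcome `decision`.

    p1 / p2: proportion of r's k nearest protected / unprotected
    neighbors that received the same decision as r.
    """
    def same_decision_rate(X, y):
        nn = NearestNeighbors(n_neighbors=k).fit(X)
        _, idx = nn.kneighbors(r.reshape(1, -1))
        return np.mean(y[idx[0]] == decision)

    p1 = same_decision_rate(X_prot, y_prot)
    p2 = same_decision_rate(X_unprot, y_unprot)
    return p1 - p2

# Flag discrimination around r when the decision was deny-benefit and
# situation_testing_diff(...) >= t, for some threshold t.
```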
Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings
How to generate synthetic discriminatory rankings?
How to generate a ranking that is "unfair" towards a protected group?
Let P be the protected group, N be the non-protected group
● Rank each group P, N, independently
● While both are non-empty
○ With probability f pick top from protected, (1-f) top from non-protected
● Fill in the bottom positions with the remaining group
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
Synthetic ranking generation example
[Example rankings generated with f = 0.0, f = 0.3, and f = 0.5]
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
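A minimal sketch of this generator (the function name is ours; it takes each group's candidates already ranked by score):

```python
import random

def generate_ranking(protected, non_protected, f):
    """Interleave two already-ranked candidate lists.

    At each step, with probability f take the next protected candidate,
    otherwise the next non-protected one; f = 0.0 pushes the protected
    group to the bottom, f = 0.5 interleaves roughly fairly.
    """
    protected, non_protected = list(protected), list(non_protected)
    ranking = []
    while protected and non_protected:
        if random.random() < f:
            ranking.append(protected.pop(0))
        else:
            ranking.append(non_protected.pop(0))
    return ranking + protected + non_protected  # fill in the remaining group

# e.g., generate_ranking(["P1", "P2", "P3"], ["N1", "N2", "N3"], f=0.3)
```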
Measuring discrimination using the NDCG framework
Use logarithmic discounts per position:
discount(i) = 1 / log2(i)

i    discount(i)
5    0.43
10   0.30
20   0.23
30   0.20

Compute set-based differences at different cut-off positions:
fairness = ∑_{i = 10, 20, ...} diff(P, N, TopResult_1 ... TopResult_i)
where diff(P, N, Results) evaluates to what extent P and N are represented in the results, e.g., as proportions.

See Yang and Stoyanovich 2016 for details.
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
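A rough Python sketch of one measure in this family, where diff compares the proportion of protected candidates in each top-i prefix against their overall proportion (one of several set-based differences considered by Yang and Stoyanovich; names are ours):

```python
import math

def discounted_unfairness(ranking, protected, cutoffs=(10, 20, 30)):
    """Discounted set-based difference over prefixes of a ranking.

    ranking: list of candidate ids, best first.
    protected: set of ids belonging to the protected group.
    Returns 0.0 when the protected group is proportionally
    represented at every cutoff; larger values mean less fair.
    """
    overall = sum(1 for x in ranking if x in protected) / len(ranking)
    total = 0.0
    for i in cutoffs:
        prefix = ranking[:i]
        prop = sum(1 for x in prefix if x in protected) / len(prefix)
        total += abs(prop - overall) / math.log2(i)  # discount(i) = 1/log2(i)
    return total
```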
Fair rankings
Given a ranking policy and a fairness policy (possibly an affirmative action one),
produce a ranking that:
● Ensures group fairness
○ E.g.: sum of discounted utility for each group, ...
● Minimizes individual unfairness
○ E.g.: average distance from color-blind ranking, people ranked below lower scoring people, …
Work in progress!
Work in progress with Meike Zehlike, Sara Hajian, Francesco Bonchi.
Concluding remarks
Concluding remarks
1. Discrimination legislation attempts to create a balance
2. Algorithms can discriminate
3. Methods to avoid discrimination are not "color blind"
4. Group fairness (statistical parity) vs. individual fairness
Additional resources
● Presentations/keynotes/book
○ Sara Hajian, Francesco Bonchi, and Carlos Castillo: Algorithmic Bias Tutorial at KDD 2016
○ Workshop on Fairness, Accountability, and Transparency on the Web at WWW 2017
○ Suresh Venkatasubramanian: Keynote at ICWSM 2016
○ Ricardo Baeza-Yates: Keynote at WebSci 2016
○ Toon Calders: Keynote at EGC 2016
○ Discrimination and Privacy in the Information Society by Custers et al. 2013
● Groups/workshops/communities
○ Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop and
resources
○ Data Transparency Lab - http://dtlconferences.org/

More Related Content

What's hot

Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Adriano Soares Koshiyama
 
Algorithmic Bias - What is it? Why should we care? What can we do about it?
Algorithmic Bias - What is it? Why should we care? What can we do about it? Algorithmic Bias - What is it? Why should we care? What can we do about it?
Algorithmic Bias - What is it? Why should we care? What can we do about it? University of Minnesota, Duluth
 
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...Krishnaram Kenthapadi
 
Ethical Issues in Machine Learning Algorithms. (Part 1)
Ethical Issues in Machine Learning Algorithms. (Part 1)Ethical Issues in Machine Learning Algorithms. (Part 1)
Ethical Issues in Machine Learning Algorithms. (Part 1)Vladimir Kanchev
 
Fairness, Transparency, and Privacy in AI @LinkedIn
Fairness, Transparency, and Privacy in AI @LinkedInFairness, Transparency, and Privacy in AI @LinkedIn
Fairness, Transparency, and Privacy in AI @LinkedInC4Media
 
Privacy in AI/ML Systems: Practical Challenges and Lessons Learned
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedPrivacy in AI/ML Systems: Practical Challenges and Lessons Learned
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
 
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...Maryam Farooq
 
Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?University of Minnesota, Duluth
 
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare
Algorithmic Bias:  Challenges and Opportunities for AI in HealthcareAlgorithmic Bias:  Challenges and Opportunities for AI in Healthcare
Algorithmic Bias: Challenges and Opportunities for AI in HealthcareGregory Nelson
 
How can algorithms be biased?
How can algorithms be biased?How can algorithms be biased?
How can algorithms be biased?Software Guru
 
Fairness in Machine Learning
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine LearningDelip Rao
 
Fairness and Privacy in AI/ML Systems
Fairness and Privacy in AI/ML SystemsFairness and Privacy in AI/ML Systems
Fairness and Privacy in AI/ML SystemsKrishnaram Kenthapadi
 
Avoiding Machine Learning Pitfalls 2-10-18
Avoiding Machine Learning Pitfalls 2-10-18Avoiding Machine Learning Pitfalls 2-10-18
Avoiding Machine Learning Pitfalls 2-10-18Dan Elton
 
How do we train AI to be Ethical and Unbiased?
How do we train AI to be Ethical and Unbiased?How do we train AI to be Ethical and Unbiased?
How do we train AI to be Ethical and Unbiased?Mark Borg
 
All Needle, No Haystack: Bring Predictive Analysis to Information Governance
All Needle, No Haystack: Bring Predictive Analysis to Information GovernanceAll Needle, No Haystack: Bring Predictive Analysis to Information Governance
All Needle, No Haystack: Bring Predictive Analysis to Information GovernanceAIIM International
 
Engineering Ethics: Practicing Fairness
Engineering Ethics: Practicing FairnessEngineering Ethics: Practicing Fairness
Engineering Ethics: Practicing FairnessClare Corthell
 
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...Daniel Katz
 
Crj 301 week 4 dq 2 juvenile trials
Crj 301 week 4 dq 2 juvenile trialsCrj 301 week 4 dq 2 juvenile trials
Crj 301 week 4 dq 2 juvenile trialssynchverressbi1973
 
Introduction to the ethics of machine learning
Introduction to the ethics of machine learningIntroduction to the ethics of machine learning
Introduction to the ethics of machine learningDaniel Wilson
 
Slave to the Algorithm 2016
Slave to the Algorithm  2016 Slave to the Algorithm  2016
Slave to the Algorithm 2016 Lilian Edwards
 

What's hot (20)

Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
 
Algorithmic Bias - What is it? Why should we care? What can we do about it?
Algorithmic Bias - What is it? Why should we care? What can we do about it? Algorithmic Bias - What is it? Why should we care? What can we do about it?
Algorithmic Bias - What is it? Why should we care? What can we do about it?
 
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
 
Ethical Issues in Machine Learning Algorithms. (Part 1)
Ethical Issues in Machine Learning Algorithms. (Part 1)Ethical Issues in Machine Learning Algorithms. (Part 1)
Ethical Issues in Machine Learning Algorithms. (Part 1)
 
Fairness, Transparency, and Privacy in AI @LinkedIn
Fairness, Transparency, and Privacy in AI @LinkedInFairness, Transparency, and Privacy in AI @LinkedIn
Fairness, Transparency, and Privacy in AI @LinkedIn
 
Privacy in AI/ML Systems: Practical Challenges and Lessons Learned
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedPrivacy in AI/ML Systems: Practical Challenges and Lessons Learned
Privacy in AI/ML Systems: Practical Challenges and Lessons Learned
 
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
 
Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?
 
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare
Algorithmic Bias:  Challenges and Opportunities for AI in HealthcareAlgorithmic Bias:  Challenges and Opportunities for AI in Healthcare
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare
 
How can algorithms be biased?
How can algorithms be biased?How can algorithms be biased?
How can algorithms be biased?
 
Fairness in Machine Learning
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine Learning
 
Fairness and Privacy in AI/ML Systems
Fairness and Privacy in AI/ML SystemsFairness and Privacy in AI/ML Systems
Fairness and Privacy in AI/ML Systems
 
Avoiding Machine Learning Pitfalls 2-10-18
Avoiding Machine Learning Pitfalls 2-10-18Avoiding Machine Learning Pitfalls 2-10-18
Avoiding Machine Learning Pitfalls 2-10-18
 
How do we train AI to be Ethical and Unbiased?
How do we train AI to be Ethical and Unbiased?How do we train AI to be Ethical and Unbiased?
How do we train AI to be Ethical and Unbiased?
 
All Needle, No Haystack: Bring Predictive Analysis to Information Governance
All Needle, No Haystack: Bring Predictive Analysis to Information GovernanceAll Needle, No Haystack: Bring Predictive Analysis to Information Governance
All Needle, No Haystack: Bring Predictive Analysis to Information Governance
 
Engineering Ethics: Practicing Fairness
Engineering Ethics: Practicing FairnessEngineering Ethics: Practicing Fairness
Engineering Ethics: Practicing Fairness
 
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...
 
Crj 301 week 4 dq 2 juvenile trials
Crj 301 week 4 dq 2 juvenile trialsCrj 301 week 4 dq 2 juvenile trials
Crj 301 week 4 dq 2 juvenile trials
 
Introduction to the ethics of machine learning
Introduction to the ethics of machine learningIntroduction to the ethics of machine learning
Introduction to the ethics of machine learning
 
Slave to the Algorithm 2016
Slave to the Algorithm  2016 Slave to the Algorithm  2016
Slave to the Algorithm 2016
 

Viewers also liked

Viewers also liked (12)

Big Crisis Data for ISPC
Big Crisis Data for ISPCBig Crisis Data for ISPC
Big Crisis Data for ISPC
 
Social Media Mining and Retrieval
Social Media Mining and RetrievalSocial Media Mining and Retrieval
Social Media Mining and Retrieval
 
Bases de Datos - Carlos Castillo
Bases de Datos - Carlos CastilloBases de Datos - Carlos Castillo
Bases de Datos - Carlos Castillo
 
Future Directions of Fairness-Aware Data Mining: Recommendation, Causality, a...
Future Directions of Fairness-Aware Data Mining: Recommendation, Causality, a...Future Directions of Fairness-Aware Data Mining: Recommendation, Causality, a...
Future Directions of Fairness-Aware Data Mining: Recommendation, Causality, a...
 
Observational studies in social media
Observational studies in social mediaObservational studies in social media
Observational studies in social media
 
Databeers: Big Crisis Data
Databeers: Big Crisis DataDatabeers: Big Crisis Data
Databeers: Big Crisis Data
 
Keynote talk: Big Crisis Data, an Open Invitation
Keynote talk: Big Crisis Data, an Open InvitationKeynote talk: Big Crisis Data, an Open Invitation
Keynote talk: Big Crisis Data, an Open Invitation
 
Complex networks - Assortativity
Complex networks -  AssortativityComplex networks -  Assortativity
Complex networks - Assortativity
 
Crisis Computing
Crisis ComputingCrisis Computing
Crisis Computing
 
Graph Evolution Models
Graph Evolution ModelsGraph Evolution Models
Graph Evolution Models
 
K-Means Algorithm
K-Means AlgorithmK-Means Algorithm
K-Means Algorithm
 
Clustering, k means algorithm
Clustering, k means algorithmClustering, k means algorithm
Clustering, k means algorithm
 

Similar to Detecting Algorithmic Bias (keynote at DIR 2016)

Fairness-aware learning: From single models to sequential ensemble learning a...
Fairness-aware learning: From single models to sequential ensemble learning a...Fairness-aware learning: From single models to sequential ensemble learning a...
Fairness-aware learning: From single models to sequential ensemble learning a...Eirini Ntoutsi
 
Mantelero collective privacy in3_def
Mantelero collective privacy in3_defMantelero collective privacy in3_def
Mantelero collective privacy in3_defAlessandro Mantelero
 
A methodology for direct and indirect discrimination prevention in data mining
A methodology for direct and indirect  discrimination prevention in data miningA methodology for direct and indirect  discrimination prevention in data mining
A methodology for direct and indirect discrimination prevention in data miningGitesh More
 
Analyzing Bias in Data - IRE 2019
Analyzing Bias in Data - IRE 2019Analyzing Bias in Data - IRE 2019
Analyzing Bias in Data - IRE 2019Jonathan Stray
 
Frameworks for Algorithmic Bias
Frameworks for Algorithmic BiasFrameworks for Algorithmic Bias
Frameworks for Algorithmic BiasJonathan Stray
 
Protection of Direct and Indirect Discrimination using Prevention Methods
Protection of Direct and Indirect Discrimination using Prevention Methods Protection of Direct and Indirect Discrimination using Prevention Methods
Protection of Direct and Indirect Discrimination using Prevention Methods IOSR Journals
 
Bias in AI-systems: A multi-step approach
Bias in AI-systems: A multi-step approachBias in AI-systems: A multi-step approach
Bias in AI-systems: A multi-step approachEirini Ntoutsi
 
Algorithmic fairness
Algorithmic fairnessAlgorithmic fairness
Algorithmic fairnessAnthonyMelson
 
Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)Paolo Missier
 
Truth, Justice, and Technicity: from Bias to the Politics of Systems
Truth, Justice, and Technicity: from Bias to the Politics of SystemsTruth, Justice, and Technicity: from Bias to the Politics of Systems
Truth, Justice, and Technicity: from Bias to the Politics of SystemsBernhard Rieder
 
Algorithmic Bias In Education
Algorithmic Bias In EducationAlgorithmic Bias In Education
Algorithmic Bias In EducationAudrey Britton
 
06 Network Study Design: Ethical Considerations and Safeguards (2016)
06 Network Study Design: Ethical Considerations and Safeguards (2016)06 Network Study Design: Ethical Considerations and Safeguards (2016)
06 Network Study Design: Ethical Considerations and Safeguards (2016)Duke Network Analysis Center
 
06 Network Study Design: Ethical Considerations and Safeguards
06 Network Study Design: Ethical Considerations and Safeguards06 Network Study Design: Ethical Considerations and Safeguards
06 Network Study Design: Ethical Considerations and Safeguardsdnac
 
Frontiers of Computational Journalism week 6 - Quantitative Fairness
Frontiers of Computational Journalism week 6 - Quantitative FairnessFrontiers of Computational Journalism week 6 - Quantitative Fairness
Frontiers of Computational Journalism week 6 - Quantitative FairnessJonathan Stray
 
C0364012016
C0364012016C0364012016
C0364012016theijes
 
Anti-bribery, digital investigation and privacy
Anti-bribery, digital investigation and privacyAnti-bribery, digital investigation and privacy
Anti-bribery, digital investigation and privacyPECB
 
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)Krishnaram Kenthapadi
 
Ramon van den Akker. Fairness of machine learning models an overview and prac...
Ramon van den Akker. Fairness of machine learning models an overview and prac...Ramon van den Akker. Fairness of machine learning models an overview and prac...
Ramon van den Akker. Fairness of machine learning models an overview and prac...Lviv Startup Club
 
Data ethics and machine learning: discrimination, algorithmic bias, and how t...
Data ethics and machine learning: discrimination, algorithmic bias, and how t...Data ethics and machine learning: discrimination, algorithmic bias, and how t...
Data ethics and machine learning: discrimination, algorithmic bias, and how t...Data Driven Innovation
 

Similar to Detecting Algorithmic Bias (keynote at DIR 2016) (20)

Fairness-aware learning: From single models to sequential ensemble learning a...
Fairness-aware learning: From single models to sequential ensemble learning a...Fairness-aware learning: From single models to sequential ensemble learning a...
Fairness-aware learning: From single models to sequential ensemble learning a...
 
Mantelero collective privacy in3_def
Mantelero collective privacy in3_defMantelero collective privacy in3_def
Mantelero collective privacy in3_def
 
A methodology for direct and indirect discrimination prevention in data mining
A methodology for direct and indirect  discrimination prevention in data miningA methodology for direct and indirect  discrimination prevention in data mining
A methodology for direct and indirect discrimination prevention in data mining
 
Analyzing Bias in Data - IRE 2019
Analyzing Bias in Data - IRE 2019Analyzing Bias in Data - IRE 2019
Analyzing Bias in Data - IRE 2019
 
Frameworks for Algorithmic Bias
Frameworks for Algorithmic BiasFrameworks for Algorithmic Bias
Frameworks for Algorithmic Bias
 
Protection of Direct and Indirect Discrimination using Prevention Methods
Protection of Direct and Indirect Discrimination using Prevention Methods Protection of Direct and Indirect Discrimination using Prevention Methods
Protection of Direct and Indirect Discrimination using Prevention Methods
 
Bias in AI-systems: A multi-step approach
Bias in AI-systems: A multi-step approachBias in AI-systems: A multi-step approach
Bias in AI-systems: A multi-step approach
 
Algorithmic fairness
Algorithmic fairnessAlgorithmic fairness
Algorithmic fairness
 
J046065457
J046065457J046065457
J046065457
 
Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)
 
Truth, Justice, and Technicity: from Bias to the Politics of Systems
Truth, Justice, and Technicity: from Bias to the Politics of SystemsTruth, Justice, and Technicity: from Bias to the Politics of Systems
Truth, Justice, and Technicity: from Bias to the Politics of Systems
 
Algorithmic Bias In Education
Algorithmic Bias In EducationAlgorithmic Bias In Education
Algorithmic Bias In Education
 
06 Network Study Design: Ethical Considerations and Safeguards (2016)
06 Network Study Design: Ethical Considerations and Safeguards (2016)06 Network Study Design: Ethical Considerations and Safeguards (2016)
06 Network Study Design: Ethical Considerations and Safeguards (2016)
 
06 Network Study Design: Ethical Considerations and Safeguards
06 Network Study Design: Ethical Considerations and Safeguards06 Network Study Design: Ethical Considerations and Safeguards
06 Network Study Design: Ethical Considerations and Safeguards
 
Frontiers of Computational Journalism week 6 - Quantitative Fairness
Frontiers of Computational Journalism week 6 - Quantitative FairnessFrontiers of Computational Journalism week 6 - Quantitative Fairness
Frontiers of Computational Journalism week 6 - Quantitative Fairness
 
C0364012016
C0364012016C0364012016
C0364012016
 
Anti-bribery, digital investigation and privacy
Anti-bribery, digital investigation and privacyAnti-bribery, digital investigation and privacy
Anti-bribery, digital investigation and privacy
 
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)
Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)
 
Ramon van den Akker. Fairness of machine learning models an overview and prac...
Ramon van den Akker. Fairness of machine learning models an overview and prac...Ramon van den Akker. Fairness of machine learning models an overview and prac...
Ramon van den Akker. Fairness of machine learning models an overview and prac...
 
Data ethics and machine learning: discrimination, algorithmic bias, and how t...
Data ethics and machine learning: discrimination, algorithmic bias, and how t...Data ethics and machine learning: discrimination, algorithmic bias, and how t...
Data ethics and machine learning: discrimination, algorithmic bias, and how t...
 

More from Carlos Castillo (ChaTo)

Finding High Quality Content in Social Media
Finding High Quality Content in Social MediaFinding High Quality Content in Social Media
Finding High Quality Content in Social MediaCarlos Castillo (ChaTo)
 
Text similarity and the vector space model
Text similarity and the vector space modelText similarity and the vector space model
Text similarity and the vector space modelCarlos Castillo (ChaTo)
 
Characterizing the Life Cycle of Online News Stories Using Social Media React...
Characterizing the Life Cycle of Online News Stories Using Social Media React...Characterizing the Life Cycle of Online News Stories Using Social Media React...
Characterizing the Life Cycle of Online News Stories Using Social Media React...Carlos Castillo (ChaTo)
 
What to Expect When the Unexpected Happens: Social Media Communications Acros...
What to Expect When the Unexpected Happens: Social Media Communications Acros...What to Expect When the Unexpected Happens: Social Media Communications Acros...
What to Expect When the Unexpected Happens: Social Media Communications Acros...Carlos Castillo (ChaTo)
 

More from Carlos Castillo (ChaTo) (19)

Finding High Quality Content in Social Media
Finding High Quality Content in Social MediaFinding High Quality Content in Social Media
Finding High Quality Content in Social Media
 
When no clicks are good news
When no clicks are good newsWhen no clicks are good news
When no clicks are good news
 
Natural experiments
Natural experimentsNatural experiments
Natural experiments
 
Content-based link prediction
Content-based link predictionContent-based link prediction
Content-based link prediction
 
Link prediction
Link predictionLink prediction
Link prediction
 
Recommender Systems
Recommender SystemsRecommender Systems
Recommender Systems
 
Graph Partitioning and Spectral Methods
Graph Partitioning and Spectral MethodsGraph Partitioning and Spectral Methods
Graph Partitioning and Spectral Methods
 
Finding Dense Subgraphs
Finding Dense SubgraphsFinding Dense Subgraphs
Finding Dense Subgraphs
 
Link-Based Ranking
Link-Based RankingLink-Based Ranking
Link-Based Ranking
 
Text Indexing / Inverted Indices
Text Indexing / Inverted IndicesText Indexing / Inverted Indices
Text Indexing / Inverted Indices
 
Indexing
IndexingIndexing
Indexing
 
Text Summarization
Text SummarizationText Summarization
Text Summarization
 
Hierarchical Clustering
Hierarchical ClusteringHierarchical Clustering
Hierarchical Clustering
 
Clustering
ClusteringClustering
Clustering
 
Text similarity and the vector space model
Text similarity and the vector space modelText similarity and the vector space model
Text similarity and the vector space model
 
Intro to Creative Commons (May 2015)
Intro to Creative Commons (May 2015)Intro to Creative Commons (May 2015)
Intro to Creative Commons (May 2015)
 
Characterizing the Life Cycle of Online News Stories Using Social Media React...
Characterizing the Life Cycle of Online News Stories Using Social Media React...Characterizing the Life Cycle of Online News Stories Using Social Media React...
Characterizing the Life Cycle of Online News Stories Using Social Media React...
 
What to Expect When the Unexpected Happens: Social Media Communications Acros...
What to Expect When the Unexpected Happens: Social Media Communications Acros...What to Expect When the Unexpected Happens: Social Media Communications Acros...
What to Expect When the Unexpected Happens: Social Media Communications Acros...
 
Crisis Informatics (November 2013)
Crisis Informatics (November 2013)Crisis Informatics (November 2013)
Crisis Informatics (November 2013)
 

Recently uploaded

Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 

Recently uploaded (20)

Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 

Detecting Algorithmic Bias (keynote at DIR 2016)

  • 1. Detecting Algorithmic Discrimination 15th Dutch-Belgian Information Retrieval Workshop, November 2016 Carlos Castillo @chatox Based on joint KDD 2016 tutorial with Sara Hajian and Francesco Bonchi 1
  • 2. Part I Part I Algorithmic Discrimination Concepts Part II Algorithmic Discrimination Discovery 2
  • 3. Part I: algorithmic discrimination concepts Motivation and examples of algorithmic bias Sources of algorithmic bias Legal definitions and principles of discrimination Measures of discrimination Discrimination and privacy 3
  • 4. A dangerous reasoning To discriminate is to treat someone differently (Unfair) discrimination is based on group membership, not individual merit People's decisions include objective and subjective elements Hence, they can be discriminate Algorithmic inputs include only objective elements Hence, they cannot discriminate? 4
  • 5. Judiciary use of COMPAS scores COMPAS (Correctional Offender Management Profiling for Alternative Sanctions): 137-questions questionnaire and predictive model for "risk of recidivism" Prediction accuracy of recidivism for blacks and whites is about 60%, but ... ● Blacks that did not reoffend were classified as high risk twice as much as whites that did not reoffend ● Whites who did reoffend were classified as low risk twice as much as blacks who did reoffend 5 Pro Publica, May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  • 6. On the web: race and gender stereotypes reinforced ● Results for "CEO" in Google Images: 11% female, US 27% female CEOs ○ Also in Google Images, "doctors" are mostly male, "nurses" are mostly female ● Google search results for professional vs. unprofessional hairstyles for work Image results: "Unprofessional hair for work" Image results: "Professional hair for work" 6M. Kay, C. Matuszek, S. Munson (2015): Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. CHI'15.
  • 7. The dark side of Google Ads AdFisher: tool to automate the creation of behavioral and demographic profiles. Used to demonstrate that setting gender = female results in less ads for high-paying jobs. A. Datta, M. C. Tschantz, and A. Datta (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1):92–112. 7
  • 8. Self-perpetuating algorithmic biases Ranking algorithm for finding candidates puts Maria at the bottom of the list Hence, Maria gets less jobs offers Hence, Maria gets less job experience Hence, Maria becomes less employable The same spiral happens in credit scoring, and in stop-and-frisk of minorities that increases incarceration rates 8
  • 9. To make things worse ... Algorithms are "black boxes" protected by Industrial secrecy Legal protections Intentional obfuscation Discrimination becomes invisible Mitigation becomes impossible 9F. Pasquale (2015): The Black Box Society. Harvard University Press.
  • 10. Weapons of Math Destruction (WMDs) "[We] treat mathematical models as a neutral and inevitable force, like the weather or the tides, we abdicate our responsibility" - Cathy O'Neil Three main ingredients of a "WMD": ● Opacity ● Scale ● Damage 10C. O'Neil (2016): Weapons of Math Destruction. Crown Random House
  • 11. Part I: algorithmic discrimination concepts Motivation and examples of algorithmic bias Sources of algorithmic bias Legal definitions and principles of discrimination Measures of discrimination Discrimination and privacy 11
  • 12. Two areas of concern: data and algorithms Data inputs: ● Poorly selected (e.g., observe only car trips, not bicycle trips) ● Incomplete, incorrect, or outdated ● Selected with bias (e.g., smartphone users) ● Perpetuating and promoting historical biases (e.g., hiring people that "fit the culture") Algorithmic processing: ● Poorly designed matching systems ● Personalization and recommendation services that narrow instead of expand user options ● Decision making systems that assume correlation implies causation ● Algorithms that do not compensate for datasets that disproportionately represent populations ● Output models that are hard to understand or explain hinder detection and mitigation of bias 12Executive Office of the US President (May 2016): "Big Data: A Report on Algorithmic Systems,Opportunity,and Civil Rights"
  • 13. Detecting algorithmic discrimination Motivation and examples of algorithmic bias Sources of algorithmic bias Legal definitions and principles of discrimination Measures of discrimination Discrimination and privacy 13
  • 14. Legal concepts Anti-discrimination legislation typically seeks equal access to employment, working conditions, education, social protection, goods, and services Anti-discrimination legislation is very diverse and includes many legal concepts: Genuine occupational requirement (male actor to portray male character) Disparate impact and disparate treatment Burden of proof and situation testing Group under-representation principle 14
  • 15. Discrimination: treatment vs impact Modern legal frameworks offer various levels of protection for being discriminated by belonging to a particular class of: gender, age, ethnicity, nationality, disability, religious beliefs, and/or sexual orientation Disparate treatment: Treatment depends on class membership Disparate impact: Outcome depends on class membership Even if (apparently?) people are treated the same way 15
  • 16. Proving discrimination is hard The burden of proof can be shared for discrimination cases: the accuser must produce evidence of the consequences, the defendant must produce evidence that the process was fair (This is the criterion of the European Court of Justice) 16Migration Policy Group and Swedish Centre for Equal Rights (2009): Proving discrimination cases - the role of situational testing.
  • 17. Part I: algorithmic discrimination concepts Motivation and examples of algorithmic bias Sources of algorithmic bias Legal definitions and principles of discrimination Measures of discrimination Discrimination and privacy 17
  • 18. Principles for quantifying discrimination Two basic frameworks for measuring discrimination: Discrimination at the individual level: consistency or individual fairness Discrimination at the group level: statistical parity 18 I. Žliobaitė (2015): A survey on measuring indirect discrimination in machine learning. arXiv pre-print.
  • 19. Individual fairness is about consistency Consistency score C = 1 - ∑i ∑yj∊knn(yi) |yi - yj | Where knn(yi ) = k nearest neighbors of yi A consistent or individually fair algorithm is one in which similar people experience similar outcomes … but note that perhaps they are all treated equally badly 19 Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013. Learning Fair Representations. In Proc. of the 30th Int. Conf. on Machine Learning. 325–333.
  • 20. Group fairness is about statistical parity Example: "Protected group" ~ "people with disabilities" "Benefit granted" ~ "getting a scholarship" 20 Intuitively, if a/n1 , the fraction of people with disabilities that does not get a scholarship is much larger than c/n1 , the fraction of people without disabilities that does not get a scholarship, then people with disabilities could claim they are being discriminated. D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
  • 21. Simple discrimination measures These measures compare the protected group against the unprotected group: ● Risk difference = RD = p1 - p2 ● Risk ratio or relative risk = RR = p1 / p2 ● Relative chance = RC = (1-p1 ) / (1-p2 ) ● Odds ratio = RR/RC 21 D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
  • 22. Simple discrimination measures These measures compare the protected group against the unprotected group: ● Risk difference = RD = p1 - p2 ● Risk ratio or relative risk = RR = p1 / p2 ● Relative chance = RC = (1-p1 ) / (1-p2 ) ● Odds ratio = RR/RC 22 D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012. Mentioned in UK law Mentioned by EU Court of Justice US courts focus on selection rates: (1-p1 ) and (1-p2 ) There are many other measures of discrimination!
  • 23. Part I: algorithmic discrimination concepts Motivation and examples of algorithmic bias Sources of algorithmic bias Legal definitions and principles of discrimination Measures of discrimination Discrimination and privacy 23
  • 24. A connection between privacy and discrimination Finding whether people having attribute X were discriminated against is like inferring attribute X from a database in which: attribute X was removed; a new attribute (the decision), which is based on X, was added. This is similar to trying to reconstruct a column from a privacy-scrubbed dataset 24
  • 25. Part II Part I Algorithmic Discrimination Concepts Part II Algorithmic Discrimination Discovery 25
  • 26. Discrimination discovery Definition Data mining approaches Discriminatory rankings 26
  • 27. The discrimination discovery task at a glance Given a large database of historical decision records, find discriminatory situations and practices. 27S. Ruggieri, D. Pedreschi and F. Turini (2010). DCUBE: Discrimination discovery in databases. In SIGMOD, pp. 1127-1130.
  • 28. Discrimination discovery Definition Data mining approaches Discriminatory rankings 28
  • 30. Discrimination discovery Definition Data mining approaches Discriminatory rankings 30 Approaches and the kind of discrimination they target: classification rule mining (group discr.); k-NN classification (individual discr.); Bayesian networks (individual discr.); probabilistic causation (ind./group discr.); privacy attack strategies (group discr.); predictability approach (group discr.) See KDD 2016 tutorial by S. Hajian, F. Bonchi, C. Castillo
  • 31. Discrimination discovery Definition Data mining approaches Discriminatory rankings 31 Classification rule mining k-NN classification D. Pedreschi, S. Ruggieri and F. Turini (2008). Discrimination-aware data mining. In KDD'08. D. Pedreschi, S. Ruggieri, and F. Turini (2009). Measuring discrimination in socially-sensitive decision records. In SDM'09. S. Ruggieri, D. Pedreschi, and F. Turini (2010). Data mining for discrimination discovery. In TKDD 4(2).
  • 32. Defining potentially discriminated (PD) groups A subset of attribute values are perceived as potentially discriminatory based on background knowledge. Potentially discriminated groups are people with those attribute values. Examples: Female gender Ethnic minority (racism) or minority language Specific age range (ageism) Specific sexual orientation (homophobia) 32
  • 33. Direct discrimination Direct discrimination implies rules or procedures that impose ‘disproportionate burdens’ on minorities A PD rule is any classification rule of the form A, B → C, where A is a PD group (B is called a "context") 33 Example: gender="female", saving_status="no known savings" → credit=no
  • 34. Indirect discrimination Indirect discrimination implies rules or procedures that impose ‘disproportionate burdens’ on minorities, though not explicitly using discriminatory attributes Potentially non-discriminatory (PND) rules may unveil discrimination, and are of the form: D, B → C where D is a PND group 34 Example: neighborhood="10451", city="NYC" → credit=no
  • 35. Indirect discrimination example Suppose we know that with high confidence: (a) neighborhood=10451, city=NYC → benefit=deny But we also know that with high confidence: (b) neighborhood=10451, city=NYC → race=black Hence: (c) race=black, neighborhood=10451, city=NYC → benefit=deny Rule (b) is background knowledge that allows us to infer (c), which shows that rule (a) is indirectly discriminating against blacks 35
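A minimal sketch of how the confidences behind rules (a) and (b) can be computed from decision records; the toy DataFrame and the confidence helper are illustrative, not the authors' implementation:

```python
import pandas as pd

# Hypothetical toy data; in practice df is a database of decision records.
df = pd.DataFrame({
    'neighborhood': ['10451', '10451', '10451', '10452'],
    'city':         ['NYC',   'NYC',   'NYC',   'NYC'],
    'race':         ['black', 'black', 'white', 'white'],
    'benefit':      ['deny',  'deny',  'deny',  'grant'],
})

def confidence(df, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent: P(consequent | antecedent)."""
    mask = pd.Series(True, index=df.index)
    for col, val in antecedent.items():
        mask &= (df[col] == val)
    matched = df[mask]
    col, val = consequent
    return (matched[col] == val).mean() if len(matched) else 0.0

# Rule (a): the neighborhood is denied the benefit with high confidence.
print(confidence(df, {'neighborhood': '10451', 'city': 'NYC'}, ('benefit', 'deny')))  # 1.0
# Rule (b): the neighborhood is mostly black with high confidence.
print(confidence(df, {'neighborhood': '10451', 'city': 'NYC'}, ('race', 'black')))    # 0.67
```

When both confidences are high, rule (a) is inferred to be indirectly discriminating against the PD group identified by rule (b).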
  • 36. Discrimination discovery Definition Data mining approaches Discriminatory rankings 36 B. T. Luong, S. Ruggieri, and F. Turini (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention. KDD'11 Classification rule mining k-NN classification
  • 37. ● Legal approach for creating controlled experiments ● Matched pairs undergo the same situation, e.g. apply for a job ○ Same characteristics apart from the discrimination ground 37 Situation testing
  • 38. k-NN as situation testing (algorithm) For r ∈ P(R), look at its k closest neighbors: ● in the protected set knnP(r,k), define p1 = proportion with the same decision as r ● in the unprotected set knnU(r,k), define p2 = proportion with the same decision as r ● measure the degree of discrimination of the decision for r: define diff(r) = p1 − p2 (think of it as expressed in percentage points of difference) 38 Worked example (from the slide's diagram): if p1 = 0.75 and p2 = 0.25, then diff(r) = p1 − p2 = 0.50
  • 39. k-NN as situation testing (results) ● If decision=deny-benefit, and diff(r) ≥ t, then we found discrimination around r 39
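A minimal sketch of this procedure, assuming numpy arrays X_prot/y_prot and X_unprot/y_unprot holding features and 0/1 decisions (1 = deny) for the protected and unprotected groups; the names, k, and the threshold t are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_situation_testing(X_prot, y_prot, X_unprot, y_unprot, k=4, t=0.3):
    """Flag denied protected records r with diff(r) = p1 - p2 >= t."""
    nn_prot = NearestNeighbors(n_neighbors=k + 1).fit(X_prot)  # +1: skip r itself
    nn_unprot = NearestNeighbors(n_neighbors=k).fit(X_unprot)
    flagged = []
    for i, (x, decision) in enumerate(zip(X_prot, y_prot)):
        if decision != 1:          # only inspect denied protected individuals
            continue
        # p1: share of protected neighbors with the same decision as r.
        _, idx_p = nn_prot.kneighbors([x])
        p1 = (y_prot[idx_p[0][1:]] == decision).mean()
        # p2: share of unprotected neighbors with the same decision as r.
        _, idx_u = nn_unprot.kneighbors([x])
        p2 = (y_unprot[idx_u[0]] == decision).mean()
        if p1 - p2 >= t:
            flagged.append(i)      # discrimination found around record i
    return flagged
```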
  • 40. Discrimination discovery Definition Data mining approaches Discriminatory rankings 40
  • 41. How to generate synthetic discriminatory rankings? How to generate a ranking that is "unfair" towards a protected group? Let P be the protected group, N be the non-protected group ● Rank each group P, N, independently ● While both are non-empty ○ With probability f pick top from protected, (1-f) top from non-protected ● Fill-in the bottom positions with remaining group 41Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
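A minimal sketch of this generator, assuming protected and non_protected are lists already ranked by score within each group (best first):

```python
import random

def generate_ranking(protected, non_protected, f):
    """Merge two ranked groups; f is the probability of picking from
    the protected group at each step (f = 0.0 is the most unfair)."""
    p, n = list(protected), list(non_protected)
    ranking = []
    while p and n:
        if random.random() < f:
            ranking.append(p.pop(0))
        else:
            ranking.append(n.pop(0))
    return ranking + p + n  # fill the bottom with whichever group remains

# Example: with f=0.0 all non-protected candidates are ranked first.
print(generate_ranking(['P1', 'P2'], ['N1', 'N2', 'N3'], f=0.0))
```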
  • 42. Synthetic ranking generation example: rankings generated with f = 0.0, f = 0.3, and f = 0.5 (figure omitted) 42 Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
  • 43. Measuring discrimination using NDCG framework Use logarithmic discounts per position: discount(i) = 1/log2(i), e.g., discount(5) = 0.43, discount(10) = 0.30, discount(20) = 0.23, discount(30) = 0.20 Compute set-based differences at cut-off positions: fairness = ∑i=10,20,... discount(i) · diff(P, N, TopResult1 … TopResulti) where diff(P, N, Results) evaluates to what extent P and N are represented in the results, e.g., as a difference of proportions See Yang and Stoyanovich 2016 for details 43 Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
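A minimal sketch in this spirit, where ranking is a list of 0/1 flags with 1 marking members of the protected group P; the function name and cut-off step are illustrative, and the normalization to [0, 1] used by Yang and Stoyanovich is omitted:

```python
import math

def ranked_fairness(ranking, step=10):
    n = len(ranking)
    overall = sum(ranking) / n           # proportion of P in the whole list
    total = 0.0
    for i in range(step, n + 1, step):   # cut-offs i = 10, 20, ...
        prop_i = sum(ranking[:i]) / i    # proportion of P in the top i
        # Discount deeper cut-offs logarithmically, as in NDCG.
        total += abs(prop_i - overall) / math.log2(i)
    return total  # divide by its maximum value to normalize to [0, 1]

# Example: all 5 protected members ranked at the bottom of 20.
print(ranked_fairness([0] * 15 + [1] * 5, step=10))
```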
  • 44. Fair rankings Given a ranking policy and a fairness policy (possibly an affirmative action one), produce a ranking that: ● Ensures group fairness ○ E.g.: sum of discounted utility for each group, ... ● Minimizes individual unfairness ○ E.g.: average distance from the color-blind ranking, people ranked below lower-scoring people, … Work in progress! 44 Work in progress with Meike Zehlike, Sara Hajian, Francesco Bonchi
  • 46. Concluding remarks 1. Discrimination legislation attempts to create a balance 2. Algorithms can discriminate 3. Methods to avoid discrimination are not "color blind" 4. Group fairness (statistical parity) vs. individual fairness 46
  • 47. Additional resources ● Presentations/keynotes/book ○ Sara Hajian, Francesco Bonchi, and Carlos Castillo: Algorithmic Bias Tutorial at KDD 2016 ○ Workshop on Fairness, Accountability, and Transparency on the Web at WWW 2017 ○ Suresh Venkatasubramanian: Keynote at ICWSM 2016 ○ Ricardo Baeza-Yates: Keynote at WebSci 2016 ○ Toon Calders: Keynote at EGC 2016 ○ Discrimination and Privacy in the Information Society by Custers et al. 2013 ● Groups/workshops/communities ○ Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop and resources ○ Data Transparency Lab - http://dtlconferences.org/ 47