NL4XAI Workshop, co-located with INLG2020 (online), 18.12.2020
Bias in AI-systems:
A multi-step approach
Eirini Ntoutsi
Leibniz University Hannover & L3S Research Center
ITN Network NoBIAS: Artificial Intelligence without BIAS
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Eirini Ntoutsi, Bias in AI-systems: A multi-step approach
Successful applications
 Recommendations
 Navigation
 Severe weather alerts
 Automation
Questionable uses/ failures
 Google Flu Trends failure
 Microsoft’s bot Tay taken offline after racist tweets
 IBM’s Watson for Oncology cancelled
 Facial recognition works better for white males
Why might AI projects fail?
 Back to basics: How machines learn
 Machine Learning gives computers the ability to learn without being
explicitly programmed (Arthur Samuel, 1959)
 We don’t codify the solution, we don’t even know it!
 DATA & the learning algorithms are the keys
[Figure: data + learning algorithms produce models]
Watch out for (hidden) assumptions
 Assumptions include stationarity, independent & identically distributed
data, ...
 In this talk, I will focus on the assumption/myth of algorithmic objectivity
 The common misconception that humans are subjective, but data and algorithms are not, and therefore cannot discriminate.
Reality check: Can algorithms discriminate?
 Bloomberg analysts compared Amazon same-day delivery areas with U.S.
Census Bureau data
 They found that in 6 major same-day delivery cities, the service area
excludes predominantly black ZIP codes to varying degrees.
 Shouldn’t this service be based on customer’s spend rather than race?
 Amazon claimed that race was not used in their models.
Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
Reality check (cont’d): Can algorithms discriminate?
 There have been already plenty of cases of algorithmic discrimination
 State-of-the-art vision systems (used e.g. in autonomous driving) recognize white males better than black women (racial and gender bias)
 The AdFisher tool for auditing personalized ads found that Google served significantly fewer ads for high-paying jobs to women than to men (gender bias)
 The COMPAS tool (US), used to predict a defendant’s risk of committing another crime, predicted higher risks of recidivism for black defendants (and lower for white defendants) than their actual risk (racial bias)
Don’t blame (only) the AI
 “Bias is as old as human civilization” and “it is human nature for members
of the dominant majority to be oblivious to the experiences of other
groups”
 Human bias: a prejudice in favour of or against one thing, person, or group
compared with another usually in a way that’s considered to be unfair.
 Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual
orientation …
 Algorithmic bias: the inclination or prejudice of a decision made by an AI
system which is for or against one person or group, especially in a way
considered to be unfair.
Bias, an overloaded term
 Inductive bias ”refers to a set of (explicit or implicit) assumptions made by
a learning algorithm in order to perform induction, that is, to generalize a
finite set of observations (training data) into a general model of the
domain. Without a bias of that kind, induction would not be possible,
since the observations can normally be generalized in many ways.”
(Hüllermeier, Fober & Mernberger, 2013)
 Bias-free learning is futile: A learner that makes no a priori assumptions
regarding the identity of the target concept has no rational basis for
classifying any unseen instances.
 Some biases are positive and helpful, e.g., making healthy eating choices
 We refer here to bias that might cause discrimination and unfair actions
to an individual or group on the basis of protected attributes like race or
gender
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Dealing with bias in data-driven AI systems
 Fairness-aware machine learning: a young, fast evolving, multi-disciplinary
research field
 Bias/fairness/discrimination/… have long been studied in philosophy, social sciences, law, …
 Existing approaches can be divided into three categories
 Understanding bias
 How bias is created in society and enters our sociotechnical systems, how it is manifested in the data used by AI algorithms, and how it can be formalized.
 Mitigating bias
 Approaches that tackle bias in different stages of AI-decision making.
 Accounting for bias
 Approaches that account for bias proactively or retroactively.
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Understanding bias: Sociotechnical causes of bias
 AI-systems rely on data generated by humans (user-generated content, UGC) or collected via systems created by humans.
 As a result, human biases
 enter AI systems
 E.g., bias in word-embeddings (Bolukbasi et al., 2016)
 might be amplified by complex sociotechnical systems such as the Web, and
 new types of biases might be created
Understanding bias: How is bias manifested in data?
 Protected attributes and proxies
 E.g., neighborhoods in U.S. cities are highly correlated with race
 Representativeness of data
 E.g., underrepresentation of women and people of color in IT developer
communities and image datasets
 E.g., overrepresentation of black people in drug-related arrests
 Depends on data modalities
Sources: https://incitrio.com/top-3-lessons-learned-from-the-top-12-marketing-campaigns-ever/ and https://ellengau.medium.com/emily-in-paris-asian-women-i-know-arent-like-mindy-chen-6228e63da333
Typical setup for fairness-aware learning
 D = training dataset drawn from a joint distribution P(F, S, y)
 S = (binary, single) protected attribute
 s (s̄): protected (non-protected) group
 F = set of non-protected attributes
 y = target (binary) class (+ for accepted, − for rejected)
 Goal: learn a mapping f(F, S) = ŷ that
 achieves good predictive performance
 eliminates discrimination
      | F1  | F2  | S  | y
User1 | f11 | f12 | s  | +
User2 | f21 | f22 | s̄  | −
User3 | f31 | f32 | s  | +
…     | …   | …   | …  | …
Usern | fn1 | fn2 | s̄  | +
Understanding bias: How is fairness defined?
 Types of fairness measures: group fairness, individual fairness
 Group fairness: protected and non-protected groups should be treated similarly
 Representative measures: statistical parity, equal opportunity, equalized odds
 Main criticism: when focusing on the group, less qualified members may be chosen
 Individual fairness: similar individuals should be treated similarly
 Representative measures: counterfactual fairness
 Main criticism: it is hard to evaluate the proximity of instances (M. Kim et al., NIPS 2018)
Understanding bias: How is fairness defined?
 A variety of measures based on
 The prediction of the classifiers
 focus on the predicted outcome for different groups
 The training set and the prediction of the classifiers
 not only consider the predicted outcome, but also compare it to the actual
outcome
 The training set and the predicted probabilities of the classifiers
 consider the actual outcome and the predicted probability score
Understanding bias: How is fairness defined?
 Statistical parity: subjects in the protected and non-protected groups should have equal probability of being assigned to the positive class (Dwork et al., 2012):
P(Ŷ = + | S = s) = P(Ŷ = + | S = s̄)
 Equalized odds: there should be no difference in the model’s prediction errors between protected and non-protected groups, for both classes (Hardt et al., NIPS 2016).
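Both group-fairness measures can be computed directly from labels and predictions. A minimal sketch (function and array names are illustrative, not from the talk):

```python
import numpy as np

def statistical_parity_diff(y_pred, protected):
    """P(Y_hat = + | S = s) - P(Y_hat = + | S = s_bar); 0 means parity."""
    return y_pred[protected].mean() - y_pred[~protected].mean()

def equalized_odds_diff(y_true, y_pred, protected):
    """|TPR gap| + |FPR gap| between the two groups; 0 means equalized odds."""
    def rates(mask):
        tpr = y_pred[mask & (y_true == 1)].mean()  # fraction of positives accepted
        fpr = y_pred[mask & (y_true == 0)].mean()  # fraction of negatives accepted
        return tpr, fpr
    tpr_s, fpr_s = rates(protected)
    tpr_n, fpr_n = rates(~protected)
    return abs(tpr_s - tpr_n) + abs(fpr_s - fpr_n)
```

Statistical parity looks only at predictions, while equalized odds also compares them to the actual outcomes, matching the categorization on the next slide.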
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure: the ML pipeline from data through algorithms to models and applications (hiring, banking, healthcare, education, autonomous driving, …), with pre-processing approaches targeting the data, in-processing approaches the algorithms, and post-processing approaches the models]
Mitigating bias: pre-processing approaches
 Intuition: making the data fairer will result in a less unfair model
 Idea: balance the protected and non-protected groups in the dataset
 Design principle: minimal data interventions (to retain data utility for the
learning task)
 Different techniques:
 Instance class modification (massaging), (Kamiran & Calders, 2009),(Luong,
Ruggieri, & Turini, 2011)
 Instance selection (sampling), (Kamiran & Calders, 2010) (Kamiran & Calders,
2012)
 Instance weighting, (Calders, Kamiran, & Pechenizkiy, 2009)
 Synthetic instance generation (Iosifidis & Ntoutsi, 2018)
 …
Mitigating bias: pre-processing approaches: Massaging
 Change the class label of carefully selected instances (Kamiran & Calders, 2009).
 The selection is based on a ranker which ranks the individuals by their probability to
receive the favourable outcome.
 The number of massaged instances depends on the fairness measure (group fairness)
Image credit Vasileios Iosifidis
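Massaging can be sketched in a few lines, assuming the ranker’s scores are already available (in the original approach they come from a classifier trained on the biased data); the function and variable names below are illustrative:

```python
import numpy as np

def massage(y, protected, scores):
    """Flip the labels of the M instances closest to the decision boundary so
    that both groups end up with equal positive rates (statistical parity)."""
    y = y.copy()
    n_s, n_ns = protected.sum(), (~protected).sum()
    disc = y[~protected].mean() - y[protected].mean()
    # number of swaps per group needed to equalize positive rates
    m = int(np.ceil(disc * n_s * n_ns / (n_s + n_ns)))
    # promote the protected rejects ranked highest by the scorer
    cand = np.where(protected & (y == 0))[0]
    y[cand[np.argsort(-scores[cand])][:m]] = 1
    # demote the non-protected accepts ranked lowest by the scorer
    cand = np.where(~protected & (y == 1))[0]
    y[cand[np.argsort(scores[cand])][:m]] = 0
    return y
```

Selecting the borderline instances (highest-scored rejects, lowest-scored accepts) is what keeps the intervention minimal, per the design principle above.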
Mitigating bias: pre-processing approaches: discussion
 Most of the techniques are heuristics and the impact of the interventions
is not well controlled
 Approaches also exist that change the data towards fairness while controlling per-instance distortion and preserving data utility (Calmon et al., 2017).
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure: the ML pipeline from data through algorithms to models and applications (hiring, banking, healthcare, education, autonomous driving, …), with pre-processing approaches targeting the data, in-processing approaches the algorithms, and post-processing approaches the models]
Mitigating bias: in-processing approaches
 Intuition: working directly with the algorithm allows for better control
 Idea: explicitly incorporate the model’s discrimination behavior in the
objective function
 Design principle: “balancing” predictive- and fairness-performance
 Different techniques:
 Regularization (Kamiran, Calders & Pechenizkiy, 2010),(Kamishima, Akaho,
Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012) (Zhang
& Ntoutsi, 2019)
 Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)
Training on latent target labels (Krasanakis, Xioufis, Papadopoulos & Kompatsiaris, 2018)
 In-training altering of data distribution (Iosifidis & Ntoutsi, 2019)
 …
Case: Cumulative fairness adaptive boosting
 We assume a training dataset D drawn from a joint distribution P(F,S,y)
 Let S ∈ {s, s̄} be a binary protected attribute, where s is the protected group.
 F is the set of non-protected attributes.
 We assume a binary class: y ∈ {+, −}, + for accepted and − for rejected, respectively.
      | F1  | F2  | S  | y
User1 | f11 | f12 | s  | +
User2 | f21 | f22 | s̄  | −
User3 | f31 | f32 | s  | +
…     | …   | …   | …  | …
Usern | fn1 | fn2 | s̄  | +
V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting", ACM CIKM 2019.
Fairness problem formulation
 Our fairness measure is Equalized Odds, which measures the difference in the model’s prediction errors between protected and non-protected groups for both classes:
Eq.Odds = |FPR_s − FPR_s̄| + |FNR_s − FNR_s̄|
 Smaller values are better (ideally Eq.Odds = 0)
[Iosifidis and Ntoutsi, CIKM 2019]
Limitations of related work
 Existing works evaluate predictive performance in terms of the overall
classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et
al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]
 In case of class imbalance, ER is misleading
 Most datasets, however, suffer from class imbalance
[Iosifidis and Ntoutsi, CIKM 2019]
AdaFair
 Our base model is AdaBoost (Freund and Schapire, 1995), a sequential
ensemble method that in each round, re-weights the training data to
focus on misclassified instances.
 We tailor AdaBoost to fairness
 We introduce the notion of cumulative fairness that assesses the fairness of
the model up to the current boosting round.
 We directly incorporate fairness in the instance weighting process
(traditionally focusing on classification performance).
 We optimize the number of weak learners in the final ensemble based on the balanced error rate, thus directly considering class imbalance in best-model selection.
[Iosifidis and Ntoutsi, CIKM 2019]
[Figure: boosting rounds 1-3, each adding a weak learner h1, h2, h3, combined into the final strong learner H()]
AdaFair: Cumulative boosting fairness
 Let j: 1…T be the current boosting round, where T is user-defined
 The partial ensemble up to the current round j combines the weak learners h1() … hj()
 The cumulative fairness of the ensemble up to round j is defined based on the parity in the predictions of the weak learners h1() … hj() between protected and non-protected groups
 “Forcing” the model to consider “historical” fairness over all previous rounds, instead of focusing only on the current round hj(), results in better classifier performance and model convergence.
[Iosifidis and Ntoutsi, CIKM 2019]
AdaFair: fairness-aware weighting of instances
 Vanilla AdaBoost already boosts misclassified instances for the next round
 Our weighting explicitly targets fairness by extra boosting discriminated
groups for the next round
 The data distribution at boosting round j+1 is updated as in vanilla AdaBoost, but additionally boosted by a fairness-related cost
 The fairness-related cost ui is assigned to instances xi ∈ D that belong to a group that is discriminated in the current round
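The spirit of the update can be sketched as follows; the multiplicative (1 + u_i) term and all names here are an illustrative simplification, not the paper’s exact formulas:

```python
import numpy as np

def update_weights(w, y, y_pred, alpha, u):
    """One boosting-round re-weighting: the standard AdaBoost factor
    exp(-alpha * y * y_pred) for labels in {-1, +1}, multiplied by
    (1 + u_i), where u_i > 0 only for instances of the discriminated
    group that the current weak learner got wrong."""
    w = w * np.exp(-alpha * y * y_pred) * (1.0 + u)
    return w / w.sum()  # renormalize to a distribution
```

A misclassified instance of the discriminated group (u_i > 0) thus receives more weight in the next round than an equally misclassified instance of the other group.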
[Iosifidis and Ntoutsi, CIKM 2019]
Experiments: Predictive and fairness performance
 Adult census income (ratio 1+ : 3−)
 Bank dataset (ratio 1+ : 8−)
Larger values are better, for Eq.Odds lower values are better
 Our method achieves high balanced accuracy and low discrimination (Eq.Odds) while
maintaining high TPRs and TNRs for both groups.
 The methods of Zafar et al. and Krasanakis et al. eliminate discrimination by rejecting more positive instances (lowering TPRs).
[Iosifidis and Ntoutsi, CIKM 2019]
Cumulative vs non-cumulative fairness
 Cumulative vs non-cumulative fairness impact on model performance
 Cumulative notion of fairness performs better
 The cumulative model (AdaFair) is more stable than its non-cumulative counterpart (whose standard deviation is higher)
[Iosifidis and Ntoutsi, CIKM 2019]
Case: Adaptive Fairness-aware Decision Tree
 From batch fairness solutions to online fairness
solutions
 Our Fairness-Aware Hoeffding Tree (FAHT) extends
the Hoeffding tree (HT) classifier for fairness by
directly considering fairness in the splitting criterion
 HT uses the Hoeffding bound to decide when and how to split
 Such decisions are based on information gain to optimize
predictive performance and do not consider fairness.
W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
Skipped due to time
Case: Adaptive Fairness-aware Decision Tree
 We introduce the fairness gain (FG) of an attribute: the reduction in discrimination achieved by splitting on it
 Disc(D) corresponds to group fairness (statistical parity)
 We introduce the joint criterion, fair information gain (FIG) that evaluates
the suitability of a candidate splitting attribute A in terms of both
predictive performance and fairness.
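The two quantities can be sketched as below; the guard for single-group partitions and the product form of the joint criterion are illustrative assumptions (the paper’s exact combination rule may differ):

```python
import numpy as np

def disc(y, protected):
    """Statistical-parity discrimination of a (sub)dataset."""
    if protected.all() or (~protected).all():
        return 0.0  # only one group present: no measurable disparity
    return y[~protected].mean() - y[protected].mean()

def fairness_gain(y, protected, splits):
    """FG: drop in weighted discrimination after splitting.
    splits: one boolean mask per value of the candidate attribute A."""
    n = len(y)
    after = sum(m.sum() / n * abs(disc(y[m], protected[m])) for m in splits)
    return abs(disc(y, protected)) - after

def fair_information_gain(info_gain, fg):
    """Joint criterion trading off predictive performance and fairness."""
    return info_gain * fg
```

A split that separates the data cleanly but inherits the dataset’s discrimination gets a low FG, and hence a low joint score, which is what makes the criterion more restrictive than plain information gain.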
W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
[Zhang and Ntoutsi, IJCAI 2019]
Skipped due to time
Experiments: Predictive and fairness performance
 FAHT is capable of reducing discrimination to a lower level while maintaining comparable accuracy.
 FAHT results in a shorter tree compared to HT, as its splitting criterion FIG is more restrictive than IG.
[Zhang and Ntoutsi, IJCAI 2019]
Adult dataset
Qualitative results:
• HT selects “capital-gain” as the root
attribute, FAHT selects “Age”
• Capital gain is directly related to the annual salary (class) but probably also mirrors intrinsic discrimination in the data
Skipped due to time
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure: the ML pipeline from data through algorithms to models and applications (hiring, banking, healthcare, education, autonomous driving, …), with pre-processing approaches targeting the data, in-processing approaches the algorithms, and post-processing approaches the models]
Mitigating bias: post-processing approaches
 Intuition: start with predictive performance
 Idea: first optimize the model for predictive performance and then tune
for fairness
 Design principle: minimal interventions (to retain model predictive
performance)
 Different techniques:
 Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders &
Verwer, 2010)
 Correct the class labels (Kamiran et al., 2010)
 Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt,
Price, & Srebro, 2016)
 Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík,
Langford, & Wallach, 2018)
 …
Mitigating bias: post-processing approaches: shift the decision boundary
 An example of decision boundary shift
 In this example, we also consider concept drifts (non-stationary data).
V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
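A minimal sketch of such a boundary shift for statistical parity, keeping the trained model untouched and lowering only the protected group’s threshold; candidate thresholds are taken from the protected group’s own scores, and all names are illustrative:

```python
import numpy as np

def shift_threshold(scores, protected, base=0.5):
    """Return per-group thresholds (protected, non-protected) such that the
    protected group's positive rate matches the non-protected group's."""
    target = (scores[~protected] >= base).mean()
    # try the protected group's score values as thresholds, highest first
    for t in sorted(scores[protected], reverse=True):
        if (scores[protected] >= t).mean() >= target:
            return t, base
    return 0.0, base
```

This is the "minimal intervention" design principle in action: only the decision threshold changes, so the model’s predictive machinery (and its scores) stay intact.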
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Accounting for bias
 Algorithmic accountability refers to the assignment of responsibility for
how an algorithm is created and its impact on society (Kaplan et al, 2019).
 Many facets of accountability for AI-driven algorithms and different
approaches
 Proactive approaches:
 Bias-aware data collection, e.g., for Web data, crowd-sourcing
 Bias-description and modeling, e.g., via ontologies
 ...
 Retroactive approaches:
 Explaining AI decisions in order to understand whether decisions are biased
 What is an explanation? Explanations w.r.t. legal/ethical grounds?
 Using explanations for fairness-aware corrections (inspired by Schramowski et al, 2020)
A visual overview of topics related to fairness
E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab, "Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Wrapping up
Wrapping-up
 In this talk I focused on the myth of algorithmic objectivity and
 the reality of algorithmic bias and discrimination, and how algorithms can pick up biases existing in the input data and further reinforce them
 A large body of research already exists
focusing mainly on fully-supervised learning, single (binary) protected attributes, and binary classes
 NoBIAS goal: Develop novel technical methods for AI-based decision making
without bias by taking into account ethical and legal considerations in the design
of technical solutions
 It is important to target bias in each step and collectively, in the whole (machine)
learning pipeline
Many open challenges and opportunities
 Fairness beyond fully-supervised learning
 Unsupervised, semi-supervised, RL, …
 Fairness beyond single protected attributes → multi-fairness
 E.g., existing legal studies define multi-fairness as compound, intersectional
and overlapping (Makkonen, 2002).
 Interaction of bias/fairness with other learning challenges
 High-dimensionality, class-imbalance, non-stationarity, class-overlaps, rare classes, …
 Discriminatory effects over time
 Bias for certain data modalities (e.g., text, images) and multi-modal data
 Benchmarks for proper evaluation
 Different application domains, e.g., education, healthcare, …
Thank you for your attention!
Questions?
https://nobias-project.eu/
@NoBIAS_ITN
https://www.bias-project.org/
Contributed material:
Vasileios, Arjun, Tongxin
Feel free to contact me:
ntoutsi@l3s.de
@entoutsi
 
Measures and mismeasures of algorithmic fairness
Measures and mismeasures of algorithmic fairnessMeasures and mismeasures of algorithmic fairness
Measures and mismeasures of algorithmic fairness
 
Eliminating Machine Bias - Mary Ann Brennan - ML4ALL 2018
Eliminating Machine Bias - Mary Ann Brennan - ML4ALL 2018Eliminating Machine Bias - Mary Ann Brennan - ML4ALL 2018
Eliminating Machine Bias - Mary Ann Brennan - ML4ALL 2018
 
Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?Algorithmic Bias : What is it? Why should we care? What can we do about it?
Algorithmic Bias : What is it? Why should we care? What can we do about it?
 
Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma...
Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma...Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma...
Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma...
 
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD...
 
Testing slides
Testing slidesTesting slides
Testing slides
 
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad Choorappara
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad ChoorapparaEthical Dimensions of Artificial Intelligence (AI) by Rinshad Choorappara
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad Choorappara
 
The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology.
The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology.The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology.
The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology.
 
Towards Fairer Datasets: Filtering and Balancing the Distribution of the Peop...
Towards Fairer Datasets: Filtering and Balancing the Distribution of the Peop...Towards Fairer Datasets: Filtering and Balancing the Distribution of the Peop...
Towards Fairer Datasets: Filtering and Balancing the Distribution of the Peop...
 
TRILcon'17 confernece workshop presentation on UnBias stakeholder engagement
TRILcon'17 confernece workshop presentation on UnBias stakeholder engagementTRILcon'17 confernece workshop presentation on UnBias stakeholder engagement
TRILcon'17 confernece workshop presentation on UnBias stakeholder engagement
 
Ethics for Conversational AI
Ethics for Conversational AIEthics for Conversational AI
Ethics for Conversational AI
 
Artificial Intelligence (AI) and Bias.pptx
Artificial Intelligence (AI) and Bias.pptxArtificial Intelligence (AI) and Bias.pptx
Artificial Intelligence (AI) and Bias.pptx
 
Fairness and Ethics in A
Fairness and Ethics in AFairness and Ethics in A
Fairness and Ethics in A
 
e-SIDES presentation at Leiden University 21/09/2017
e-SIDES presentation at Leiden University 21/09/2017e-SIDES presentation at Leiden University 21/09/2017
e-SIDES presentation at Leiden University 21/09/2017
 
[DSC Adria 23] MARIJANA ŠAROLIĆ ROBIĆ Everyone is invited to AI enhanced pres...
[DSC Adria 23] MARIJANA ŠAROLIĆ ROBIĆ Everyone is invited to AI enhanced pres...[DSC Adria 23] MARIJANA ŠAROLIĆ ROBIĆ Everyone is invited to AI enhanced pres...
[DSC Adria 23] MARIJANA ŠAROLIĆ ROBIĆ Everyone is invited to AI enhanced pres...
 
Data Con LA 2022 - AI Ethics
Data Con LA 2022 - AI EthicsData Con LA 2022 - AI Ethics
Data Con LA 2022 - AI Ethics
 
Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)Transparency in ML and AI (humble views from a concerned academic)
Transparency in ML and AI (humble views from a concerned academic)
 

More from Eirini Ntoutsi

Sentiment Analysis of Social Media Content: A multi-tool for listening to you...
Sentiment Analysis of Social Media Content: A multi-tool for listening to you...Sentiment Analysis of Social Media Content: A multi-tool for listening to you...
Sentiment Analysis of Social Media Content: A multi-tool for listening to you...Eirini Ntoutsi
 
A Machine Learning Primer,
A Machine Learning Primer,A Machine Learning Primer,
A Machine Learning Primer,Eirini Ntoutsi
 
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...Eirini Ntoutsi
 
Redundancies in Data and their E ect on the Evaluation of Recommendation Syst...
Redundancies in Data and their Eect on the Evaluation of Recommendation Syst...Redundancies in Data and their Eect on the Evaluation of Recommendation Syst...
Redundancies in Data and their E ect on the Evaluation of Recommendation Syst...Eirini Ntoutsi
 
Density-based Projected Clustering over High Dimensional Data Streams
Density-based Projected Clustering over High Dimensional Data StreamsDensity-based Projected Clustering over High Dimensional Data Streams
Density-based Projected Clustering over High Dimensional Data StreamsEirini Ntoutsi
 
Density Based Subspace Clustering Over Dynamic Data
Density Based Subspace Clustering Over Dynamic DataDensity Based Subspace Clustering Over Dynamic Data
Density Based Subspace Clustering Over Dynamic DataEirini Ntoutsi
 
Summarizing Cluster Evolution in Dynamic Environments
Summarizing Cluster Evolution in Dynamic EnvironmentsSummarizing Cluster Evolution in Dynamic Environments
Summarizing Cluster Evolution in Dynamic EnvironmentsEirini Ntoutsi
 
Cluster formation over huge volatile robotic data
Cluster formation over huge volatile robotic data Cluster formation over huge volatile robotic data
Cluster formation over huge volatile robotic data Eirini Ntoutsi
 
Discovering and Monitoring Product Features and the Opinions on them with OPI...
Discovering and Monitoring Product Features and the Opinions on them with OPI...Discovering and Monitoring Product Features and the Opinions on them with OPI...
Discovering and Monitoring Product Features and the Opinions on them with OPI...Eirini Ntoutsi
 

More from Eirini Ntoutsi (10)

Sentiment Analysis of Social Media Content: A multi-tool for listening to you...
Sentiment Analysis of Social Media Content: A multi-tool for listening to you...Sentiment Analysis of Social Media Content: A multi-tool for listening to you...
Sentiment Analysis of Social Media Content: A multi-tool for listening to you...
 
A Machine Learning Primer,
A Machine Learning Primer,A Machine Learning Primer,
A Machine Learning Primer,
 
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...
(Machine)Learning with limited labels(Machine)Learning with limited labels(Ma...
 
Redundancies in Data and their E ect on the Evaluation of Recommendation Syst...
Redundancies in Data and their Eect on the Evaluation of Recommendation Syst...Redundancies in Data and their Eect on the Evaluation of Recommendation Syst...
Redundancies in Data and their E ect on the Evaluation of Recommendation Syst...
 
Density-based Projected Clustering over High Dimensional Data Streams
Density-based Projected Clustering over High Dimensional Data StreamsDensity-based Projected Clustering over High Dimensional Data Streams
Density-based Projected Clustering over High Dimensional Data Streams
 
Density Based Subspace Clustering Over Dynamic Data
Density Based Subspace Clustering Over Dynamic DataDensity Based Subspace Clustering Over Dynamic Data
Density Based Subspace Clustering Over Dynamic Data
 
Summarizing Cluster Evolution in Dynamic Environments
Summarizing Cluster Evolution in Dynamic EnvironmentsSummarizing Cluster Evolution in Dynamic Environments
Summarizing Cluster Evolution in Dynamic Environments
 
Cluster formation over huge volatile robotic data
Cluster formation over huge volatile robotic data Cluster formation over huge volatile robotic data
Cluster formation over huge volatile robotic data
 
Discovering and Monitoring Product Features and the Opinions on them with OPI...
Discovering and Monitoring Product Features and the Opinions on them with OPI...Discovering and Monitoring Product Features and the Opinions on them with OPI...
Discovering and Monitoring Product Features and the Opinions on them with OPI...
 
NeeMo@OpenCoffeeXX
NeeMo@OpenCoffeeXXNeeMo@OpenCoffeeXX
NeeMo@OpenCoffeeXX
 

Recently uploaded

Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 nightCheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 nightDelhi Call girls
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFxolyaivanovalion
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxolyaivanovalion
 
VidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxVidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxolyaivanovalion
 
Capstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramCapstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramMoniSankarHazra
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz1
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Callshivangimorya083
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% SecurePooja Nehwal
 
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Online
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service OnlineCALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Online
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Onlineanilsa9823
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxolyaivanovalion
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Delhi Call girls
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxolyaivanovalion
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130Suhani Kapoor
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysismanisha194592
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfLars Albertsson
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxfirstjob4
 
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxolyaivanovalion
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...amitlee9823
 

Recently uploaded (20)

Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 nightCheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFx
 
Sampling (random) method and Non random.ppt
Sampling (random) method and Non random.pptSampling (random) method and Non random.ppt
Sampling (random) method and Non random.ppt
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptx
 
VidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxVidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptx
 
Capstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramCapstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics Program
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signals
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Online
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service OnlineCALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Online
CALL ON ➥8923113531 🔝Call Girls Chinhat Lucknow best sexual service Online
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptx
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptx
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysis
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptx
 
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service
(PARI) Call Girls Wanowrie ( 7001035870 ) HI-Fi Pune Escorts Service
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptx
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
 

Bias in AI-systems: A multi-step approach

  • 1. NL4XAI Workshop, co-located with INLG2020 (online), 18.12.2020 Bias in AI-systems: A multi-step approach Eirini Ntoutsi, Leibniz University Hannover & L3S Research Center, ITN Network NoBIAS – Artificial Intelligence without BIAS
  • 2. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up
  • 3. Successful applications  Recommendations  Navigation  Severe weather alerts  Automation
  • 4. Questionable uses / failures  Google Flu Trends failure  Microsoft’s bot Tay taken offline after racist tweets  IBM’s Watson for Oncology cancelled  Facial recognition works better for white males
  • 5. Why might AI projects fail?  Back to basics: how machines learn  “Machine Learning gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959)  We don’t codify the solution; we don’t even know it!  Data & the learning algorithms are the keys (diagram: Data + Algorithms → Models)
  • 6. Watch out for (hidden) assumptions  Assumptions include stationarity, independent & identically distributed data, ...  In this talk, I will focus on the assumption/myth of algorithmic objectivity: the common misconception that humans are subjective, but data and algorithms are not, and therefore cannot discriminate.
  • 7. Reality check: Can algorithms discriminate?  Bloomberg analysts compared Amazon same-day delivery areas with U.S. Census Bureau data  They found that in 6 major same-day delivery cities, the service area excludes predominantly black ZIP codes to varying degrees.  Shouldn’t this service be based on customers’ spending rather than race?  Amazon claimed that race was not used in their models. Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
  • 8. Reality check (cont’d): Can algorithms discriminate?  There have already been plenty of cases of algorithmic discrimination  State-of-the-art vision systems (used e.g. in autonomous driving) recognize white males better than black women (racial and gender bias)  Google’s AdFisher tool for serving personalized ads was found to serve significantly fewer ads for high-paid jobs to women than to men (gender bias)  The COMPAS tool (US) for predicting a defendant’s risk of committing another crime predicted higher risks of recidivism for black defendants (and lower for white defendants) than their actual risk (racial bias)
  • 9. Don’t blame (only) the AI  “Bias is as old as human civilization” and “it is human nature for members of the dominant majority to be oblivious to the experiences of other groups”  Human bias: a prejudice in favour of or against one thing, person, or group compared with another, usually in a way that is considered unfair.  Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual orientation, …  Algorithmic bias: the inclination or prejudice of a decision made by an AI system for or against one person or group, especially in a way considered to be unfair.
  • 10. Bias, an overloaded term  Inductive bias “refers to a set of (explicit or implicit) assumptions made by a learning algorithm in order to perform induction, that is, to generalize a finite set of observations (training data) into a general model of the domain. Without a bias of that kind, induction would not be possible, since the observations can normally be generalized in many ways.” (Hüllermeier, Fober & Mernberger, 2013)  Bias-free learning is futile: a learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances.  Some biases are positive and helpful, e.g., making healthy eating choices  We refer here to bias that might cause discrimination and unfair actions against an individual or group on the basis of protected attributes like race or gender
  • 11. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up
  • 12. Dealing with bias in data-driven AI systems  Fairness-aware machine learning: a young, fast-evolving, multi-disciplinary research field  Bias/fairness/discrimination/… have long been studied in philosophy, social sciences, law, …  Existing approaches can be divided into three categories  Understanding bias: how bias is created in society and enters our sociotechnical systems, is manifested in the data used by AI algorithms, and can be formalized.  Mitigating bias: approaches that tackle bias in different stages of AI decision making.  Accounting for bias: approaches that account for bias proactively or retroactively.
  • 13. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up
  • 14. Understanding bias: Sociotechnical causes of bias  AI systems rely on data generated by humans (user-generated content) or collected via systems created by humans.  As a result, human biases  enter AI systems, e.g., bias in word embeddings (Bolukbasi et al., 2016)  might be amplified by complex sociotechnical systems such as the Web, and  new types of biases might be created
  • 15. Understanding bias: How is bias manifested in data?  Protected attributes and proxies  E.g., neighborhoods in U.S. cities are highly correlated with race  Representativeness of data  E.g., underrepresentation of women and people of color in IT developer communities and image datasets  E.g., overrepresentation of black people in drug-related arrests  Depends on data modalities (Image sources: https://incitrio.com/top-3-lessons-learned-from-the-top-12-marketing-campaigns-ever/, https://ellengau.medium.com/emily-in-paris-asian-women-i-know-arent-like-mindy-chen-6228e63da333)
  • 16. Typical setup for fairness-aware learning  D = training dataset drawn from a joint distribution P(F, S, y)  S = (binary, single) protected attribute  s (s̄): protected (non-protected) group  F = set of non-protected attributes  y = target (binary) class (+ for accepted, − for rejected)  Goal: learn a mapping f(F, S) = ŷ → y that  achieves good predictive performance and  eliminates discrimination (example table with non-protected attributes F1, F2, protected attribute S and class y per user)
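The setup above can be sketched in a few lines of code; the feature values and group labels below are hypothetical, chosen only to make the notation concrete.

```python
# Sketch of the fairness-aware learning setup: a training set D with
# non-protected features F = (F1, F2), a single binary protected attribute S
# ("s" = protected group, "sbar" = non-protected group), and a binary target y.
# All values are made up for illustration.
D = [
    # (F1,  F2,  S,      y)
    (0.8, 0.3, "s",    "+"),
    (0.4, 0.9, "sbar", "-"),
    (0.6, 0.1, "s",    "+"),
    (0.2, 0.7, "sbar", "+"),
]

def positive_rate(group):
    """Fraction of a group's instances with the favourable outcome '+'."""
    labels = [y for _, _, s, y in D if s == group]
    return sum(1 for y in labels if y == "+") / len(labels)

p_s, p_sbar = positive_rate("s"), positive_rate("sbar")  # base rate per group
```

Comparing the per-group base rates is usually the first diagnostic step before any fairness intervention.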
  • 17. Understanding bias: How is fairness defined?  Types of fairness measures: group fairness, individual fairness  Group fairness: protected and non-protected groups should be treated similarly  Representative measures: statistical parity, equal opportunity, equalized odds  Main criticism: when focusing on the group, less qualified members may be chosen  Individual fairness: similar individuals should be treated similarly  Representative measures: counterfactual fairness  Main criticism: it is hard to evaluate the proximity of instances (M. Kim et al., NIPS 2018)
  • 18. Understanding bias: How is fairness defined?  A variety of measures based on  The prediction of the classifiers: focus on the predicted outcome for different groups  The training set and the prediction of the classifiers: not only consider the predicted outcome, but also compare it to the actual outcome  The training set and the predicted probabilities of the classifiers: consider the actual outcome and the predicted probability score
  • 19. Understanding bias: How is fairness defined?  Statistical parity: subjects in both protected and unprotected groups should have equal probability of being assigned to the positive class (Dwork et al., 2012): P(Ŷ = + | S = s) = P(Ŷ = + | S = s̄)  Equalized odds: there should be no difference in the model’s prediction errors between protected and non-protected groups for both classes (Hardt et al., NIPS 16)
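The two group-fairness measures above can be computed directly from labels and predictions. The helpers below are an illustrative sketch; the label encoding ("+"/"-") and group encoding ("s"/"sbar") are assumptions, not from the talk.

```python
def statistical_parity_diff(y_pred, groups):
    """P(Y_hat = '+' | S = s) - P(Y_hat = '+' | S = s_bar)."""
    def pos_rate(g):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        return sum(1 for p in preds if p == "+") / len(preds)
    return pos_rate("s") - pos_rate("sbar")

def equalized_odds_diff(y_true, y_pred, groups):
    """Sum over both classes of the error-rate gap between the two groups."""
    def err_rate(g, cls):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups)
                 if gr == g and t == cls]
        return sum(1 for t, p in pairs if t != p) / len(pairs)
    return sum(abs(err_rate("s", c) - err_rate("sbar", c)) for c in ("+", "-"))

# Hypothetical labels and predictions for six individuals
y_true = ["+", "+", "-", "-", "+", "-"]
y_pred = ["+", "-", "-", "+", "+", "-"]
groups = ["s", "s", "s", "sbar", "sbar", "sbar"]

spd = statistical_parity_diff(y_pred, groups)
eqo = equalized_odds_diff(y_true, y_pred, groups)
```

A value of 0 for either measure indicates parity; here the protected group receives fewer positive predictions (negative `spd`) and the error rates also differ per class (non-zero `eqo`).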
  • 20. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up
  • 21. Mitigating bias  Goal: tackling bias in different stages of AI decision making (diagram: Data → Algorithms → Models → Applications: hiring, banking, healthcare, education, autonomous driving, …)  Pre-processing approaches (intervene on the data)  In-processing approaches (intervene on the learning algorithm)  Post-processing approaches (intervene on the learned model)
  • 22. Mitigating bias: pre-processing approaches  Intuition: making the data more fair will result in a less unfair model  Idea: balance the protected and non-protected groups in the dataset  Design principle: minimal data interventions (to retain data utility for the learning task)  Different techniques:  Instance class modification (massaging) (Kamiran & Calders, 2009), (Luong, Ruggieri & Turini, 2011)  Instance selection (sampling) (Kamiran & Calders, 2010), (Kamiran & Calders, 2012)  Instance weighting (Calders, Kamiran & Pechenizkiy, 2009)  Synthetic instance generation (Iosifidis & Ntoutsi, 2018)  …
  • 23. Mitigating bias: pre-processing approaches: Massaging  Change the class label of carefully selected instances (Kamiran & Calders, 2009).  The selection is based on a ranker which ranks the individuals by their probability of receiving the favourable outcome.  The number of massaged instances depends on the fairness measure (group fairness) (Image credit: Vasileios Iosifidis)
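A minimal sketch of massaging, assuming the ranker's scores are already available and that `m` label flips per side suffice to reach the chosen group-fairness target; both assumptions simplify the Kamiran & Calders procedure, and all names are illustrative.

```python
def massage(data, scores, m):
    """Flip labels of the m highest-ranked protected negatives ("promotion")
    and the m lowest-ranked non-protected positives ("demotion").
    data: list of (group, label) with group in {"s", "sbar"}, label in {"+", "-"};
    scores: ranker's estimated probability of the favourable outcome per instance.
    """
    promote = sorted(
        (i for i, (g, y) in enumerate(data) if g == "s" and y == "-"),
        key=lambda i: scores[i], reverse=True)[:m]   # most promising protected negatives
    demote = sorted(
        (i for i, (g, y) in enumerate(data) if g == "sbar" and y == "+"),
        key=lambda i: scores[i])[:m]                 # least promising non-protected positives
    out = list(data)
    for i in promote:
        out[i] = ("s", "+")
    for i in demote:
        out[i] = ("sbar", "-")
    return out
```

Flipping near the ranker's decision boundary is what keeps the intervention minimal: the instances changed are those the model is least certain about anyway.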
  • 24. Mitigating bias: pre-processing approaches: discussion  Most of the techniques are heuristics and the impact of the interventions is not well controlled  Approaches also exist that change the data towards fairness while controlling the per-instance distortion and preserving data utility (Calmon et al., 2017).
  • 25. Mitigating bias  Goal: tackling bias in different stages of AI decision making (diagram: Data → Algorithms → Models → Applications: hiring, banking, healthcare, education, autonomous driving, …)  Pre-processing approaches (intervene on the data)  In-processing approaches (intervene on the learning algorithm)  Post-processing approaches (intervene on the learned model)
  • 26. Mitigating bias: in-processing approaches  Intuition: working directly with the algorithm allows for better control  Idea: explicitly incorporate the model’s discrimination behavior in the objective function  Design principle: “balancing” predictive and fairness performance  Different techniques:  Regularization (Kamiran, Calders & Pechenizkiy, 2010), (Kamishima, Akaho, Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012), (Zhang & Ntoutsi, 2019)  Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)  Training on latent target labels (Krasanakis, Xioufis, Papadopoulos & Kompatsiaris, 2018)  In-training altering of the data distribution (Iosifidis & Ntoutsi, 2019)  …
  • 27. Case: Cumulative fairness adaptive boosting  We assume a training dataset D drawn from a joint distribution P(F, S, y)  Let S ∈ {s, s̄} be a binary protected attribute, where s is the protected group.  F is the set of non-protected attributes.  We assume a binary class: y ∈ {+, −}, + for accepted and − for rejected respectively. (example table with non-protected attributes F1, F2, protected attribute S and class y per user) V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting”, ACM CIKM 2019.
  • 28. Fairness problem formulation  Our fairness measure is Equalized Odds which measures the difference in model’s prediction errors between protected and non-protected groups for both classes:  Smaller values are better (ideally Eq.Odds = 0) 28 [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
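The equalized-odds violation can be computed directly from a model's predictions. One common formulation, used here as an illustrative sketch (the group labels 's'/'ns' and the sum-of-gaps aggregation are assumptions), adds the absolute TPR and FPR gaps between the two groups:

```python
def equalized_odds(y_true, y_pred, group):
    """Sum of absolute TPR and FPR differences between the protected
    group 's' and the non-protected group 'ns'; 0 means equalized odds."""
    def rates(g):
        tp = fn = fp = tn = 0
        for t, p, gi in zip(y_true, y_pred, group):
            if gi != g:
                continue
            if t == 1:          # actual positive
                tp += p == 1
                fn += p == 0
            else:               # actual negative
                fp += p == 1
                tn += p == 0
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        return tpr, fpr

    tpr_s, fpr_s = rates('s')
    tpr_n, fpr_n = rates('ns')
    return abs(tpr_s - tpr_n) + abs(fpr_s - fpr_n)
```

A perfect classifier scores 0; a classifier that rejects only the protected group's positives incurs a TPR gap of 1.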
  • 29. Limitations of related work  Existing works evaluate predictive performance in terms of the overall classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]  In case of class imbalance, ER is misleading  Most of the datasets, however, suffer from class imbalance 29 [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 30. AdaFair  Our base model is AdaBoost (Freund and Schapire, 1995), a sequential ensemble method that, in each round, re-weights the training data to focus on misclassified instances.  We tailor AdaBoost to fairness  We introduce the notion of cumulative fairness, which assesses the fairness of the model up to the current boosting round.  We directly incorporate fairness in the instance weighting process (which traditionally focuses on classification performance).  We optimize the number of weak learners in the final ensemble based on the balanced error rate, thus directly considering class imbalance in the best model selection. 30 [Iosifidis and Ntoutsi, CIKM 2019] Round 1: Weak learner h1 Round 2: Weak learner h2 Round 3: Weak learner h3 Final strong learner H() Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 31. AdaFair: Cumulative boosting fairness  Let j ∈ [1, T] be the current boosting round, where T is user-defined  Let H1:j() be the partial ensemble up to the current round j.  The cumulative fairness of the ensemble up to round j is defined based on the parity in the predictions of the weak learners h1() … hj() between protected and non-protected groups  “Forcing” the model to consider “historical” fairness over all previous rounds, instead of just focusing on the current round hj(), results in better classifier performance and model convergence. 31 [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 32. AdaFair: fairness-aware weighting of instances  Vanilla AdaBoost already boosts misclassified instances for the next round  Our weighting explicitly targets fairness by extra boosting of discriminated groups for the next round  The data distribution at boosting round j+1 is updated as follows  The fairness-related cost ui of instances xi ∈ D which belong to a group that is discriminated is defined as follows: 32 [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
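A minimal sketch of this weighting idea (illustrative, not the paper's exact formula; the fairness costs u_i are assumed to have already been computed from the cumulative fairness of the partial ensemble): the usual AdaBoost exponential re-weighting is multiplied by an extra (1 + u_i) factor for discriminated instances.

```python
import math

def update_weights(weights, y_true, y_pred, alpha, fairness_costs):
    """One AdaBoost-style re-weighting step with an extra fairness term:
    misclassified instances are boosted as usual, and instances carrying
    a fairness cost u_i > 0 get an additional (1 + u_i) factor."""
    new_w = [
        w * math.exp(alpha * (1 if t != p else -1)) * (1 + u)
        for w, t, p, u in zip(weights, y_true, y_pred, fairness_costs)
    ]
    total = sum(new_w)
    return [w / total for w in new_w]   # normalise back to a distribution
```

An instance that is both misclassified and discriminated against thus receives the largest weight in the next round.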
  • 33. Experiments: Predictive and fairness performance  Adult census income (ratio 1+:3-)  Bank dataset (ratio 1+:8-) 34 Larger values are better; for Eq.Odds, lower values are better  Our method achieves high balanced accuracy and low discrimination (Eq.Odds) while maintaining high TPRs and TNRs for both groups.  The methods of Zafar et al. and Krasanakis et al. eliminate discrimination by rejecting more positive instances (lowering TPRs). [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 34. Cumulative vs non-cumulative fairness  Cumulative vs non-cumulative fairness impact on model performance  The cumulative notion of fairness performs better  The cumulative model (AdaFair) is also more stable than its non-cumulative counterpart, which exhibits a higher standard deviation 35 [Iosifidis and Ntoutsi, CIKM 2019] Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 35. Case: Adaptive Fairness-aware Decision Tree  From batch fairness solutions to online fairness solutions  Our Fairness-Aware Hoeffding Tree (FAHT) extends the Hoeffding tree (HT) classifier by directly considering fairness in the splitting criterion  HT uses the Hoeffding bound to decide when and how to split  Such decisions are based on information gain to optimize predictive performance and do not consider fairness. 36 W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019. Eirini Ntoutsi Bias in AI-systems: A multi-step approach Skipped due to time
  • 36. Case: Adaptive Fairness-aware Decision Tree  We introduce the fairness gain (FG) of an attribute, where Disc(D) corresponds to group fairness (statistical parity)  We introduce the joint criterion, fair information gain (FIG), that evaluates the suitability of a candidate splitting attribute A in terms of both predictive performance and fairness. 37 W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019. Eirini Ntoutsi Bias in AI-systems: A multi-step approach [Zhang and Ntoutsi, IJCAI 2019] Skipped due to time
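A sketch of the fairness-gain computation (illustrative; the group labels 's'/'ns' and the list-of-pairs data layout are assumptions): Disc(D) is the statistical-parity gap, and FG is the reduction in weighted absolute discrimination achieved by a candidate split.

```python
def statistical_parity(data):
    """Disc(D): difference in positive rates between the non-protected
    ('ns') and protected ('s') groups. data: list of (group, label) pairs."""
    pos = {'s': 0, 'ns': 0}
    tot = {'s': 0, 'ns': 0}
    for g, y in data:
        tot[g] += 1
        pos[g] += (y == 1)
    rate = lambda g: pos[g] / tot[g] if tot[g] else 0.0
    return rate('ns') - rate('s')

def fairness_gain(data, splits):
    """FG: drop in size-weighted absolute discrimination after a split.
    splits: the partitions of `data` induced by a candidate attribute."""
    n = len(data)
    after = sum(len(part) / n * abs(statistical_parity(part)) for part in splits)
    return abs(statistical_parity(data)) - after
```

A split whose partitions contain no parity gap recovers the full discrimination of the parent node as fairness gain.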
  • 37. Experiments: Predictive and fairness performance  FAHT is capable of diminishing discrimination to a lower level while maintaining fairly comparable accuracy.  FAHT results in a shorter tree compared to HT, as its splitting criterion FIG is more restrictive than IG. 38Eirini Ntoutsi Bias in AI-systems: A multi-step approach [Zhang and Ntoutsi, IJCAI 2019] Adult dataset Qualitative results: • HT selects “capital-gain” as the root attribute, FAHT selects “Age” • Capital gain is directly related to the annual salary (class) but probably also mirrors intrinsic discrimination in the data Skipped due to time
  • 38. Mitigating bias  Goal: tackling bias in different stages of AI-decision making 39Eirini Ntoutsi Bias in AI-systems: A multi-step approach Algorithms Models Models Data Applications Hiring Banking Healthcare Education Autonomous driving … Pre-processing approaches In-processing approaches Post-processing approaches
  • 39. Mitigating bias: post-processing approaches  Intuition: start with predictive performance  Idea: first optimize the model for predictive performance and then tune for fairness  Design principle: minimal interventions (to retain model predictive performance)  Different techniques:  Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders & Verwer, 2010)  Correct the class labels (Kamiran et al., 2010)  Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt, Price, & Srebro, 2016)  Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík, Langford, & Wallach, 2018)  … 40Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 40. Mitigating bias: post-processing approaches: shift the decision boundary  An example of a decision boundary shift  In this example, we also consider concept drift (non-stationary data). 41Eirini Ntoutsi Bias in AI-systems: A multi-step approach V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
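The boundary-shift idea can be sketched as per-group threshold adjustment on the trained model's scores (an illustrative simplification of such interventions, not the exact DEXA 2019 procedure; the group labels, step size, and parity target are assumptions):

```python
def shift_threshold(scores, groups, base_thr=0.5, target_ratio=1.0, step=0.01):
    """Post-processing sketch: keep the non-protected ('ns') threshold fixed
    and lower the protected ('s') group's threshold until the protected
    positive rate reaches target_ratio times the non-protected one."""
    def pos_rate(sel, thr):
        member_scores = [sc for sc, g in zip(scores, groups) if g == sel]
        return sum(sc >= thr for sc in member_scores) / len(member_scores)

    thr_s = base_thr
    while thr_s > 0 and pos_rate('s', thr_s) < target_ratio * pos_rate('ns', base_thr):
        thr_s -= step          # shift the boundary for the protected group only
    return thr_s
```

The model itself is untouched, illustrating the "minimal intervention" design principle of the post-processing family.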
  • 41. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up 42Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 42. Accounting for bias  Algorithmic accountability refers to the assignment of responsibility for how an algorithm is created and its impact on society (Kaplan et al., 2019).  Many facets of accountability for AI-driven algorithms and different approaches  Proactive approaches:  Bias-aware data collection, e.g., for Web data, crowd-sourcing  Bias description and modeling, e.g., via ontologies  ...  Retroactive approaches:  Explaining AI decisions in order to understand whether decisions are biased  What is an explanation? Explanations w.r.t. legal/ethical grounds?  Using explanations for fairness-aware corrections (inspired by Schramowski et al., 2020) 43Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 43. A visual overview of topics related to fairness 44Eirini Ntoutsi Bias in AI-systems: A multi-step approach E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab"Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
  • 44. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Wrapping up 45Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 45. Wrapping-up  In this talk I focused on the myth of algorithmic objectivity and  the reality of algorithmic bias and discrimination, and how algorithms can pick up biases existing in the input data and further reinforce them  A large body of research already exists  focusing mainly on fully-supervised learning, single protected (binary) attributes, and binary classes  NoBIAS goal: Develop novel technical methods for AI-based decision making without bias by taking into account ethical and legal considerations in the design of technical solutions  It is important to target bias in each step and collectively, in the whole (machine) learning pipeline 46Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 46. Many open challenges and opportunities  Fairness beyond fully-supervised learning  Unsupervised, semi-supervised, RL, …  Fairness beyond single protected attributes  multi-fairness  E.g., existing legal studies define multi-fairness as compound, intersectional and overlapping (Makkonen, 2002).  Interaction of bias/fairness with other learning challenges  High dimensionality, class imbalance, non-stationarity, class overlaps, rare classes, …  Discriminatory effects over time  Bias for certain data modalities (e.g., text, images) and multi-modal data  Benchmarks for proper evaluation  Different application domains, e.g., education, healthcare, … 47Eirini Ntoutsi Bias in AI-systems: A multi-step approach
  • 47. Thank you for your attention! Questions? 48Eirini Ntoutsi Bias in AI-systems: A multi-step approach https://nobias-project.eu/ @NoBIAS_ITN https://www.bias-project.org/ Contributed material: Vasileios, Arjun, Tongxin Feel free to contact me: ntoutsi@l3s.de @entoutsi