This presentation seeks to advance the thinking on how financial services firms can implement a framework that supports explainable artificial intelligence (AI), thus building trust among consumers, shareholders and other stakeholders, and helping ensure compliance with emerging regulatory and ethical norms.
Trust, Context and Regulation: Achieving More Explainable AI in Financial Services
1. Trust, Context and Regulation:
Achieving more explainable AI in financial services
19 November 2020
EY
Sajedah Karim, Ariane Buescher, Ansgar Koene*, Mario
Schlener, Jason Tuo, Luis Pizarro
UK FINANCE
Walter McCahon*, Jonathan Middleton, Oliver Nelson-Smith
*presenters
2. Growing public interest in algorithmic decision-making, and growing scrutiny
73% of CEOs are already adopting or planning to adopt intelligent automation or machine learning in the near term (two years). (Source: EY 2018 Growth Barometer)

85% of AI projects through 2022 will deliver erroneous outcomes due to bias in data, algorithms or development teams. (Source: Gartner's 2018 CIO Agenda Survey)
3. Introducing our paper – bringing together technical, regulatory and governance considerations
► Growing public and regulatory interest in AI
► Importance of trust → explainability and fairness
► Increasingly complex models
► Challenges to achieving transparent and explainable AI, plus some approaches to managing these
► Overview of some of the most comprehensive guidance on explainability – from the UK Information Commissioner's Office (ICO)
► Ways to apply this guidance in practice – process and governance
► Use cases – bringing technical, regulatory and customer considerations together

Key takeaways:
► Take a holistic approach to trustworthy AI from the beginning of a project
► Understand context, stakeholders and use case
► Set up effective governance to determine priority models
4. Explainability challenges – 4 practical steps
Approaching explanations:
1. Determine what's in scope
2. Good governance and clear accountability
3. Identify the priority models for explanations
4. Determine what type of explanation – Know Your Customer and know your use case

The accuracy-explainability trade-off
5. Different types of explainability – why ensuring trust plays a role
Definition of AI for this presentation: Artificial Intelligence (AI) is an umbrella term for a range of algorithm-based technologies that are designed to mimic human thought to solve complex tasks.

Ensuring trust in AI calls for increased transparency and explainability.

Types of explanations, as identified by the ICO:
► Rationale explanation: the reasons that led to the decision, described in a non-technical way
► Responsibility explanation: who is involved in the development, management and implementation of the AI system, and whom to contact for human review
► Data explanation: what data has been used and how, and how the model has been trained and tested
► Fairness explanation: the steps taken across the design and implementation of an AI system to ensure its decisions are unbiased and fair
► Safety and performance explanation: the design and implementation steps taken to maximise accuracy, reliability, security and robustness
► Impact explanation: the impact that the use of an AI system and its decisions has or may have on an individual
6. How to implement explainability – what it means for financial services
Understanding the models to explain:
► Select priority explanation types considering domain, use case and impact on the individual.
► Consider explanations when collecting and pre-processing data.
► Consider explanation needs during system build – how to extract the necessary information.

Delivering the explanations:
► Translate technical explanations into language, graphics, etc., that are understandable and relevant to the audience.
► Consider how to build and present explanations.
► Prepare customer-facing staff to convey explanations.
7. Case Study: Credit Decisions and Algorithmic Bias

Explanation needs for customers and consumers:
► "Why was I declined?"
► "This doesn't seem fair…"
► "Why were you looking there?"
► "I think you got this wrong"

Technical considerations:
► Balance explanations against the risk of 'gaming the system'
► Integrate all regulatory requirements
► Use the most interpretable model type that can achieve the desired accuracy requirements
► Document data and model selection justifications
► Set clear internal 'fairness criteria'
► Offer layered explanations, starting with:
  ► a basic rationale covering the main factors
  ► a light-touch responsibility explanation
  ► if asked, be ready to explain how fairness is ensured
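The "basic rationale" layer above can be sketched with a toy additive scoring model: rank each feature's contribution against a population baseline and surface the most negative ones as the main decline factors. Everything here (the model, weights, baselines and feature names) is an illustrative assumption, not the paper's method:

```python
# Hypothetical sketch: turning an interpretable credit model's output into a
# "basic rationale" explanation (the main factors behind a decline).
import math

# Illustrative logistic scoring model: approve if P(repay) >= 0.5.
WEIGHTS = {
    "credit_utilisation": -2.0,   # higher utilisation lowers the score
    "missed_payments":    -1.5,
    "years_of_history":    0.8,
    "income_to_debt":      1.2,
}
BIAS = 0.5
POPULATION_MEAN = {               # baseline an applicant is compared against
    "credit_utilisation": 0.35,
    "missed_payments": 0.2,
    "years_of_history": 8.0,
    "income_to_debt": 2.5,
}

def score(applicant):
    z = BIAS + sum(w * applicant[f] for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def rationale(applicant, top_n=3):
    """Rank features by their contribution relative to the population baseline
    and return the most negative ones as plain-language decline factors."""
    contribs = {
        f: w * (applicant[f] - POPULATION_MEAN[f]) for f, w in WEIGHTS.items()
    }
    main_factors = sorted(contribs, key=contribs.get)[:top_n]
    return [f for f in main_factors if contribs[f] < 0]

applicant = {
    "credit_utilisation": 0.9,    # well above baseline -> pushes score down
    "missed_payments": 3,
    "years_of_history": 1.0,
    "income_to_debt": 2.5,        # at baseline -> contributes nothing
}
if score(applicant) < 0.5:
    print("Declined. Main factors:", rationale(applicant))
```

A real deployment would translate the returned feature names into customer-friendly wording, which is the "translate technical explanations" step described earlier.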
8. Case Study: Addressing Algorithmic Bias in ML Validation

Model transparency
The degree to which a human can:
► Understand the decision framework
► Consistently interpret and predict model outcomes
Key ways to improve model transparency:
► Feature importance
► Linkage between input and output
► Surrogate models
► Visualization

Model fairness
Many types of fairness need to be considered, e.g.:
► Individual fairness and group fairness
► Fairness of process and fairness of outcomes
Process:
► Begin with a clear and documented statement of the fairness definition
► Unfairness can be remediated at the pre-training, in-training and post-training stages
► Optimising for one type of fairness may reduce a different type of fairness – be open about your choices
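Two of the techniques above can be sketched on synthetic data: a surrogate model (a simple rule fitted to mimic a black-box scorer, with "fidelity" measuring how often the two agree) and a group-fairness check (the approval-rate gap between two groups, i.e., demographic parity difference). This is a minimal illustration; the black-box rule, the data and the threshold grid are all assumptions:

```python
# Illustrative sketch: (1) a surrogate model fitted to a black-box scorer,
# (2) a demographic parity check. All names and data are made up.
import random

random.seed(0)

def black_box(income, utilisation):
    # Stand-in for an opaque model: a non-linear approve/decline rule.
    return 1 if income * (1.0 - utilisation) ** 2 > 15.0 else 0

# Synthetic applicants: (income in GBPk, utilisation in [0,1], group label).
applicants = [
    (random.uniform(10, 100), random.uniform(0, 1), random.choice("AB"))
    for _ in range(500)
]
decisions = [black_box(inc, use) for inc, use, _ in applicants]

def fit_stump(data, labels):
    """Surrogate: pick the income threshold whose single rule
    'approve if income > t' best reproduces the black-box decisions."""
    best_t, best_fid = None, -1.0
    for t in range(10, 100, 5):
        fid = sum(
            (inc > t) == bool(y) for (inc, _, _), y in zip(data, labels)
        ) / len(labels)
        if fid > best_fid:
            best_t, best_fid = t, fid
    return best_t, best_fid

threshold, fidelity = fit_stump(applicants, decisions)
print(f"Surrogate rule: approve if income > {threshold}k "
      f"(fidelity {fidelity:.0%})")

# Demographic parity difference: gap in approval rates between groups.
def approval_rate(group):
    ys = [y for (_, _, g), y in zip(applicants, decisions) if g == group]
    return sum(ys) / len(ys)

dpd = abs(approval_rate("A") - approval_rate("B"))
print(f"Demographic parity difference: {dpd:.3f}")
```

In practice the surrogate would be a small decision tree or linear model rather than a one-rule stump, and the fairness metric would be chosen to match the documented fairness definition, as the process above requires.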
9. Key takeaways

Regulators:
► Explainability requirements should match regulatory oversight needs (different types for different purposes)
► Meaningful explanation depends on the specific AI application and the sector where it is used
► Aim for common standards – align data ethics and AI guidance across different authorities
► Prioritise applying AI-specific guidance to existing rules and regulations
► Align on definitions and terminology

Businesses:
► Governance of AI matching purpose and context
► Procedures for trade-offs between AI trust attributes, e.g., explainability vs. accuracy
► Effective explanations appropriate to users, internal stakeholders and external stakeholders
► Scoring mechanisms to prioritise models for explanations and to enhance explainability
► Know the limits of AI explainability
► Comprehensive Trust in AI principles needed to create and maintain trust and explainability in the long term
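One way to make the "scoring mechanisms to prioritise models for explanations" takeaway concrete is a weighted score over a model inventory. The criteria, weights and inventory below are illustrative assumptions, not a mechanism proposed in the paper:

```python
# Hypothetical model-prioritisation scoring: rank an inventory of models by
# a weighted score over rated criteria; higher score = explain first.
CRITERIA_WEIGHTS = {
    "customer_impact": 0.5,      # harm if a decision is wrong or unexplained
    "opacity": 0.3,              # how hard the model class is to interpret
    "regulatory_exposure": 0.2,  # supervisory interest in the use case
}

def priority_score(ratings):
    """Weighted sum of 1-5 ratings for a single model."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

inventory = {
    "credit_decisioning":   {"customer_impact": 5, "opacity": 4, "regulatory_exposure": 5},
    "marketing_propensity": {"customer_impact": 2, "opacity": 4, "regulatory_exposure": 1},
    "fraud_detection":      {"customer_impact": 4, "opacity": 5, "regulatory_exposure": 4},
}

ranked = sorted(inventory, key=lambda m: priority_score(inventory[m]),
                reverse=True)
for model in ranked:
    print(f"{model}: {priority_score(inventory[model]):.1f}")
```

The governance value lies less in the arithmetic than in forcing the criteria, weights and ratings to be documented and agreed, consistent with the accountability steps earlier in the deck.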
10. Authors
Sajedah Karim
Partner | Financial Services UK
► Compliance Transformation and RegTech Regulatory Lead
► Leading aspects of EY’s Trusted AI solution
Ansgar Koene
EY AI Regulatory Advisor | Global
► Authored reports on AI governance for UK and EU
government
► Chair of IEEE Standard and Certification working groups on
Algorithmic Bias
Ariane Buescher
Assistant Director | Global IT
► Global AI Risk and Product Lead
► Helping clients to ensure emerging technology is used
correctly and safely
Mario Schlener
Partner | Canada
► Risk Management Leader
► AI Risk/Validation Lead and Co-Lead Product Owner of EY's AI product "SHAZAM Model Performance Platform"
Jason Tuo
Senior Manager | Canada
► Risk Analytics and Solution Leader
Luis Pizarro
Associate Director | Global IT
► Global Client Technology AI Quality Lead
► UK Client Technology AI Centre Lead
UK FINANCE
Walter McCahon
Manager
► Data Policy

Jonathan Middleton
Principal Analyst
► Technology and Digital Policy Delivery and Coordination

Oliver Nelson-Smith
► Digital, Technology and Cyber
11. Full paper available at:
https://www.ukfinance.org.uk/policy-and-guidance/reports-publications/trust-context-and-regulation-achieving-more-explainable-ai-financial-services