Levi Thatcher, PhD, VP of Data Science at Health Catalyst, will share practical AI use cases and distill the lessons into a framework you can use when evaluating AI healthcare projects. Specifically, Levi will answer these questions:
What are great healthcare business cases for AI/ML?
What kind of data do you need?
What tools / talent do you need?
How do you integrate AI/ML into the daily workflow?
AI In Healthcare: 4 Real-world Use Cases for Machine Learning
1. AI In Healthcare: Real-world Machine Learning Use Cases
Levi Thatcher, PhD
Vice President, Data Science, Health Catalyst
2. Poll Question #1
How would you describe your healthcare organization’s adoption of machine learning (ML)?
188 respondents
a) It’s being discussed but not used – 37%
b) It’s being used by specialized groups – 28%
c) It’s being used broadly, but isn’t accelerating outcomes improvements – 3%
d) It’s being used broadly, and is consistently accelerating outcomes improvements – 4%
e) Unsure or not applicable – 28%
3. Learning Objectives
• Identify business use cases for ML in healthcare
• Understand what kind of data is needed
• Identify what tools/talent are required
• Demonstrate how to integrate ML into daily workflow
4. AI vs ML?
• In this talk, we’ll consider them the same
7. What Problem Were We Up Against?
(Figure: a slow-to-fast gauge of the pace of outcomes improvements.)
• The pace we’ve been going: descriptive & diagnostic analytics (slow).
• The pace we need to be going: predictive & prescriptive analytics (fast).
8. Where Did We Turn for Help?
(Figure: the same slow-to-fast gauge, now pointing to fast.)
• Machine learning: predictive & prescriptive analytics that are accurate, scalable, and actionable.
9. Too Much Info for One Person to Process
• Difficult to keep up with literature1
• EMRs contain thousands of past patients
• Clinicians often have ~15 min per patient
1: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC521514/
10. What Is ML? Let’s start with a simple example…
• Think of home prices
Data from here: https://www.kaggle.com/c/house-prices-advanced-regression-techniques
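The idea can be sketched with ordinary least squares, the simplest form of supervised learning: fit a line to past sales, then predict the price of an unseen home. The numbers below are invented for illustration rather than drawn from the Kaggle dataset:

```python
# Toy supervised learning: fit price = slope * sqft + intercept by
# ordinary least squares, then predict for a home we haven't seen.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

sqft = [1000, 1500, 2000, 2500, 3000]
price = [150_000, 210_000, 275_000, 340_000, 400_000]

slope, intercept = fit_line(sqft, price)
predicted = slope * 1800 + intercept  # predict an unseen 1800-sqft home
print(round(slope, 1), round(intercept), round(predicted))
```

Real models (like those in the talk) use many more variables and nonlinear algorithms, but the pattern is the same: learn from labeled historical examples, then score new cases.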
12. Are standard models good for us?
• Examples: LACE, SOFA
• Not based on your population
13. Models Should be Local
• Demographics matter
• Treatments vary widely
• Data collection is highly irregular
14. The Turning Point: healthcare.ai
Health Catalyst needed a predictive and prescriptive analytic pipeline that:
• Delivers accurate, scalable, and actionable insights.
• Can be leveraged across any use case.
• Is accessible to data analysts/architects in addition to data scientists.
healthcare.ai is a community with education and open source technology tools focused on streamlining the healthcare ML workflow and increasing the national adoption of machine learning in healthcare.
16. Essential Elements for Improving Outcomes with ML
• Business problem (use case)
• Subject matter expert(s)
• Analyst
• Tools (data manipulation tool, RStudio, healthcare.ai, and visualization tool)
• Data
17. Four Steps to Incorporate ML into Outcomes Improvements
1. Choose a business problem (use case).
2. Organize a dataset.
3. Develop and deploy a model.
4. Surface the insight and guidance.
*Note: Consider user adoption during each step.
18. What Types of Problems Are Best Solved with ML?
• A worklist that could benefit from stratification or prioritization, when:
1. Staff can’t get through the whole list
2. There are appropriate interventions to be made
3. Interventions are either costly, time-consuming, or not appropriate for everyone
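These criteria boil down to prioritization under a capacity constraint: when staff can only work part of the list, order it by risk so the limited interventions go to the patients most likely to benefit. A minimal sketch (the names, risk scores, and capacity number are all invented):

```python
# Invented worklist: each entry has a model-produced risk score.
patients = [
    {"name": "A", "risk": 0.12},
    {"name": "B", "risk": 0.81},
    {"name": "C", "risk": 0.45},
    {"name": "D", "risk": 0.67},
]

DAILY_CAPACITY = 2  # interventions staff can actually perform today

# Work the list highest-risk first instead of in arrival order.
prioritized = sorted(patients, key=lambda p: p["risk"], reverse=True)
todays_worklist = prioritized[:DAILY_CAPACITY]
print([p["name"] for p in todays_worklist])  # ['B', 'D']
```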
19. Choose a Use Case
Questions we need to answer to clearly define the use case:
• Who will be impacted by this use case?
• Which improvement goal will this use case impact?
• How will this use case impact the improvement goal?
• Whose workflow will this use case be implemented in?
• At what point in the workflow will the use case be implemented?
20. Use Case: Readmissions
Imagine we’re part of an outcomes improvement team focusing on the heart failure population.
• Who will be impacted by this use case? Heart failure patients.
• Which improvement goal will this use case impact? Reduce readmissions.
• How will this use case impact the improvement goal? By identifying those at risk of readmission and intervening accordingly.
• Whose workflow will this use case be implemented in? Nurse navigators’.
• At what point in the workflow will the use case be implemented? Within 24 hours of admission or at discharge.
21. Use Case: Readmissions
Use case: Reduce readmissions for heart failure patients by identifying those at risk of readmission within 24 hours of admission so that nurse navigators can coordinate interventions accordingly.
22. Poll Question #2
What is the number one use case for ML in your organization?
227 respondents
a) Sepsis and/or hospital acquired infections – 7%
b) Reimbursement – 9%
c) Chronic disease (e.g., heart failure or diabetes) – 21%
d) Readmissions/utilization – 25%
e) Unsure or not listed – 38%
23. Let’s Learn the Framework via Examples
• A worklist that could benefit from stratification or prioritization, when:
1. Staff can’t get through the whole list
2. There are appropriate interventions to be made
3. Interventions are either costly, time-consuming, or not appropriate for everyone
24. Use Case: Propensity to Pay – Background
• Uncompensated care is placing a huge burden on health systems
• At Intermountain Healthcare, total uncompensated care in 2016 was $568M (or 13.6% of net patient revenue).1
• From 2014 to 2015 they saw a $63M (or 41%) increase in bad debt.1
• Over the same period net profit dropped $152M (35%), and nearly half of the drop was related to the increase in bad debt.1
• Staff has no insight into who needs outreach, charity care, balance modifications, etc.
1: https://intermountainhealthcare.org/annual-report-2015/intermountain-financial-summary/
25. Use Case: Does P2P Make for a Good ML Use Case?
ML criteria…
• A worklist that could benefit from stratification or prioritization? Yes!
• When:
1. Staff can’t get through the whole list? Yes!
2. There are appropriate interventions to be made? Yes!
3. Interventions are either costly, time-consuming, or not appropriate for everyone? Yes!
26. Use Case: Propensity to Pay – Model Details
• Trained on 500k accounts
• 68 variables considered; 48 in model
• Area Under ROC Curve: 0.86
• Risk scores for 140k patients per month
• Scores are used to feed custom worklists within the EMR
28. Use Case: Propensity to Pay – Output (cont.)
• The propensity-to-pay scores are combined with business logic to create custom worklists for personalized follow-up procedures
• These worklists are fed into newly created work queues within the EMR
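A hedged sketch of what “scores combined with business logic” might look like: route each account to a follow-up queue based on its propensity-to-pay band. The score cutoffs, balance threshold, and queue names below are invented for illustration, not Health Catalyst’s actual rules:

```python
# Invented routing rules: map a propensity-to-pay score (0-1) and an
# outstanding balance to one of several follow-up work queues.

def route(score, balance):
    if score >= 0.8:
        return "standard-billing"        # likely to pay; normal statements
    if score >= 0.4:
        return "payment-plan-outreach"   # may pay with some help
    if balance > 5000:
        return "charity-care-screening"  # unlikely to pay, large balance
    return "balance-modification-review"

print(route(0.9, 200))    # standard-billing
print(route(0.5, 200))    # payment-plan-outreach
print(route(0.1, 9000))   # charity-care-screening
```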
30. No-show Risk – Background
• No-show rates are 10-30% nationwide1
• Leads to:
• Underutilized equipment
• Increased healthcare costs
• Decreased access to care
• Reduced clinic efficiency and provider productivity
• Daily risk guidance helps via either:
• Efficient over-scheduling
• Outreach
1: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4187098/pdf/ACI-05-0836.pdf
31. Does the No-show Issue Make for a Good ML Use Case?
ML criteria…
• A worklist that could benefit from stratification or prioritization? Yes!
• When:
1. Staff can’t get through the whole list? Yes!
2. There are appropriate interventions to be made? Yes!
3. Interventions are either costly, time-consuming, or not appropriate for everyone? Yes!
32. No-show Risk – Model Details
• Trained on 1M+ past appointments
• 30 variables considered; 14 in final model
• Most important: past no-show count, days till appointment, cancellation count
• Area under ROC curve (AUC): 0.82; beats the literature
• Next-best is the Huang model at 0.75 AUC1
• Provides a risk score for every outstanding appointment, refreshed daily
• Used by 10 clinics across a rural health system in New England
1: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4187098/pdf/ACI-05-0836.pdf
33. Ways ML Adds Value Over Rule-based Models
• Better performance
• Let’s think about false-positive rates
(Figure: C-statistic, i.e., area under the ROC curve, on a scale from 0.50 “not discriminative” to 1.00 “perfectly discriminative”: EMR 0.60, LACE 0.71, healthcare.ai 0.80.)
(Figure: false-positive rate among folks that aren’t readmitted, i.e., wasted interventions, on a scale from 1.00 “all false positives” to 0.00 “no false positives”: EMR 0.46, LACE 0.45, healthcare.ai 0.27.)
34. Ways ML Adds Value Over Rule-based Models (cont.)
LACE vs healthcare.ai:
• Risk scores at custom points in the workflow
• More interpretable
35. Develop and Deploy a Model
(Figure: training and deployment pipeline.)
• Training: an algorithm learns from historical data and actual outcomes to produce a model, which is saved during development.
• Deployment: the saved model scores new data (which must be similar to the training/testing data) to produce predictions, which are then tested and monitored.
Model deployment:
• Push predictions to a table.
• Implement telemetry.
• Confirm “in-the-wild” performance.
• Retrain on a regular basis.
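The deployment checklist can be sketched as a nightly scoring job. Everything below (the function, the telemetry fields, and the stand-in model) is invented for illustration; production systems would write to EMR-facing database tables rather than Python lists:

```python
import statistics

def nightly_run(model, new_rows, predictions_table, telemetry_log):
    """Score new data, push predictions to a table, and log telemetry."""
    scores = [model(row) for row in new_rows]        # score new data
    predictions_table.extend(zip(new_rows, scores))  # push predictions to a table
    telemetry_log.append({                           # implement telemetry
        "n_scored": len(scores),
        "mean_score": statistics.mean(scores),
    })
    return scores

# Toy run: a stand-in "model" and two new appointments.
predictions_table, telemetry_log = [], []
toy_model = lambda row: min(1.0, 0.25 * row["prior_no_shows"])
rows = [{"prior_no_shows": 0}, {"prior_no_shows": 3}]
scores = nightly_run(toy_model, rows, predictions_table, telemetry_log)
print(scores)  # [0.0, 0.75]
```

Logging simple telemetry like score counts and distributions each run is what makes it possible to confirm in-the-wild performance and decide when retraining is due.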
36. Surface the Insight and Guidance
(Figure: pipeline from Source System → Subject Area Mart → Data Science Engine (R + healthcare.ai package) → Dashboard / Risk-Prioritized List → Workflow / Risk-Prioritized List.)
• Combine predictions with context and interventions to produce actionable insight.
• Surface the insight in a visualization within the user workflow.
37. Surface the Insight and Guidance
An effective surfaced insight:
• Is part of the workflow.
• Puts risk into context and helps to interpret the score.
• Provides guidance based on risk and needed interventions.
• Provides enough information to take action.
38. Poll Question #3
What is the number one impediment to using ML and associated risk scores in your organization?
200 respondents
a) Lack of data and/or tools – 25%
b) Lack of clinician support – 7%
c) Lack of skilled employees – 29%
d) Lack of executive support – 19%
e) Unsure or not applicable – 19%
39. Lessons Learned
1. Worklists are a great place to start your ML work.
2. Most worklists are appropriate for ML, but not all.
3. Adoption is likely to lag unless insights are surfaced within workflows (rather than sending the user to an additional application).
4. Get frequent feedback from users throughout model development and implementation. You need a champion!