Aaron Roth is an Associate Professor of Computer and Information Science at the University of Pennsylvania, affiliated with the Warren Center for Network and Data Sciences, and co-director of the Networked and Social Systems Engineering (NETS) program. Previously, he received his PhD from Carnegie Mellon University and spent a year as a postdoctoral researcher at Microsoft Research New England. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and a Yahoo! ACE award. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory and mechanism design, learning theory, and the intersections of these topics. Together with Cynthia Dwork, he is the author of the book "The Algorithmic Foundations of Differential Privacy."
Abstract Summary:
Differential Privacy and Machine Learning:
In this talk, we will give a friendly introduction to Differential Privacy, a rigorous methodology for analyzing data subject to provable privacy guarantees that has recently been widely deployed in several settings. The talk will focus specifically on the relationship between differential privacy and machine learning, which is surprisingly rich: it includes both the ability to do machine learning subject to differential privacy, and tools arising from differential privacy that can be used to make learning more reliable and robust (even when privacy is not a concern).
2. Protecting Privacy is Important
Class action lawsuit accuses AOL of violating the Electronic Communications Privacy Act, seeks $5,000 in damages per user. AOL's director of research is fired.
3. Protecting Privacy is Important
Class action lawsuit (Doe v. Netflix) accuses Netflix of violating the Video Privacy Protection Act, seeks $2,000 in compensation for each of Netflix's 2,000,000 subscribers. Settled for an undisclosed sum; the 2nd Netflix Challenge is cancelled.
4. Protecting Privacy is Important
The National Human Genome Research Institute (NHGRI) immediately restricted pooled genomic data that had previously been publicly available.
6. So What is Differential Privacy?
• Differential Privacy is about promising people freedom from harm.
"An analysis of a dataset D is differentially private if the data analyst knows almost no more about Alice after the analysis than he would have known had he conducted the same analysis on an identical data set with Alice's data removed."
8. Differential Privacy
U: The data universe.
D ⊂ U: The dataset (one element per person)
Definition: Two datasets D, D′ ⊂ U are neighbors if they differ in the data of a single individual.
9. Differential Privacy
U: The data universe.
D ⊂ U: The dataset (one element per person)
Definition: An algorithm M is ε-differentially private if for all pairs of neighboring datasets D, D′, and for all outputs x:
Pr[M(D) = x] ≤ (1 + ε) · Pr[M(D′) = x]
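To make the definition concrete, here is a minimal sketch (mine, not from the talk) of randomized response, perhaps the simplest ε-differentially private algorithm: each person reports their bit truthfully with probability e^ε/(1 + e^ε), so changing one person's data shifts the probability of any output by at most a factor of e^ε ≈ 1 + ε for small ε.

import numpy as np

def randomized_response(bit, epsilon):
    # Report the true bit w.p. e^eps / (1 + e^eps), else flip it.
    # Pr[report | bit = 1] / Pr[report | bit = 0] <= e^epsilon,
    # so each person's report satisfies the definition above.
    p_truth = np.exp(epsilon) / (1 + np.exp(epsilon))
    return bit if np.random.random() < p_truth else 1 - bit

def estimate_fraction(bits, epsilon):
    # Debias the noisy mean to recover the true fraction of 1s.
    p = np.exp(epsilon) / (1 + np.exp(epsilon))
    noisy = np.mean([randomized_response(b, epsilon) for b in bits])
    return (noisy - (1 - p)) / (2 * p - 1)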
10. Some Useful Properties
Theorem (Post-processing): If M(D) is ε-private, and f is any (randomized) function, then f(M(D)) is ε-private.
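For instance (an illustration of mine, not a slide), rounding or clamping a privately released count is mere post-processing, so it costs no additional privacy:

import numpy as np

# An epsilon-private noisy count (Laplace mechanism, sensitivity 1, eps = 0.1)...
noisy_count = 42 + np.random.laplace(scale=1.0 / 0.1)
# ...stays 0.1-private after any data-independent post-processing:
released = max(0, round(noisy_count))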
11. So…
x = [example output]
Definition: An algorithm M is ε-differentially private if for all pairs of neighboring datasets D, D′, and for all outputs x:
Pr[M(D) = x] ≤ (1 + ε) · Pr[M(D′) = x]
14. Some Useful Properties
Theorem (Composition): If M_1, …, M_k are each ε-private, then:
M(D) ≡ (M_1(D), …, M_k(D))
is kε-private.
15. So…
You can go about designing algorithms as you normally would. Just access the data using differentially private "subroutines", and keep track of your "privacy budget" as a resource. Private algorithm design, like regular algorithm design, can be modular.
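As a sketch of this modular style (my illustration; the class and helper below are hypothetical, not a standard API), one can track the budget explicitly and let basic composition do the accounting:

import numpy as np

class PrivacyBudget:
    # Tracks cumulative epsilon spent, per the basic composition theorem:
    # running k epsilon-private subroutines is (k * epsilon)-private.
    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def spend(self, epsilon):
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon

def private_count(data, predicate, budget, epsilon):
    # A counting query has sensitivity 1, so Lap(1/epsilon) noise suffices.
    budget.spend(epsilon)
    true_count = sum(1 for x in data if predicate(x))
    return true_count + np.random.laplace(scale=1.0 / epsilon)

# Usage: two subroutine calls at eps = 0.05 each spend 0.1 in total.
budget = PrivacyBudget(total_epsilon=0.1)
data = ["eng", "eng", "sales", "eng"]
n_eng = private_count(data, lambda x: x == "eng", budget, epsilon=0.05)
n_sales = private_count(data, lambda x: x == "sales", budget, epsilon=0.05)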
16. Some simple operations: Answering Numeric Queries
Def: A numeric function f has sensitivity c if for all neighboring D, D′:
|f(D) − f(D′)| ≤ c
Write s(f) ≡ c
• e.g. "How many software engineers are in the room?" has sensitivity 1.
• "What fraction of people in the room are software engineers?" has sensitivity 1/n.
17. Some simple operations: Answering Numeric Queries
The Laplace Mechanism:
M_Lap(D, f, ε) = f(D) + Lap(s(f)/ε)
Theorem: M_Lap(·, f, ε) is ε-private.
18. Some simple operations: Answering Numeric Queries
The Laplace Mechanism:
M_Lap(D, f, ε) = f(D) + Lap(s(f)/ε)
Theorem: The expected error is s(f)/ε.
(can answer "what fraction of people in the room are software engineers?" with error 0.2%)
19. Some simple operations: Answering Non-numeric Queries
"What is the modal eye color in the room?"
R = {Blue, Green, Brown, Red}
• If you can define a function that determines how "good" each outcome is for a fixed input:
– E.g. q(D, Red) = "fraction of people in D with red eyes"
20. Some simple operations: Answering Non-numeric Queries
M_Exp(D, R, q, ε): Output r ∈ R w.p. ∝ e^(ε·q(D,r)/2)
Theorem: M_Exp(D, R, q, ε) is (ε·s(q))-private, and outputs r ∈ R such that:
max_{r* ∈ R} q(D, r*) − E[q(D, r)] ≤ (2·s(q)/ε)·ln|R|
(can find a color that has frequency within 0.5% of the modal color in the room)
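A sketch of the exponential mechanism (my code, following the sampling rule above; the eye-color data is made up):

import numpy as np

def exponential_mechanism(data, candidates, q, epsilon):
    # Sample r from candidates w.p. proportional to exp(eps * q(data, r) / 2).
    scores = np.array([q(data, r) for r in candidates])
    # Shifting scores by their max rescales every weight equally,
    # leaving the distribution unchanged but numerically stable.
    weights = np.exp(epsilon * (scores - scores.max()) / 2)
    return np.random.choice(candidates, p=weights / weights.sum())

# Modal eye color, with q(D, r) = fraction of people in D with eye color r.
eyes = ["Brown"] * 60 + ["Blue"] * 30 + ["Green"] * 9 + ["Red"] * 1
modal = exponential_mechanism(eyes, ["Blue", "Green", "Brown", "Red"],
                              lambda D, r: D.count(r) / len(D), epsilon=1.0)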
21. So what can we do with that?
Empirical Risk Minimization*
(*i.e. almost all of supervised learning)
Find θ to minimize:
L(θ) = Σ_{i=1}^n ℓ(θ, x_i, y_i)
22. So what can we do with that?
Empirical Risk Minimization:
Simple Special Case: Linear Regression
Find θ ∈ ℝ^d to minimize:
Σ_{i=1}^n (⟨θ, x_i⟩ − y_i)²
23. So what can we do with that?
• First Attempt: Try the exponential mechanism!
• Define q(θ, D) = −(Xθ − y)ᵀ(Xθ − y)
• Output θ ∈ ℝ^d w.p. ∝ e^(ε·q(θ,D)/2)
– How?
• In this case, reduces to:
Output θ* + N(0, (XᵀX)⁻¹/(2ε))
where θ* = argmin_θ Σ_{i=1}^n (⟨θ, x_i⟩ − y_i)²
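In code, this "sample one Gaussian draw" reduction looks roughly as follows (my sketch, using the covariance stated above; a formal privacy guarantee also needs the data, and hence s(q), to be bounded):

import numpy as np

def private_linear_regression(X, y, epsilon):
    # Sample theta from N(theta*, (X^T X)^{-1} / (2 * epsilon)), the
    # Gaussian that the exponential mechanism induces for
    # q(theta, D) = -(X theta - y)^T (X theta - y).
    XtX = X.T @ X
    theta_star = np.linalg.solve(XtX, X.T @ y)   # ordinary least squares
    cov = np.linalg.inv(XtX) / (2 * epsilon)
    return np.random.multivariate_normal(theta_star, cov)

# Usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=1000)
theta = private_linear_regression(X, y, epsilon=1.0)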
24. So what can we do with that?
Empirical Risk Minimization:
The General Case (e.g. deep learning)
Find θ to minimize:
L(θ) = Σ_{i=1}^n ℓ(θ, x_i, y_i)
25. Stochastic Gradient Descent
Convergence depends on the fact that at each round: E[g_t] = ∇L(θ)
Let θ_1 = 0^d
For t = 1 to T:
  Pick i at random. Let g_t ← ∇ℓ(θ_t, x_i, y_i)
  Let θ_{t+1} ← θ_t − η·g_t
26. Private Stochastic Gradient Descent
Still have: E[g_t] = ∇L(θ)!
(Can still prove convergence theorems, and run the algorithm…)
Privacy guarantees can be computed from:
1) The privacy of the Laplace mechanism,
2) Preservation of privacy under post-processing, and
3) Composition of privacy guarantees.
Let θ_1 = 0^d
For t = 1 to T:
  Pick i at random. Let g_t ← ∇ℓ(θ_t, x_i, y_i) + Lap(s(∇ℓ)/ε)
  Let θ_{t+1} ← θ_t − η·g_t
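A minimal sketch of this noisy-gradient loop (mine, not the talk's; the squared loss, the L1 clipping bound, and the per-step epsilon are illustrative assumptions):

import numpy as np

def private_sgd(X, y, eps_per_step, T, eta, clip=1.0):
    # Each per-example gradient is released via the Laplace mechanism;
    # the parameter update is post-processing of that release, and basic
    # composition makes the whole run (T * eps_per_step)-private.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        i = np.random.randint(n)
        g = 2 * (X[i] @ theta - y[i]) * X[i]   # squared-loss gradient
        # Clip so the gradient's L1 sensitivity is at most `clip`.
        g *= min(1.0, clip / (np.abs(g).sum() + 1e-12))
        # Zero-mean noise: E[g_t] is unchanged (up to the clipping).
        g += np.random.laplace(scale=clip / eps_per_step, size=d)
        theta -= eta * g
    return theta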
27. What else can we do?
• Statistical Estimation
• Graph Analysis
• Combinatorial Optimization
• Spectral Analysis of Matrices
• Anomaly Detection/Analysis of Data Streams
• Convex Optimization
• Equilibrium Computation
• Computation of Optimal 1-sided and 2-sided Matchings
• Pareto Optimal Exchanges
• …
28. Differential Privacy ⇒ Learning
Theorem*: An ε-differentially private algorithm cannot overfit its training set by more than ε.
*Lots of interesting details missing! See our paper in Science.
29. Thresholdout [DFHPRR15]
thresholdout.py:

import numpy as np

def Thresholdout(sample, holdout, q, sigma, threshold):
    # Compare q's mean on the training sample to its mean on the holdout.
    sample_mean = np.mean([q(x) for x in sample])
    holdout_mean = np.mean([q(x) for x in holdout])
    if abs(sample_mean - holdout_mean) < np.random.normal(threshold, sigma):
        # q does not overfit: the training answer is already accurate
        return sample_mean
    else:
        # q overfits: return a noised holdout answer instead
        return holdout_mean + np.random.normal(0, sigma)
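A hypothetical, scaled-down call in the spirit of the experiment on the next slide (the statistic q and all parameter values are illustrative, not from the paper):

import numpy as np

data = np.random.normal(size=(2000, 100))   # no-signal data, scaled down
sample, holdout = data[:1000], data[1000:]
q = lambda row: row[0]                      # an illustrative per-row statistic
answer = Thresholdout(sample, holdout, q, sigma=0.01, threshold=0.04)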
30. Reusable holdout example
• Data set with 2n = 20,000 rows and d = 10,000 variables. Class labels in {-1, 1}.
• Analyst performs stepwise variable selection:
1. Split data into training/holdout of size n
2. Select "best" k variables on training data
3. Only use variables also good on holdout
4. Build linear predictor out of k variables
5. Find best k = 10, 20, 30, …
31. Classification after feature selection
No signal: data are random Gaussians; labels are drawn independently at random from {-1, 1}.
Thresholdout correctly detects overfitting!
32. Classification after feature selection
Strong signal: 20 features are mildly correlated with the target; the remaining attributes are uncorrelated.
Thresholdout correctly detects the right model size!
33. So…
• Differential privacy provides:
– A rigorous, provable guarantee with strong privacy semantics.
– A set of tools and composition theorems that allow for modular, easy design of privacy-preserving algorithms.
– Protection against overfitting, even when privacy is not a concern.
34. Thanks!
To learn more:
• Our textbook on differential privacy:
– Available for free on my website: http://www.cis.upenn.edu/~aaroth
• Connections between privacy and overfitting:
– Dwork, Feldman, Hardt, Pitassi, Reingold, Roth. "The Reusable Holdout: Preserving Validity in Adaptive Data Analysis." Science, August 7, 2015.
– Dwork, Feldman, Hardt, Pitassi, Reingold, Roth. "Preserving Statistical Validity in Adaptive Data Analysis." STOC 2015.
– Bassily, Nissim, Smith, Steinke, Stemmer, Ullman. "Algorithmic Stability for Adaptive Data Analysis." STOC 2016.
– Rogers, Roth, Smith, Thakkar. "Max-Information, Differential Privacy, and Post-Selection Hypothesis Testing." FOCS 2016.
– Cummings, Ligett, Nissim, Roth, Wu. "Adaptive Learning with Robust Generalization Guarantees." COLT 2016.