Convergence of ABC methods
Christian P. Robert
Université Paris-Dauphine, University of Warwick, & IUF
joint work with D. Frazier, G. Martin & J. Rousseau
Outline
1 Approximate Bayesian computation
2 [some] asymptotics of ABC
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
2 [some] asymptotics of ABC
Intractable likelihoods
Cases when the likelihood function f(y|θ) is unavailable and when the completion step
f(y|θ) = ∫ f(y, z|θ) dz
is impossible or too costly because of the dimension of z
⇒ MCMC cannot be implemented!
Illustration
Example (Ising & Potts models)
Potts model: if y takes values on a grid Y of size k^n and
f(y|θ) ∝ exp(θ Σ_{l∼i} I_{y_l = y_i})
where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant Z_θ
Special case of the intractable normalising constant, making the likelihood impossible to compute
The ABC method
Bayesian setting: target is π(θ)f(x|θ)
When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
θ′ ∼ π(θ), z ∼ f(z|θ′),
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
Why does it work?!
The proof is trivial:
f(θi) ∝ Σ_{z∈D} π(θi)f(z|θi) I_y(z)
     ∝ π(θi)f(y|θi)
     = π(θi|y).
[Accept–Reject 101]
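As a toy check of this Accept–Reject argument (my illustration, not in the slides), exact matching z = y is feasible for discrete data, e.g. a Binomial model with a Beta prior, where the ABC output can be compared with the known posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: y ~ Binomial(n, theta), theta ~ Beta(1, 1) prior
n, theta_true = 20, 0.3
y = rng.binomial(n, theta_true)

accepted = []
while len(accepted) < 1000:
    theta = rng.beta(1.0, 1.0)      # draw theta' from the prior
    z = rng.binomial(n, theta)      # draw z from the likelihood
    if z == y:                      # exact match: keep theta'
        accepted.append(theta)

# accepted values are draws from the exact posterior Beta(1 + y, 1 + n - y)
print(np.mean(accepted), (1 + y) / (2 + n))
```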
A as A...pproximative
When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ)Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
ABC algorithm
Algorithm 1 Likelihood-free rejection sampler
for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for
where η(y) defines a (not necessarily sufficient) statistic
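In Python, a direct and generic transcription of Algorithm 1 (my sketch; prior_rng, simulate, summary and eps are problem-specific placeholders supplied by the user, and summary is assumed to return a vector):

```python
import numpy as np

def abc_rejection(y_obs, prior_rng, simulate, summary, eps, N, rng):
    """Likelihood-free rejection sampler (Algorithm 1): keep simulating
    (theta', z) from prior x likelihood until rho{eta(z), eta(y)} <= eps."""
    s_obs = summary(y_obs)
    thetas = []
    for _ in range(N):
        while True:
            theta = prior_rng(rng)            # theta' ~ pi(.)
            z = simulate(theta, rng)          # z ~ f(.|theta')
            if np.linalg.norm(summary(z) - s_obs) <= eps:
                thetas.append(theta)          # accept theta_i = theta'
                break
    return np.array(thetas)
```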
Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ)f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)).
MA example
Back to the MA(q) model
x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}
Simple prior: uniform over the inverse [real and complex] roots of
Q(u) = 1 − Σ_{i=1}^q ϑ_i u^i
under the identifiability conditions, i.e. a uniform prior over the identifiability zone, e.g. the triangle for MA(2)
MA example (2)
ABC algorithm thus made of
1 picking a new value (ϑ1, ϑ2) in the triangle
2 generating an iid sequence (ε_t)_{−q<t≤T}
3 producing a simulated series (x′_t)_{1≤t≤T}
Distance: basic distance between the series
ρ((x_t)_{1≤t≤T}, (x′_t)_{1≤t≤T}) = Σ_{t=1}^T (x_t − x′_t)²
or distance between summary statistics like the q autocorrelations
τ_j = Σ_{t=j+1}^T x_t x_{t−j}
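Putting steps 1–3 and the summary-statistic distance together, a minimal Python sketch (my illustration, not part of the slides; T, the number of prior draws and the 1% quantile tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, q = 200, 2

def simulate_ma2(th1, th2, T, rng):
    e = rng.normal(size=T + q)                     # iid (eps_t), -q < t <= T
    return e[q:] + th1 * e[q-1:-1] + th2 * e[:-q]  # x_t = e_t + th1 e_{t-1} + th2 e_{t-2}

def summaries(x):
    # q autocorrelation-type statistics tau_j = sum_t x_t x_{t-j}, j = 1, 2
    return np.array([np.sum(x[j:] * x[:T - j]) for j in (1, 2)])

def draw_triangle(rng):
    # uniform over the MA(2) identifiability triangle:
    # -2 <= th1 <= 2, th1 + th2 >= -1, th1 - th2 <= 1
    while True:
        th1, th2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
        if th1 + th2 >= -1 and th1 - th2 <= 1:
            return th1, th2

y = simulate_ma2(0.6, 0.2, T, rng)
s_obs = summaries(y)

draws = np.array([draw_triangle(rng) for _ in range(20_000)])
dists = np.array([np.linalg.norm(summaries(simulate_ma2(t1, t2, T, rng)) - s_obs)
                  for t1, t2 in draws])
keep = draws[dists <= np.quantile(dists, 0.01)]    # 1% quantile tolerance
print(keep.mean(axis=0))
```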
Comparison of distance impact
Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
[Figure: ABC posterior samples of θ1 (range 0.0–0.8) and θ2 (range −2.0–1.5) at tolerances ε = 100%, 10%, 1%, 0.1%]
ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine the tolerance as a quantile of the observed distances, say the 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
• Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice
[Blum & François, 2010]
• ABC is a k-nearest neighbour (knn) method with k_N = Nε_N
[Loftsgaarden & Quesenberry, 1965]
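In code, the quantile rule is one line; the sketch below (mine, not from the slides) follows the ε = ε_N = q_α(d_1, . . . , d_N) recipe and makes its knn reading explicit, since accepting all draws within the α-quantile keeps roughly k_N = αN nearest neighbours of the observed summary:

```python
import numpy as np

def abc_quantile_accept(thetas, dists, alpha=0.01):
    """Quantile-based tolerance eps_N = q_alpha(d_1, ..., d_N):
    equivalently, keep the k_N ~ alpha * N nearest simulations."""
    eps_N = np.quantile(dists, alpha)
    return thetas[dists <= eps_N], eps_N
```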
ABC consistency
Provided
k_N/ log log N −→ ∞ and k_N/N −→ 0
as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ)|S = s0]
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints
k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞
Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
• when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
• when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
• when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]
Drag: Only applies to sufficient summary statistics
[some] asymptotics of ABC
1 Approximate Bayesian computation
2 [some] asymptotics of ABC
asymptotic setup
consistency of ABC posteriors
asymptotic shape of posterior distribution
asymptotic behaviour of EABC[θ]
asymptotic setup
• asymptotic: y = y^{(n)} ∼ P_θ^n and ε = ε_n, n → +∞
• parametric: θ ∈ R^k, k fixed
• concentration of summary statistics η(z^n):
∃ b : θ → b(θ) such that ‖η(z^n) − b(θ)‖ = o_{P_θ}(1), ∀θ
Objects of interest:
• posterior concentration and asymptotic shape of π_ε(·|η(y^{(n)})) (normality?)
• convergence of the posterior mean θ̂ = EABC[θ|η(y^{(n)})]
• asymptotic acceptance rate
[Frazier et al., 2016]
consistency of ABC posteriors
ABC algorithm Bayesian consistent at θ0 if, for any δ > 0,
P(‖θ − θ0‖ > δ | ‖η(y) − η(z)‖ ≤ ε) → 0
as n → +∞, ε → 0
Bayesian consistency implies that sets containing θ0 have posterior probability tending to one as n → +∞, with implication being the existence of a specific rate of concentration
• Concentration around the true value and Bayesian consistency impose less stringent conditions on the convergence speed of the tolerance ε_n to zero, when compared with asymptotic normality of the ABC posterior
• asymptotic normality of the ABC posterior mean does not require asymptotic normality of the ABC posterior
consistency of ABC posteriors
• Concentration of summary η(z): there exists b(θ) such that
‖η(z) − b(θ)‖ = o_{P_θ}(1)
• Consistency:
Π_{ε_n}(‖θ − θ0‖ ≤ δ | η(y)) = 1 + o_p(1)
• Convergence rate: there exists δ_n = o(1) such that
Π_{ε_n}(‖θ − θ0‖ ≤ δ_n | η(y)) = 1 + o_p(1)
• Point estimator consistency:
θ̂ = EABC[θ|η(y^{(n)})], EABC[θ|η(y^{(n)})] − θ0 = o_p(1)
v_n(EABC[θ|η(y^{(n)})] − θ0) ⇒ N(0, v)
Rate of convergence
Π_ε(· | ‖η(y) − η(z)‖ ≤ ε) concentrates at rate λ_n → 0 if
lim sup_{ε→0} lim sup_{n→+∞} Π_ε(‖θ − θ0‖ > λ_n M | ‖η(y) − η(z)‖ ≤ ε) → 0
in P0-probability when M goes to infinity.
Posterior rate of concentration related to the rate at which information accumulates about the true parameter vector
Related results
Existing studies of the large-sample properties of ABC have primarily focused on the asymptotic properties of point estimators derived from ABC
[Creel et al., 2015; Jasra, 2015; Li & Fearnhead, 2015]
Convergence when ε_n ≳ σ_n
Under (main) assumptions
(A1) ∃ σ_n → 0 such that
P_θ(σ_n^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), lim_{u→+∞} h(u) = 0
(A2) Π(‖b(θ) − b(θ0)‖ ≤ u) ≳ u^D, u ≈ 0
one obtains
— posterior consistency
— a posterior concentration rate λ_n that depends on the deviation control of d_2{η(z), b(θ)}
— a posterior concentration rate for b(θ) bounded from below by O(ε_n)
More precisely,
Π_{ε_n}(‖b(θ) − b(θ0)‖ ≲ ε_n + σ_n h^{−1}(ε_n^D) | η(y)) = 1 + o_{p0}(1)
If also ‖θ − θ0‖ ≤ L‖b(θ) − b(θ0)‖^α locally and θ → b(θ) is 1-1, then
Π_{ε_n}(‖θ − θ0‖ ≲ ε_n^α + σ_n^α (h^{−1}(ε_n^D))^α =: δ_n | η(y)) = 1 + o_{p0}(1)
Comments
(A1): if P_θ(σ_n^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), two cases
1 Polynomial tail: h(u) ≲ u^{−κ}, then δ_n = ε_n + σ_n ε_n^{−D/κ}
2 Exponential tail: h(u) ≲ e^{−cu}, then δ_n = ε_n + σ_n log(1/ε_n)
• E.g., η(y) = n^{−1} Σ_i g(y_i) with moment conditions on g (case 1) or a Laplace transform (case 2)
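For a concrete instance (my worked example, a sub-Gaussian variant of case 2, not in the slides): take η(y) the sample mean of n i.i.d. N(θ, 1) variables, so that b(θ) = θ and σ_n = n^{−1/2}:

```latex
% Gaussian tail: h(u) = e^{-u^2/2}, a slightly lighter tail than case 2's e^{-cu}
\[
  P_\theta\!\left(\sigma_n^{-1}|\bar y_n - \theta| > u\right) \le 2e^{-u^2/2},
  \qquad
  h^{-1}(\epsilon_n^D) = \sqrt{2D\log(1/\epsilon_n)},
\]
\[
  \delta_n \asymp \epsilon_n + n^{-1/2}\sqrt{2D\log(1/\epsilon_n)},
\]
% so the tolerance term dominates once eps_n >> n^{-1/2} sqrt(log(1/eps_n)).
```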
Comments
(A2): Π(‖b(θ) − b(θ0)‖ ≤ u) ≳ u^D: if Π regular enough then D = dim(θ)
• no need to approximate the density f(η(y)|θ)
• Same result holds when ε_n = o(σ_n) if (A1) is replaced with
inf_{|x|≤M} P_θ(‖σ_n^{−1}(η(z) − b(θ)) − x‖ ≤ u) ≳ u^D, u ≈ 0
proof
Simple enough proof: assume σ_n ≤ δε_n and
‖η(y) − b(θ0)‖ ≲ σ_n, ‖η(y) − η(z)‖ ≤ ε_n
Hence
‖b(θ) − b(θ0)‖ > δ_n ⇒ ‖η(z) − b(θ)‖ > δ_n − ε_n − σ_n := t_n
Also, if ‖b(θ) − b(θ0)‖ ≤ ε_n/3, then
‖η(y) − η(z)‖ ≤ ‖η(z) − b(θ)‖ + σ_n + ε_n/3 ≤ ε_n whenever ‖η(z) − b(θ)‖ ≤ ε_n/3
and
Π_{ε_n}(‖b(θ) − b(θ0)‖ > δ_n | y)
≤ ∫_{‖b(θ)−b(θ0)‖>δ_n} P_θ(‖η(z) − b(θ)‖ > t_n) dΠ(θ) / ∫_{‖b(θ)−b(θ0)‖≤ε_n/3} P_θ(‖η(z) − b(θ)‖ ≤ ε_n/3) dΠ(θ)
≲ ε_n^{−D} h(t_n σ_n^{−1}) ∫_Θ c(θ) dΠ(θ)
Summary statistic and (in)consistency
Consider the moving average MA(2) model
y_t = e_t + θ1 e_{t−1} + θ2 e_{t−2}, e_t ∼ i.i.d. N(0, 1)
and
−2 ≤ θ1 ≤ 2, θ1 + θ2 ≥ −1, θ1 − θ2 ≤ 1.
summary statistics equal to sample autocovariances
η_j(y) = T^{−1} Σ_{t=1+j}^T y_t y_{t−j}, j = 0, 1
with
η_0(y) →_P E[y_t²] = 1 + (θ01)² + (θ02)² and η_1(y) →_P E[y_t y_{t−1}] = θ01(1 + θ02)
For the ABC target p_ε(θ|η(y)) to be degenerate at θ0,
0 = b(θ0) − b(θ) = ( [1 + (θ01)² + (θ02)²] − [1 + θ1² + θ2²], θ01(1 + θ02) − θ1(1 + θ2) )
must have the unique solution θ = θ0
Take θ01 = .6, θ02 = .2: the equation has two solutions,
θ1 = .6, θ2 = .2 and θ1 ≈ .5453, θ2 ≈ .3204
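This non-identifiability is easy to confirm numerically (my sketch; scipy's root finder is one convenient choice, and the two starting points are picked to land in the two basins):

```python
import numpy as np
from scipy.optimize import fsolve

t01, t02 = 0.6, 0.2
b0 = np.array([1 + t01**2 + t02**2, t01 * (1 + t02)])  # b(theta_0)

def gap(theta):
    t1, t2 = theta
    return np.array([1 + t1**2 + t2**2, t1 * (1 + t2)]) - b0

# Two starting points converge to two distinct roots of b(theta) = b(theta_0)
for start in ([0.6, 0.2], [0.5, 0.35]):
    print(fsolve(gap, start))   # -> [0.6, 0.2] and approx [0.5453, 0.3204]
```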
asymptotic shape of posterior distribution
Shape of
Π_ε(· | ‖η(y) − η(z)‖ ≤ ε_n)
for several connections between ε_n and the rate at which η(y^n) satisfies a CLT
Three different regimes:
1 σ_n = o(ε_n) −→ Uniform limit
2 σ_n ≍ ε_n −→ perturbed Gaussian limit
3 ε_n = o(σ_n) −→ Gaussian limit
scaling matrices
Introduction of a sequence of (k, k) p.d. matrices Σ_n(θ) such that for all θ near θ0
c1‖D_n‖∗ ≤ ‖Σ_n(θ)‖∗ ≤ c2‖D_n‖∗, D_n = diag(d_n(1), · · · , d_n(k)),
with 0 < c1, c2 < +∞, d_n(j) → +∞ for all j
Possibly different convergence rates for the components of η(z)
Reordering components so that
d_n(1) ≤ · · · ≤ d_n(k)
with the assumption that
lim inf_n d_n(j)ε_n = lim sup_n d_n(j)ε_n
new assumptions
(B1) Concentration of summary η: Σ_n(θ) ∈ R^{k1×k1} is o(1),
Σ_n(θ)^{−1}{η(z) − b(θ)} ⇒ N_{k1}(0, Id), (Σ_n(θ)Σ_n(θ0)^{−1})_n = Co
(B2) b(θ) is C¹ and
‖θ − θ0‖ ≲ ‖b(θ) − b(θ0)‖
(B3) Dominated convergence and
lim_n P_θ(Σ_n(θ)^{−1}{η(z) − b(θ)} ∈ u + B(0, u_n)) / ∏_j u_n(j) = ϕ(u)
main result
Set Σ_n(θ) = σ_n D(θ) for θ ≈ θ0 and Z° = Σ_n(θ0)^{−1}(η(y) − b(θ0)); then under (B1) and (B2)
• when ε_n σ_n^{−1} → +∞
Π_{ε_n}[ε_n^{−1}(θ − θ0) ∈ A | y] ⇒ U_{B0}(A), B0 = {x ∈ R^k ; ‖b′(θ0)^T x‖ ≤ 1}
• when ε_n σ_n^{−1} → c
Π_{ε_n}[Σ_n(θ0)^{−1}(θ − θ0) − Z° ∈ A | y] ⇒ Q_c(A), Q_c ≠ N
• when ε_n σ_n^{−1} → 0 and (B3) holds, set
V_n = [b′(θ0)]^T Σ_n(θ0) b′(θ0)
then
Π_{ε_n}[V_n^{−1}(θ − θ0) − Z̃° ∈ A | y] ⇒ Φ(A)
intuition (?!)
Set x(θ) = σ_n^{−1}(θ − θ0) − Z° (k = 1); then
π_n := Π_{ε_n}[ε_n^{−1}(θ − θ0) ∈ A | y]
= ∫_{|θ−θ0|≤u_n} I_{x(θ)∈A} P_θ(‖σ_n^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σ_n^{−1}ε_n) p(θ) dθ / ∫_{|θ−θ0|≤u_n} P_θ(‖σ_n^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σ_n^{−1}ε_n) p(θ) dθ + o_p(1)
• If ε_n/σ_n ≫ 1:
P_θ(‖σ_n^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σ_n^{−1}ε_n) = 1 + o(1) iff ‖x‖ ≤ σ_n^{−1}ε_n + o(1)
• If ε_n/σ_n = o(1):
P_θ(‖σ_n^{−1}(η(z) − b(θ)) + x‖ ≤ σ_n^{−1}ε_n) = φ(x)σ_n^{−1}ε_n(1 + o(1))
more comments
• Surprising: U(−ε_n, ε_n) limit when ε_n ≫ σ_n, but not that surprising since ε_n = o(1) means concentration around θ0 and σ_n = o(ε_n) implies that b(θ) − b(θ0) ≈ η(z) − η(y)
• again, no need to control the approximation of f(η(y)|θ) by a Gaussian density: merely a control of the distribution
• generalisation to the case where the eigenvalues of Σ_n differ: d_{n,1} ≠ · · · ≠ d_{n,k}
• behaviour of EABC(θ|y) consistent with Li & Fearnhead (2016)
even more comments
If (also) p(θ) is Hölder β,
EABC(θ|y) − θ0 = σ_n Z° / b′(θ0)   [score for f(η(y)|θ)]
  + Σ_{j=1}^{⌊β/2⌋} ε_n^{2j} H_j(θ0, p, b)   [bias from threshold approx]
  + o(σ_n) + O(ε_n^{β+1})
with
• if ε_n² = o(σ_n): Efficiency,
EABC(θ|y) − θ0 = σ_n Z° / b′(θ0) + o(σ_n)
• the H_j(θ0, p, b)'s are deterministic
we gain nothing by getting a first crude θ̂(y) = EABC(θ|y) for some η(y) and then rerunning ABC with θ̂(y)
asymptotic behaviour of EABC[θ]
When p = dim(η(y)) = d = dim(θ) and ε_n = o(n^{−3/10}),
EABC[d_T(θ − θ0) | y°] ⇒ N(0, ((∇b°)^T Σ^{−1} ∇b°)^{−1})
[Li & Fearnhead (2016)]
In fact, if ε_n^{β+1}√n = o(1), with β the Hölder-smoothness of π,
EABC[(θ − θ0) | y°] = (∇b°)^{−1}Z° / √n + Σ_{j=1}^k h_j(θ0) ε_n^{2j} + o_p(1), 2k = ⌊β⌋
Iterating for fixed p mildly interesting: if
η̃(y) = EABC[θ | y°]
then
EABC[θ | η̃(y)] = θ0 + (∇b°)^{−1}Z° / √n + (π′(θ0)/π(θ0)) ε_n² + o(·)
[Fearnhead & Prangle, 2012]
more asymptotic behaviour of EABC[θ]
Li and Fearnhead (2016, 2017) consider that
EABC[d_T(θ − θ0) | y°]
is not optimal when p > d
• If √n ε_n² = o(1) and ε_n√n ≠ o(1),
√n[EABC(θ) − θ0] = P_{∇b°} Z° + o_p(1)
Z° = √n(η(y) − b°)
P_{∇b°} Z° = ((∇b°)^T ∇b°)^{−1}(∇b°)^T Z°
and Vas(P_{∇b°} Z°) ≥ ((∇b°)^T Vas(Z°)^{−1} ∇b°)^{−1}
• If ε_n√n = o(1),
√n[EABC(θ) − θ0] = ((∇b°)^T Σ^{−1} ∇b°)^{−1}(∇b°)^T Σ^{−1} Z° + o_p(1)
impact of the dimension of η
dimension of η(·) does not impact the above result, but impacts the acceptance probability
• if ε_n = o(σ_n), k1 = dim(η(y)), k = dim(θ) & k1 ≥ k,
α_n := Pr(‖y − z‖ ≤ ε_n) ≍ ε_n^{k1} σ_n^{−k1+k}
• if ε_n ≳ σ_n,
α_n := Pr(‖y − z‖ ≤ ε_n) ≍ ε_n^k
• If we choose α_n:
  • α_n = o(σ_n^k) leads to ε_n = σ_n(α_n σ_n^{−k})^{1/k1} = o(σ_n)
  • α_n ≳ σ_n^k leads to ε_n ≍ α_n^{1/k}
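A quick Monte Carlo look at the curse of dimensionality in α_n (my sketch, not in the slides; the Gaussian summaries and the values of σ and ε are arbitrary, mimicking the ε_n = o(σ_n) case at a fixed θ):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, eps, M = 1.0, 0.5, 200_000   # eps < sigma, mimicking eps_n = o(sigma_n)

for k1 in (1, 2, 3, 4):
    # eta(z) - eta(y) approx N(0, 2 sigma^2 I_{k1}) at a fixed theta
    diff = rng.normal(scale=np.sqrt(2) * sigma, size=(M, k1))
    alpha = np.mean(np.linalg.norm(diff, axis=1) <= eps)
    print(k1, alpha)   # decays geometrically, roughly like (eps/sigma)^{k1}
```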
conclusion on ABC consistency
• asymptotic description of ABC: different regimes depending on ε_n & σ_n
• no point in choosing ε_n arbitrarily small: just ε_n = o(σ_n)
• no asymptotic gain in iterative ABC
• results under weak conditions, obtained without studying g(η(z)|θ)
More Related Content

What's hot

Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Christian Robert
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixturesChristian Robert
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapterChristian Robert
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
ABC short course: final chapters
ABC short course: final chaptersABC short course: final chapters
ABC short course: final chaptersChristian Robert
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
An overview of Bayesian testing
An overview of Bayesian testingAn overview of Bayesian testing
An overview of Bayesian testingChristian Robert
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannolli0601
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modulesChristian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?Christian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes FactorsChristian Robert
 
NBBC15, Reyjavik, June 08, 2015
NBBC15, Reyjavik, June 08, 2015NBBC15, Reyjavik, June 08, 2015
NBBC15, Reyjavik, June 08, 2015Christian Robert
 

What's hot (20)

ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapter
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
ABC short course: final chapters
ABC short course: final chaptersABC short course: final chapters
ABC short course: final chapters
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified models
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
An overview of Bayesian testing
An overview of Bayesian testingAn overview of Bayesian testing
An overview of Bayesian testing
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Big model, big data
Big model, big dataBig model, big data
Big model, big data
 
Boston talk
Boston talkBoston talk
Boston talk
 
Intractable likelihoods
Intractable likelihoodsIntractable likelihoods
Intractable likelihoods
 
NBBC15, Reyjavik, June 08, 2015
NBBC15, Reyjavik, June 08, 2015NBBC15, Reyjavik, June 08, 2015
NBBC15, Reyjavik, June 08, 2015
 

Viewers also liked

Slides econometrics-2017-graduate-2
Slides econometrics-2017-graduate-2Slides econometrics-2017-graduate-2
Slides econometrics-2017-graduate-2Arthur Charpentier
 
Introduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsIntroduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsChristian Robert
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methodsChristian Robert
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big DataChristian Robert
 
Tokyo F# meetup 14-08-03
Tokyo F# meetup 14-08-03Tokyo F# meetup 14-08-03
Tokyo F# meetup 14-08-03nrolland
 
Non-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixturesNon-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixturesChristian Robert
 
Montpellier Math Colloquium
Montpellier Math ColloquiumMontpellier Math Colloquium
Montpellier Math ColloquiumChristian Robert
 
Computational tools for Bayesian model choice
Computational tools for Bayesian model choiceComputational tools for Bayesian model choice
Computational tools for Bayesian model choiceChristian Robert
 
Approximate Bayesian computation and machine learning (BigMC 2014)
Approximate Bayesian computation and machine learning (BigMC 2014)Approximate Bayesian computation and machine learning (BigMC 2014)
Approximate Bayesian computation and machine learning (BigMC 2014)Pierre Pudlo
 
Ratio of uniforms and beyond
Ratio of uniforms and beyondRatio of uniforms and beyond
Ratio of uniforms and beyondChristian Robert
 
Omiros' talk on the Bernoulli factory problem
Omiros' talk on the  Bernoulli factory problemOmiros' talk on the  Bernoulli factory problem
Omiros' talk on the Bernoulli factory problemBigMC
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsUmberto Picchini
 
Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Umberto Picchini
 
(Approximate) Bayesian computation as a new empirical Bayes (something)?
(Approximate) Bayesian computation as a new empirical Bayes (something)?(Approximate) Bayesian computation as a new empirical Bayes (something)?
(Approximate) Bayesian computation as a new empirical Bayes (something)?Christian Robert
 

Viewers also liked (20)

Econometrics 2017-graduate-3
Econometrics 2017-graduate-3Econometrics 2017-graduate-3
Econometrics 2017-graduate-3
 
Slides econometrics-2017-graduate-2
Slides econometrics-2017-graduate-2Slides econometrics-2017-graduate-2
Slides econometrics-2017-graduate-2
 
Introduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsIntroduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methods
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methods
 
Hastings paper discussion
Hastings paper discussionHastings paper discussion
Hastings paper discussion
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
Tokyo F# meetup 14-08-03
Tokyo F# meetup 14-08-03Tokyo F# meetup 14-08-03
Tokyo F# meetup 14-08-03
 
Non-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixturesNon-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixtures
 
Montpellier Math Colloquium
Montpellier Math ColloquiumMontpellier Math Colloquium
Montpellier Math Colloquium
 
Computational tools for Bayesian model choice
Computational tools for Bayesian model choiceComputational tools for Bayesian model choice
Computational tools for Bayesian model choice
 
Bayesian Core: Chapter 3
Bayesian Core: Chapter 3Bayesian Core: Chapter 3
Bayesian Core: Chapter 3
 
Bayesian Core: Chapter 4
Bayesian Core: Chapter 4Bayesian Core: Chapter 4
Bayesian Core: Chapter 4
 
Approximate Bayesian computation and machine learning (BigMC 2014)
Approximate Bayesian computation and machine learning (BigMC 2014)Approximate Bayesian computation and machine learning (BigMC 2014)
Approximate Bayesian computation and machine learning (BigMC 2014)
 
Ratio of uniforms and beyond
Ratio of uniforms and beyondRatio of uniforms and beyond
Ratio of uniforms and beyond
 
Desy
DesyDesy
Desy
 
R and Data Science
R and Data ScienceR and Data Science
R and Data Science
 
Omiros' talk on the Bernoulli factory problem
Omiros' talk on the  Bernoulli factory problemOmiros' talk on the  Bernoulli factory problem
Omiros' talk on the Bernoulli factory problem
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space models
 
Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...
 
(Approximate) Bayesian computation as a new empirical Bayes (something)?
(Approximate) Bayesian computation as a new empirical Bayes (something)?(Approximate) Bayesian computation as a new empirical Bayes (something)?
(Approximate) Bayesian computation as a new empirical Bayes (something)?
 

Similar to Convergence of ABC methods

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning Sean Meyn
 
Divergence clustering
Divergence clusteringDivergence clustering
Divergence clusteringFrank Nielsen
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
DeepLearn2022 2. Variance Matters
DeepLearn2022  2. Variance MattersDeepLearn2022  2. Variance Matters
DeepLearn2022 2. Variance MattersSean Meyn
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussionChristian Robert
 
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Christian Robert
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelMatt Moores
 
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsChristian Robert
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 

Similar to Convergence of ABC methods (20)

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
Intro to ABC
Intro to ABCIntro to ABC
Intro to ABC
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning
 
Divergence clustering
Divergence clusteringDivergence clustering
Divergence clustering
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
DeepLearn2022 2. Variance Matters
DeepLearn2022  2. Variance MattersDeepLearn2022  2. Variance Matters
DeepLearn2022 2. Variance Matters
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussion
 
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts model
 
Edinburgh, Bayes-250
Edinburgh, Bayes-250Edinburgh, Bayes-250
Edinburgh, Bayes-250
 
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Ab cancun
Ab cancunAb cancun
Ab cancun
 

More from Christian Robert

Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimationChristian Robert
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerChristian Robert
 

More from Christian Robert (13)

discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 

Recently uploaded

Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfSELF-EXPLANATORY
 
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)riyaescorts54
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPirithiRaju
 
Environmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial BiosensorEnvironmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial Biosensorsonawaneprad
 
Topic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxTopic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxJorenAcuavera1
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...lizamodels9
 
The dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxThe dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxEran Akiva Sinbar
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》rnrncn29
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...D. B. S. College Kanpur
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)Columbia Weather Systems
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubaikojalkojal131
 
Four Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptFour Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptJoemSTuliba
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxMurugaveni B
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxpriyankatabhane
 
User Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationUser Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationColumbia Weather Systems
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝soniya singh
 

Recently uploaded (20)

Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
 
Hot Sexy call girls in Moti Nagar,🔝 9953056974 🔝 escort Service
Hot Sexy call girls in  Moti Nagar,🔝 9953056974 🔝 escort ServiceHot Sexy call girls in  Moti Nagar,🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Moti Nagar,🔝 9953056974 🔝 escort Service
 
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
 
Environmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial BiosensorEnvironmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial Biosensor
 
Topic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxTopic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptx
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
 
The dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxThe dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptx
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
 
Four Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptFour Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.ppt
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptx
 
User Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationUser Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather Station
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
 

Convergence of ABC methods

  • 1. Convergence of ABC methods Christian P. Robert Universit´e Paris-Dauphine, University of Warwick, & IUF joint work with D. Frazier, G. Martin & J. Rousseau
  • 2. Outline 1 Approximate Bayesian computation 2 [some] asymptotics of ABC
  • 3. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics 2 [some] asymptotics of ABC
  • 4. Untractable likelihoods Cases when the likelihood function f (y|θ) is unavailable and when the completion step f (y|θ) = Z f (y, z|θ) dz is impossible or too costly because of the dimension of z c MCMC cannot be implemented!
  • 5. Untractable likelihoods Cases when the likelihood function f (y|θ) is unavailable and when the completion step f (y|θ) = Z f (y, z|θ) dz is impossible or too costly because of the dimension of z c MCMC cannot be implemented!
  • 6. Illustration Example (Ising & Potts models) Potts model: if y takes values on a grid Y of size kn and f (y|θ) ∝ exp θ l∼i Iyl =yi where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant Zθ
  • 7. Illustration Example (Ising & Potts models) Potts model: if y takes values on a grid Y of size kn and f (y|θ) ∝ exp θ l∼i Iyl =yi where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant Zθ Special case of the intractable normalising constant, making the likelihood impossible to compute
  • 8. The ABC method Bayesian setting: target is π(θ)f (x|θ)
  • 9. The ABC method Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique:
  • 10. The ABC method Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: ABC algorithm For an observation y ∼ f (y|θ), under the prior π(θ), keep jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y. [Tavar´e et al., 1997]
  • 11. Why does it work?! The proof is trivial: f (θi ) ∝ z∈D π(θi )f (z|θi )Iy(z) ∝ π(θi )f (y|θi ) = π(θi |y) . [Accept–Reject 101]
  • 12. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance
  • 13. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance Output distributed from π(θ) Pθ{ (y, z) < } ∝ π(θ| (y, z) < ) [Pritchard et al., 1999]
  • 14. ABC algorithm Algorithm 1 Likelihood-free rejection sampler 2 for i = 1 to N do repeat generate θ from the prior distribution π(·) generate z from the likelihood f (·|θ ) until ρ{η(z), η(y)} ≤ set θi = θ end for where η(y) defines a (not necessarily sufficient) statistic
  • 15. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }.
  • 16. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of a posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|η(y)) .
  • 17. MA example Back to the MA(q) model xt = t + q i=1 ϑi t−i Simple prior: uniform over the inverse [real and complex] roots in Q(u) = 1 − q i=1 ϑi ui under the identifiability conditions
  • 18. MA example Back to the MA(q) model xt = t + q i=1 ϑi t−i Simple prior: uniform prior over the identifiability zone, e.g. triangle for MA(2)
  • 19. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−q<t≤T 3 producing a simulated series (xt)1≤t≤T
  • 20. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−q<t≤T 3 producing a simulated series (xt)1≤t≤T Distance: basic distance between the series ρ((xt)1≤t≤T , (xt)1≤t≤T ) = T t=1 (xt − xt)2 or distance between summary statistics like the q autocorrelations τj = T t=j+1 xtxt−j
  • 21. Comparison of distance impact Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 22. Comparison of distance impact 0.0 0.2 0.4 0.6 0.8 01234 θ1 −2.0 −1.0 0.0 0.5 1.0 1.5 0.00.51.01.5 θ2 Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 23. Comparison of distance impact 0.0 0.2 0.4 0.6 0.8 01234 θ1 −2.0 −1.0 0.0 0.5 1.0 1.5 0.00.51.01.5 θ2 Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 24. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN)
  • 25. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & Fran¸cois, 2010]
  • 26. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & Fran¸cois, 2010] • ABC is a k-nearest neighbour (knn) method with kN = N N [Loftsgaarden & Quesenberry, 1965]
  • 27. ABC consistency Provided kN/ log log N −→ ∞ and kN/N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, 1 kN kN j=1 ϕ(θj ) −→ E[ϕ(θj )|S = s0] [Devroye, 1982]
  • 28. ABC consistency Provided kN/ log log N −→ ∞ and kN/N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, 1 kN kN j=1 ϕ(θj ) −→ E[ϕ(θj )|S = s0] [Devroye, 1982] Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints kN → ∞, kN /N → 0, hN → 0 and hp N kN → ∞,
  • 29. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like • when m = 1, 2, 3, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 • when m = 4, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 log N • when m > 4, kN ≈ N(p+4)/(m+p+4) and rate N − 4 m+p+4 [Biau et al., 2013]
  • 30. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like • when m = 1, 2, 3, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 • when m = 4, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 log N • when m > 4, kN ≈ N(p+4)/(m+p+4) and rate N − 4 m+p+4 [Biau et al., 2013] Drag: Only applies to sufficient summary statistics
  • 31. [some] asymptotics of ABC 1 Approximate Bayesian computation 2 [some] asymptotics of ABC asymptotic setup consistency of ABC posteriors asymptotic shape of posterior distribution asymptotic behaviour of EABC [θ]
  • 32. asymptotic setup • asymptotic: y = y(n) ∼ Pn θ and = n, n → +∞ • parametric: θ ∈ Rk, k fixed • concentration of summary statistics η(zn): ∃b : θ → b(θ) η(zn ) − b(θ) = oP θ (1), ∀θ
  • 33. asymptotic setup • asymptotic: y = y(n) ∼ Pn θ and = n, n → +∞ • parametric: θ ∈ Rk, k fixed • concentration of summary statistics η(zn): ∃b : θ → b(θ) η(zn ) − b(θ) = oP θ (1), ∀θ Objects of interest: • posterior concentration and asymptotic shape of π (·|η(y(n))) (normality?) • convergence of the posterior mean ˆθ = EABC[θ|η(y(n))] • asymptotic acceptance rate [Frazier et al., 2016]
  • 34. consistency of ABC posteriors ABC algorithm Bayesian consistent at θ0 if for any δ > 0, P ( θ − θ0 > δ| η(y) − η(z) ≤ ε) → 0 as n → +∞, ε → 0 Bayesian consistency implies that sets containing θ0 have posterior probability tending to one as n → +∞, with implication being the existence of a specific rate of concentration
  • 35. consistency of ABC posteriors ABC algorithm Bayesian consistent at θ0 if for any δ > 0, P ( θ − θ0 > δ| η(y) − η(z) ≤ ε) → 0 as n → +∞, ε → 0 • Concentration around true value and Bayesian consistency impose less stringent conditions on the convergence speed of tolerance n to zero, when compared with asymptotic normality of ABC posterior • asymptotic normality of ABC posterior mean does not require asymptotic normality of ABC posterior
  • 36. consistency of ABC posteriors • Concentration of summary η(z): there exists b(θ) such that η(z) − b(θ) = oP θ (1) • Consistency: Π n ( θ − θ0 ≤ δ|η(y)) = 1 + op(1) • Convergence rate: there exists δn = o(1) such that Π n ( θ − θ0 ≤ δn|η(y)) = 1 + op(1)
  • 37. consistency of ABC posteriors • Consistency: Π n ( θ − θ0 ≤ δ|η(y)) = 1 + op(1) • Convergence rate: there exists δn = o(1) such that Π n ( θ − θ0 ≤ δn|η(y)) = 1 + op(1) • Point estimator consistency ˆθ = EABC [θ|η(y(n) )], EABC [θ|η(y(n) )] − θ0 = op(1) vn(EABC [θ|η(y(n) )] − θ0) ⇒ N(0, v)
  • 38. Rate of convergence Π (·| η(y) − η(z) ≤ ε) concentrates at rate λn → 0 if lim sup ε→0 lim sup n→+∞ Π ( θ − θ0 > λnM| η(y)η(z) ≤ ε) → 0 in P0-probability when M goes to infinity.
  • 39. Rate of convergence Π (·| η(y) − η(z) ≤ ε) concentrates at rate λn → 0 if lim sup ε→0 lim sup n→+∞ Π ( θ − θ0 > λnM| η(y)η(z) ≤ ε) → 0 in P0-probability when M goes to infinity. Posterior rate of concentration related to rate at which information accumulates about true parameter vector
  • 40. Related results existing studies on the large sample properties of ABC, in which the asymptotic properties of point estimators derived from ABC have been the primary focus [Creel et al., 2015; Jasra, 2015; Li & Fearnhead, 2015]
  • 41. Convergence when εn ≳ σn Under (main) assumptions (A1) ∃ σn → 0 such that Pθ(σn^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), lim_{u→+∞} h(u) = 0 (A2) Π(‖b(θ) − b(θ0)‖ ≤ u) ≳ u^D, u ≈ 0 one obtains • posterior consistency • a posterior concentration rate λn that depends on the deviation control of d{η(z), b(θ)} • a posterior concentration rate for b(θ) bounded from below by O(εn)
  • 42. Convergence when εn ≳ σn Under (main) assumptions (A1) ∃ σn → 0 such that Pθ(σn^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), lim_{u→+∞} h(u) = 0 (A2) Π(‖b(θ) − b(θ0)‖ ≤ u) ≳ u^D, u ≈ 0 then Π_{εn}(‖b(θ) − b(θ0)‖ ≲ εn + σn h^{−1}(εn^D) | η(y)) = 1 + o_{p0}(1) If also ‖θ − θ0‖ ≤ L‖b(θ) − b(θ0)‖^α locally and θ → b(θ) is 1-1, then Π_{εn}(‖θ − θ0‖ ≲ εn^α + σn^α (h^{−1}(εn^D))^α =: δn | η(y)) = 1 + o_{p0}(1)
  • 43. Comments (A1): if Pθ(σn^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), two cases 1 Polynomial tail: h(u) ≲ u^{−κ}, then δn = εn + σn εn^{−D/κ} 2 Exponential tail: h(u) ≲ e^{−cu}, then δn = εn + σn log(1/εn)
  • 44. Comments (A1): if Pθ(σn^{−1}‖η(z) − b(θ)‖ > u) ≤ c(θ)h(u), two cases 1 Polynomial tail: h(u) ≲ u^{−κ}, then δn = εn + σn εn^{−D/κ} 2 Exponential tail: h(u) ≲ e^{−cu}, then δn = εn + σn log(1/εn) • E.g., η(y) = n^{−1} Σ_i g(yi) with moment conditions on g (case 1) or conditions on its Laplace transform (case 2)
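As a concrete instance of these two rates (my own worked example, not from the slides), take the CLT scale σn = n^{−1/2} and a polynomial tolerance εn = n^{−τ}, 0 < τ < 1/2; then δn = εn + σn h^{−1}(εn^D) reads:

% delta_n in the two tail regimes, assuming sigma_n = n^{-1/2}, eps_n = n^{-tau}
\begin{align*}
\text{(1) } h(u) \lesssim u^{-\kappa}&:\quad
  h^{-1}(\varepsilon_n^D) = \varepsilon_n^{-D/\kappa}
  \;\Rightarrow\;
  \delta_n = n^{-\tau} + n^{-(1/2 - \tau D/\kappa)},\\
\text{(2) } h(u) \lesssim e^{-cu}&:\quad
  h^{-1}(\varepsilon_n^D) \asymp \log(1/\varepsilon_n)
  \;\Rightarrow\;
  \delta_n = n^{-\tau} + n^{-1/2}\,\tau \log n,
\end{align*}
% so heavy tails cost a polynomial factor n^{tau D / kappa},
% while light tails only cost a logarithmic factor.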
  • 45. Comments (A2): Π(‖b(θ) − b(θ0)‖ ≤ u) ≳ u^D: if Π is regular enough then D = dim(θ) • no need to approximate the density f(η(y)|θ) • the same results hold when εn = o(σn) if (A1) is replaced with inf_{|x|≤M} Pθ(‖σn^{−1}(η(z) − b(θ)) − x‖ ≤ u) ≳ u^D, u ≈ 0
  • 46. proof Simple enough proof: assume σn ≤ δεn, ‖η(y) − b(θ0)‖ ≲ σn and ‖η(y) − η(z)‖ ≤ εn Hence ‖b(θ) − b(θ0)‖ > δn ⇒ ‖η(z) − b(θ)‖ > δn − εn − σn =: tn
  • 47. proof Simple enough proof: assume σn ≤ δεn, ‖η(y) − b(θ0)‖ ≲ σn and ‖η(y) − η(z)‖ ≤ εn Hence ‖b(θ) − b(θ0)‖ > δn ⇒ ‖η(z) − b(θ)‖ > δn − εn − σn =: tn Also, if ‖b(θ) − b(θ0)‖ ≤ εn/3, then ‖η(y) − η(z)‖ ≤ ‖η(z) − b(θ)‖ + εn/3 + σn ≤ εn whenever ‖η(z) − b(θ)‖ ≤ εn/3, and Π_{εn}(‖b(θ) − b(θ0)‖ > δn | y) ≤ ∫_{‖b(θ)−b(θ0)‖>δn} Pθ(‖η(z) − b(θ)‖ > tn) dΠ(θ) / ∫_{‖b(θ)−b(θ0)‖≤εn/3} Pθ(‖η(z) − b(θ)‖ ≤ εn/3) dΠ(θ) ≲ εn^{−D} h(tn σn^{−1}) ∫_Θ c(θ) dΠ(θ)
  • 48. Summary statistic and (in)consistency Consider the moving average MA(2) model yt = et + θ1 et−1 + θ2 et−2, et ∼ i.i.d. N(0, 1) and −2 ≤ θ1 ≤ 2, θ1 + θ2 ≥ −1, θ1 − θ2 ≤ 1.
  • 49. Summary statistic and (in)consistency Consider the moving average MA(2) model yt = et + θ1 et−1 + θ2 et−2, et ∼ i.i.d. N(0, 1) and −2 ≤ θ1 ≤ 2, θ1 + θ2 ≥ −1, θ1 − θ2 ≤ 1. summary statistics equal to the sample autocovariances ηj(y) = T^{−1} Σ_{t=1+j}^{T} yt yt−j, j = 0, 1 with η0(y) →_P E[yt²] = 1 + (θ01)² + (θ02)² and η1(y) →_P E[yt yt−1] = θ01(1 + θ02) For the ABC target pε(θ|η(y)) to be degenerate at θ0, the equation 0 = b(θ0) − b(θ), i.e. 1 + (θ01)² + (θ02)² = 1 + (θ1)² + (θ2)² and θ01(1 + θ02) = θ1(1 + θ2), must have the unique solution θ = θ0 Take θ01 = .6, θ02 = .2: the equation has two solutions, θ1 = .6, θ2 = .2 and θ1 ≈ .5453, θ2 ≈ .3204
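The two roots are easy to check numerically; below is a short sketch (assuming scipy is available; the function names are mine, not from the slides) that solves b(θ) = b(θ0) for θ01 = .6, θ02 = .2.

import numpy as np
from scipy.optimize import fsolve

t01, t02 = 0.6, 0.2
target = np.array([1 + t01**2 + t02**2, t01 * (1 + t02)])  # b(theta0)

def b_gap(theta):
    # b(theta) - b(theta0) for the MA(2) autocovariance binding function
    t1, t2 = theta
    return [1 + t1**2 + t2**2 - target[0], t1 * (1 + t2) - target[1]]

# two distinct roots inside the identifiability triangle
for start in ([0.6, 0.2], [0.5, 0.35]):
    root = fsolve(b_gap, start)
    print(np.round(root, 4), "residual:", np.round(b_gap(root), 8))
# prints [0.6 0.2] and approximately [0.5453 0.3204]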
  • 50. asymptotic shape of posterior distribution Shape of Π_ε(· | ‖η(y) − η(z)‖ ≤ εn) for several connections between εn and the rate at which η(y^(n)) satisfies a CLT
  • 51. asymptotic shape of posterior distribution Shape of Π_ε(· | ‖η(y) − η(z)‖ ≤ εn) for several connections between εn and the rate at which η(y^(n)) satisfies a CLT Three different regimes: 1 σn = o(εn) −→ Uniform limit 2 σn ≍ εn −→ perturbed Gaussian limit 3 εn = o(σn) −→ Gaussian limit
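The three regimes are easy to visualise by simulation (a sketch on an assumed toy model, not from the slides): take yi ∼ N(θ0, 1) with η = sample mean, so b(θ) = θ and σn = n^{−1/2}, and vary εn relative to σn.

import numpy as np

rng = np.random.default_rng(2)
n, theta0, N = 100, 1.0, 2_000_000
sigma_n = 1.0 / np.sqrt(n)                    # CLT scale of the summary
s_obs = rng.normal(theta0, sigma_n)           # observed summary

theta = rng.normal(0.0, np.sqrt(10.0), N)     # prior draws
s = rng.normal(theta, sigma_n, N)             # simulated summaries

for eps in (0.5, 0.1, 0.01):                  # eps >> sigma_n, ~ sigma_n, << sigma_n
    kept = theta[np.abs(s - s_obs) <= eps]
    print(f"eps={eps}: {kept.size} accepted, sd={kept.std():.3f}")
# eps = 0.5 : nearly flat over the eps-ball, sd close to eps/sqrt(3) ~ 0.289
# eps = 0.01: close to the exact posterior, sd close to sigma_n = 0.1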
  • 52. scaling matrices Introduce a sequence of (k, k) p.d. matrices Σn(θ) such that, for all θ near θ0, c1 ‖Dn‖∗ ≤ ‖Σn(θ)‖∗ ≤ c2 ‖Dn‖∗, Dn = diag(dn(1), · · · , dn(k)), with 0 < c1, c2 < +∞ and dn(j) → +∞ for all j Possibly different convergence rates for the components of η(z) Components reordered so that dn(1) ≤ · · · ≤ dn(k), with the assumption that lim inf_n dn(j)εn = lim sup_n dn(j)εn
  • 53. new assumptions (B1) Concentration of the summary η: Σn(θ) ∈ R^{k1×k1} is o(1), Σn(θ)^{−1}{η(z) − b(θ)} ⇒ N_{k1}(0, Id), and (Σn(θ)Σn(θ0)^{−1})n → C0 (B2) b(θ) is C¹ and ‖θ − θ0‖ ≲ ‖b(θ) − b(θ0)‖ (B3) Dominated convergence and lim_n Pθ(Σn(θ)^{−1}{η(z) − b(θ)} ∈ u + B(0, un)) / Π_j un(j) = ϕ(u)
  • 54. main result Set Σn(θ) = σn D(θ) for θ ≈ θ0 and Zo = Σn(θ0)^{−1}(η(y) − b(θ0)), then under (B1) and (B2) • when εn σn^{−1} → +∞ Π_{εn}[εn^{−1}(θ − θ0) ∈ A | y] ⇒ U_{B0}(A), B0 = {x ∈ R^k ; ‖∇b(θ0)^T x‖ ≤ 1}
  • 55. main result Set Σn(θ) = σn D(θ) for θ ≈ θ0 and Zo = Σn(θ0)^{−1}(η(y) − b(θ0)), then under (B1) and (B2) • when εn σn^{−1} → c Π_{εn}[Σn(θ0)^{−1}(θ − θ0) − Zo ∈ A | y] ⇒ Qc(A), with Qc a perturbed Gaussian (Qc ≠ N)
  • 56. main result Set Σn(θ) = σn D(θ) for θ ≈ θ0 and Zo = Σn(θ0)^{−1}(η(y) − b(θ0)), then under (B1) and (B2) • when εn σn^{−1} → 0 and (B3) holds, set Vn = [∇b(θ0)]^T Σn(θ0) ∇b(θ0); then Π_{εn}[Vn^{−1}(θ − θ0) − Z̃o ∈ A | y] ⇒ Φ(A)
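Collecting the three cases in one display for quick reference (a restatement of the result above, written for k = 1 for readability, with Zo as defined there):

\[
\Pi_{\varepsilon_n}\bigl(\,\cdot \mid \|\eta(y)-\eta(z)\| \le \varepsilon_n\bigr)
\;\Longrightarrow\;
\begin{cases}
\mathcal U_{B_0} & \varepsilon_n/\sigma_n \to +\infty,
  \ \text{scale } \varepsilon_n^{-1}(\theta-\theta_0),\\
Q_c \neq \mathcal N & \varepsilon_n/\sigma_n \to c,
  \ \text{scale } \Sigma_n(\theta_0)^{-1}(\theta-\theta_0) - Z^o,\\
\Phi & \varepsilon_n/\sigma_n \to 0,
  \ \text{scale } V_n^{-1}(\theta-\theta_0) - \tilde Z^o.
\end{cases}
\]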
  • 57. intuition (?!) Set x(θ) = σn^{−1}(θ − θ0) − Zo (k = 1), then πn := Π_{εn}[εn^{−1}(θ − θ0) ∈ A | y] = ∫_{|θ−θ0|≤un} I_{x(θ)∈A} Pθ(‖σn^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σn^{−1}εn) p(θ) dθ / ∫_{|θ−θ0|≤un} Pθ(‖σn^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σn^{−1}εn) p(θ) dθ + o_p(1) • If εn/σn ≫ 1: Pθ(‖σn^{−1}(η(z) − b(θ)) + x(θ)‖ ≤ σn^{−1}εn) = 1 + o(1) iff ‖x‖ ≤ σn^{−1}εn + o(1) • If εn/σn = o(1): Pθ(‖σn^{−1}(η(z) − b(θ)) + x‖ ≤ σn^{−1}εn) = ϕ(x) σn^{−1}εn (1 + o(1))
  • 58. more comments • Surprising: U(−εn, εn) limit when εn ≫ σn
  • 59. more comments • Surprising: U(−εn, εn) limit when εn ≫ σn, but not that surprising since εn = o(1) means concentration around θ0, and σn = o(εn) implies that b(θ) − b(θ0) ≈ η(z) − η(y) • again, no need to control the approximation of f(η(y)|θ) by a Gaussian density: merely a control of the distribution • generalisation to the case where the eigenvalues of Σn are dn,1 ≤ · · · ≤ dn,k • behaviour of EABC(θ|y) consistent with Li & Fearnhead (2016)
  • 60. even more comments If (also) p(θ) is Hölder β, EABC(θ|y) − θ0 = σn Zo/∇b(θ0) [score for f(η(y)|θ)] + Σ_{j=1}^{β/2} εn^{2j} Hj(θ0, p, b) [bias from the threshold approximation] + o(σn) + O(εn^{β+1}) with • if εn² = o(σn): efficiency, EABC(θ|y) − θ0 = σn Zo/∇b(θ0) + o(σn) • the Hj(θ0, p, b) are deterministic, so we gain nothing by getting a first crude θ̂(y) = EABC(θ|y) for some η(y) and then rerunning ABC with θ̂(y)
  • 61. asymptotic behaviour of EABC[θ] When p = dim(η(y)) = d = dim(θ) and εn = o(n^{−3/10}), √n EABC[d^T(θ − θ0) | yo] ⇒ N(0, d^T {(∇bo)^T Σ^{−1} ∇bo}^{−1} d) [Li & Fearnhead (2016)] In fact, if εn^{β+1} √n = o(1), with β the Hölder-smoothness of π, EABC[(θ − θ0) | yo] = (∇bo)^{−1} Zo/√n + Σ_{j=1}^{k} hj(θ0) εn^{2j} + o_p(n^{−1/2}), 2k = β
  • 62. asymptotic behaviour of EABC[θ] When p = dim(η(y)) = d = dim(θ) and εn = o(n^{−3/10}), √n EABC[d^T(θ − θ0) | yo] ⇒ N(0, d^T {(∇bo)^T Σ^{−1} ∇bo}^{−1} d) [Li & Fearnhead (2016)] In fact, if εn^{β+1} √n = o(1), with β the Hölder-smoothness of π, EABC[(θ − θ0) | yo] = (∇bo)^{−1} Zo/√n + Σ_{j=1}^{k} hj(θ0) εn^{2j} + o_p(n^{−1/2}), 2k = β Iterating for fixed p mildly interesting: if η̃(y) = EABC[θ | yo] then EABC[θ | η̃(y)] = θ0 + (∇bo)^{−1} Zo/√n + (π′(θ0)/π(θ0)) εn² + o(εn²) [Fearnhead & Prangle, 2012]
  • 63. more asymptotic behaviour of EABC[θ] Li and Fearnhead (2016, 2017) consider that EABC[d^T(θ − θ0) | yo] is not optimal when p > d • If √n εn² = o(1) and εn √n ≠ o(1), √n[EABC(θ) − θ0] = P_{∇bo} Zo + o_p(1), where Zo = √n(η(y) − bo), P_{∇bo} Zo = ((∇bo)^T ∇bo)^{−1} (∇bo)^T Zo and Vas(P_{∇bo} Zo) ≥ {(∇bo)^T Vas(Zo)^{−1} ∇bo}^{−1} • If εn √n = o(1), √n[EABC(θ) − θ0] = {(∇bo)^T Σ^{−1} ∇bo}^{−1} (∇bo)^T Σ^{−1} Zo + o_p(1)
  • 64. impact of the dimension of η the dimension of η(·) does not impact the above result, but impacts the acceptance probability • if εn = o(σn), k1 = dim(η(y)), k = dim(θ) & k1 ≥ k, αn := Pr(‖y − z‖ ≤ εn) ≍ εn^{k1} σn^{−k1+k} • if εn ≳ σn, αn := Pr(‖y − z‖ ≤ εn) ≍ εn^k • If we choose αn • αn = o(σn^k) leads to εn = σn(αn σn^{−k})^{1/k1} = o(σn) • αn ≳ σn^k leads to εn ≍ αn^{1/k}
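A rough Monte Carlo check of the curse-of-dimensionality rate εn^{k1} σn^{−k1+k} (a sketch on an assumed toy model, not from the slides): k = 1, b(θ) = (θ, . . . , θ) ∈ R^{k1} and η(z) ∼ N(b(θ), σ² Id).

import numpy as np

rng = np.random.default_rng(3)

def accept_rate(k1, eps, sigma=0.1, N=1_000_000):
    # acceptance rate Pr(||eta(y) - eta(z)|| <= eps) for k = 1,
    # b(theta) = (theta, ..., theta) in R^k1, eta(z) ~ N(b(theta), sigma^2 Id)
    eta_y = rng.normal(0.0, sigma, k1)                  # observed, theta0 = 0
    theta = rng.normal(0.0, 1.0, N)                     # prior draws
    eta_z = rng.normal(theta[:, None], sigma, (N, k1))  # simulated summaries
    return np.mean(np.linalg.norm(eta_z - eta_y, axis=1) <= eps)

for k1 in (1, 2, 4):
    print(k1, accept_rate(k1, eps=0.05))
# decays roughly like eps * (eps/sigma)^(k1 - 1) as the dimension k1 grows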
  • 65. conclusion on ABC consistency • asymptotic description of ABC: different regimes depending on εn & σn • no point in choosing εn arbitrarily small: εn = o(σn) is enough • no asymptotic gain in iterative ABC • results obtained under weak conditions, without studying the density g(η(z)|θ)