About discretising Hamiltonians

             Christian P. Robert

Université Paris-Dauphine and CREST
http://xianblog.wordpress.com


Royal Statistical Society, October 13, 2010




        Christian P. Robert   About discretising Hamiltonians
Hamiltonian dynamics


   The dynamic on the level sets of

       H(θ, p) = −L(θ) + (1/2) log{(2π)^D |G(θ)|} + (1/2) pᵀ G(θ)⁻¹ p ,

   where p is an auxiliary vector of dimension D, is associated with
   Hamilton's equations

       θ̇ = ∂H/∂p (θ, p) ,        ṗ = −∂H/∂θ (θ, p) ,

   which preserve the Hamiltonian H(θ, p), and hence the target
   distribution, at all times t
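The (plain, non-generalised) leapfrog scheme underlying the discretisation discussed on the next slides can be sketched as follows. This is only an illustration under the simplifying assumption of a constant metric G, so that G⁻¹ can be treated as a fixed matrix; the position-dependent G(θ) of Girolami and Calderhead instead forces implicit, "generalised" leapfrog updates. The names `grad_U` and `G_inv` are hypothetical, with U standing for the potential part of H above.

```python
import numpy as np

def leapfrog(theta, p, grad_U, eps, n_steps, G_inv):
    """Explicit leapfrog for H(theta, p) = U(theta) + p^T G^{-1} p / 2,
    assuming a constant metric G (a sketch; the generalised leapfrog
    needed for position-dependent G(theta) is implicit)."""
    p = p - 0.5 * eps * grad_U(theta)          # initial half step on momentum
    for _ in range(n_steps - 1):
        theta = theta + eps * (G_inv @ p)      # full step on position
        p = p - eps * grad_U(theta)            # full step on momentum
    theta = theta + eps * (G_inv @ p)          # last full position step
    p = p - 0.5 * eps * grad_U(theta)          # final half step on momentum
    return theta, p
```

The scheme is symplectic and time-reversible, which is what makes it usable inside an exact Metropolis-corrected sampler: the Hamiltonian is only approximately conserved, but with a bounded, non-drifting error.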




Discretised Hamiltonian



   Girolami and Calderhead reproduce the Hamiltonian equations within
   the simulation domain by discretisation, via the generalised leapfrog
   (!) generator,
                                          [Subliminal French bashing?!]




Discretised Hamiltonian




   Girolami and Calderhead reproduce the Hamiltonian equations within
   the simulation domain by discretisation, via the generalised leapfrog
   (!) generator,
   but...
   the invariance and stability properties of the [background] continuous-
   time process do not carry over to the discretised version of the
   process [e.g., Langevin]



Discretised Hamiltonian (2)



      • Is it useful to so painstakingly reproduce the continuous
        behaviour?
      • Approximations (see R&R’s Langevin) can be corrected by a
        Metropolis-Hastings step, so why bother with a second level
        of approximation?
      • Discretisation induces a calibration problem: how long is long
        enough?
      • Convergence issues (for the MCMC algorithm) should not be
        impacted by inexact renderings of the continuous-time process
        in discrete time: loss of efficiency?
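The second point, a Metropolis-Hastings step absorbing the discretisation error, is the familiar Metropolis-adjusted Langevin construction. A minimal sketch, assuming user-supplied `log_pi` and `grad_log_pi` (names illustrative), in which the Euler proposal is accepted or rejected so that the discretisation bias does not reach the stationary distribution:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, tau, n_iter, rng=None):
    """Metropolis-adjusted Langevin (1-D sketch): the Euler proposal
    y = x + tau^2/2 * grad log pi(x) + tau * eps is corrected by a
    Metropolis-Hastings accept/reject step."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        mu_x = x + 0.5 * tau**2 * grad_log_pi(x)
        y = mu_x + tau * rng.standard_normal()
        mu_y = y + 0.5 * tau**2 * grad_log_pi(y)
        # log q(x | y) - log q(y | x) for the Gaussian proposal kernels
        log_q_ratio = ((y - mu_x) ** 2 - (x - mu_y) ** 2) / (2 * tau**2)
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x) + log_q_ratio:
            x = y  # accept
        chain[t] = x
    return chain
```

Whatever the step size τ, the corrected chain targets π exactly; τ only affects efficiency, which is precisely why the fidelity of the discretisation to the continuous dynamic is a matter of efficiency rather than of validity.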




An illustration

   Comparison of the fits of discretised Langevin diffusion sequences
   to the target f(x) ∝ exp(−x⁴) when using a discretisation step
   σ² = .1 and σ² = .0001, after the same number T = 10⁷ of steps.

   [Figure: histogram of the simulated sample (Density axis, 0–0.6;
   x axis, −1.5 to 1.5)]
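The chains behind this illustration can be sketched with an unadjusted discretised Langevin recursion, under the stated setting (target ∝ exp(−x⁴), steps σ² = .1 and σ² = .0001) but with a shorter run than the T = 10⁷ of the slides; without a Metropolis correction, the larger step yields a visibly biased histogram while the smaller step barely moves:

```python
import numpy as np

def ula(grad_log_pi, x0, sigma2, n_steps, rng=None):
    """Unadjusted discretised Langevin:
    x_{t+1} = x_t + sigma^2/2 * grad log pi(x_t) + sigma * eps_t.
    The chain targets a biased version of pi, with bias growing in sigma^2."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    sigma = np.sqrt(sigma2)
    out = np.empty(n_steps)
    for t in range(n_steps):
        x = x + 0.5 * sigma2 * grad_log_pi(x) + sigma * rng.standard_normal()
        out[t] = x
    return out

# target f(x) ∝ exp(-x^4), so grad log f(x) = -4 x^3
grad = lambda x: -4.0 * x**3
big = ula(grad, 0.0, 0.1, 100_000, np.random.default_rng(0))      # biased fit
small = ula(grad, 0.0, 0.0001, 100_000, np.random.default_rng(0))  # slow mixing
```

Plotting histograms of `big` and `small` against f reproduces the qualitative picture of the three panels: neither step size gives both a good fit and adequate exploration in the same budget.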




An illustration

   Comparison of the fits of discretised Langevin diffusion sequences
   to the target f(x) ∝ exp(−x⁴) when using a discretisation step
   σ² = .1 and σ² = .0001, after the same number T = 10⁷ of steps.

   [Figure: histogram of the simulated sample (Density axis, 0–0.8;
   x axis, −1.5 to 1.5)]




An illustration

   Comparison of the fits of discretised Langevin diffusion sequences
   to the target f(x) ∝ exp(−x⁴) when using a discretisation step
   σ² = .1 and σ² = .0001, after the same number T = 10⁷ of steps.

   [Figure: trace of the chain, time (0 to 1e+05) against x (−2 to 2)]




Back on Langevin

   For the Langevin diffusion, the corresponding (discretised) Langevin
   algorithm could as well use another scale η for the gradient than the
   scale τ used for the noise,

                           y = x_t + η ∇π(x_t) + τ ε_t ,

   rather than a strict Euler discretisation,

                         y = x_t + τ² ∇π(x_t)/2 + τ ε_t .

   A few experiments run in Robert and Casella (1999, Chap. 6, §6.5)
   hinted that using a scale η ≠ τ²/2 could actually lead to
   improvements
   Which [independent] framework should we adopt for
   assessing discretised diffusions?
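The decoupling of the two scales can be made concrete in a single proposal function; a sketch in which η defaults to the Euler value τ²/2 but can be set freely (written here with ∇ log π, the usual convention for Langevin proposals; the function name is illustrative):

```python
import numpy as np

def langevin_proposal(x, grad_log_pi, tau, eta=None, rng=None):
    """Langevin-type proposal with a free gradient scale eta:
    y = x + eta * grad log pi(x) + tau * eps.
    The strict Euler discretisation corresponds to eta = tau^2 / 2,
    but nothing in an MH-corrected algorithm forces that choice."""
    rng = np.random.default_rng() if rng is None else rng
    if eta is None:
        eta = tau**2 / 2  # Euler default
    return x + eta * grad_log_pi(x) + tau * rng.standard_normal()
```

Since a Metropolis-Hastings correction restores the exact target for any (η, τ), the pair becomes a free tuning parameter, which is precisely what raises the question of the right framework for comparing discretised diffusions.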
