Monte Carlo Methods
Frank Kienle
Senior Data Scientist
Blue Yonder (www.blue-yonder.com)
History
J. von Neumann and S. Ulam are commonly regarded as the founders of
the Monte Carlo method (United States Manhattan Project).
Originally devised for calculating the probability of winning a game of
solitaire.
Published in a 1949 article by Metropolis and Ulam:
'The Monte Carlo Method'
Monte Carlo Example:
How to calculate π with the help of Monte Carlo simulation:
1. Uniformly scatter points throughout the square (by simulation)
2. Count the number of points lying in the circle
3. Take the ratio of the number of points inside the circle (N1) to the overall number of points (N2)

For a circle of radius R inscribed in a square of side 2R:
A1 = πR², A2 = (2R)², so A1/A2 = π/4
Hence N1/N2 ≈ π/4, i.e. π ≈ 4 · N1/N2
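The three steps above can be sketched in Python (an illustrative implementation written for these notes; the name `estimate_pi` is chosen here, not taken from the slides):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by uniformly scattering points in the square [-1, 1]^2
    and counting how many fall inside the inscribed unit circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:      # point lies inside the circle (N1)
            inside += 1
    return 4.0 * inside / n_samples   # pi ~ 4 * N1 / N2
```

With 100,000 samples, `estimate_pi` typically lands within a few hundredths of π.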
Monte Carlo: π = 3.14159...
Example with 30 samples:
1st attempt: 3.4666
2nd attempt: 2.9333
3rd attempt: 3.2
...
The variance, σ² = (1/N) Σᵢ (Xᵢ − π)², is large (calculated with N = 1000 attempts):
σ² = 0.086, σ = 0.29
→ 68.4% of all values lie within a distance of < 0.29 of the true value
Monte Carlo: π = 3.14159...
Example with 300 samples:
1st attempt: 3.213
2nd attempt: 3.106
3rd attempt: 3.32
...
The variance, σ² = (1/N) Σᵢ (Xᵢ − π)², is now smaller (calculated with N = 1000 attempts):
σ² = 0.0087, σ = 0.093
→ 68.4% of all values lie within a distance of < 0.093 of the true value
Monte Carlo Methods
Some sample size - some number of points - and we try to infer something more general.
It's all about an application which is called: inferential statistics.
Monte Carlo Integration
How to solve an integral via the Monte Carlo method, e.g.
I = ∫₀¹ eˣ dx
As a Riemann sum: I = ∫₀¹ eˣ dx = lim_{Δx→0} Σ eˣ Δx, e.g. with 3 random samples of x and Δx = 1/3
Monte Carlo approximation: Ī = (1/N) Σ e^{xᵢ} with xᵢ ∈ [0, 1]
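The Monte Carlo approximation Ī = (1/N) Σ e^{xᵢ} is a one-liner in Python (an illustrative sketch; the true value is I = e − 1 ≈ 1.71828):

```python
import math
import random

def mc_integral_exp(n_samples, seed=0):
    """Monte Carlo estimate of I = integral of e^x over [0, 1] = e - 1:
    average e^x over N uniform draws x from [0, 1]."""
    rng = random.Random(seed)
    total = sum(math.exp(rng.random()) for _ in range(n_samples))
    return total / n_samples
```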
Monte Carlo Integration
How to solve an integral via the Monte Carlo method (Monte Carlo approximation):
I = ∫₀¹ eˣ dx = lim_{Δx→0} Σ eˣ Δx
With Δx = 1/N → 0, the Monte Carlo sum converges to the integral:
(1/N) Σ e^{xᵢ} → lim_{Δx→0} Σ eˣ Δx for N → ∞
Monte Carlo Methods
It's all about an application which is called: inferential statistics.
Some sample size - some number of points - and we try to infer something more general.
Why does it work?
A random sample tends to exhibit the same properties as the population from which it is drawn.
Law of Large Numbers
For a sequence of independent, identically distributed random variables Xᵢ, i = 1, 2, ..., N,
with expectation μ = E(X):
The arithmetic mean X̄_N = (1/N)(X₁ + ... + X_N) converges to the expected value:
X̄_N → μ for N → ∞
Strong law of large numbers: the sample average converges almost surely to the expected value:
Pr( lim_{N→∞} X̄_N = μ ) = 1
Monte Carlo Methods
It's all about an application which is called: inferential statistics.
Why does it work?
A random sample tends to exhibit the same properties as the population from which it is drawn.
Calculations:
It is all about calculating an expectation of a random variable.
Expectation
A random variable X with distribution f_X(x).
The expectation of a function g of X is:
Discrete: E(g(X)) = Σ_{x∈X} g(x) f_X(x)
Continuous: E(g(X)) = ∫_{x∈X} g(x) f_X(x) dx
Why is the expectation so useful
Solve probabilities: P(Y ∈ A) = E(I_{A}(Y))
Solve integrals: for a continuous random variable U with density f_U(u) = 1/(b−a) on [a, b],
∫_a^b q(x) dx = (b−a) ∫_a^b q(x) · 1/(b−a) dx = (b−a) E(q(U))
Why is the expectation so useful
Solve probabilities: P(Y ∈ A) = E(I_{A}(Y))
Solve integrals: ∫_a^b q(x) dx = (b−a) E(q(U))
Discrete sums: Σ_{x∈A} q(x) = (1/p) Σ_{x∈A} q(x) p = (1/p) E(q(W)),
where W takes values in A with equal probability p, so that Σ_{w∈A} p = 1
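The discrete-sum trick above translates directly to code: draw W uniformly from A and scale the sample mean by |A| = 1/p. A sketch (the helper `mc_sum` is a name invented for these notes):

```python
import random

def mc_sum(q, domain, n_samples, seed=0):
    """Estimate sum_{x in A} q(x) as (1/p) * E[q(W)], with W uniform on A
    and p = 1/|A|: |A| times the average of q over random draws from A."""
    rng = random.Random(seed)
    avg = sum(q(rng.choice(domain)) for _ in range(n_samples)) / n_samples
    return len(domain) * avg
```

For example, estimating Σ x² over x = 1..100 (true value 338350) to within a couple of percent takes a few hundred thousand draws.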
Monte Carlo convergence
Example target function: f(x) = e^{0.4(x−0.4)² − 0.08x⁴}
Convergence: uniform sampling on [-5, 5] (sequence of convergence plots for increasing sample sizes)
Monte Carlo Simulation
How good is the Monte Carlo method?
As seen, the variance of the result (the error) across different attempts can be pretty large.
The expected variance of the Monte Carlo simulation is of order
σ²_MC ∝ O(1/N)
since Var(X̄_N − μ) = Var( (1/N) Σ_{i=1}^N Xᵢ ) = (1/N) Var(X)
Rate of convergence
The standard deviation (a more intuitive number) is of order
σ_MC ∝ O(1/√N)
Every further digit of precision requires 100 times more simulations!
→ Very slow convergence to the correct result
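The O(1/√N) rate can be checked empirically with the π example from the earlier slides: growing N by a factor of 100 should shrink the root-mean-square error by roughly a factor of 10. A sketch (`rmse_of_pi_estimate` is a name chosen here):

```python
import math
import random

def rmse_of_pi_estimate(n_samples, n_attempts=200, seed=0):
    """Empirical RMSE of the pi estimator over many independent attempts."""
    rng = random.Random(seed)
    se = 0.0
    for _ in range(n_attempts):
        inside = sum(
            1 for _ in range(n_samples)
            if rng.random() ** 2 + rng.random() ** 2 <= 1.0
        )
        se += (4.0 * inside / n_samples - math.pi) ** 2
    return math.sqrt(se / n_attempts)
```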
Convergence of Monte Carlo integration: σ_MC ∝ O(1/√N)
Convergence of numerical integration (trapezoid rule): ε_T ∝ O(1/N²)
Multidimensional Integral
Monte Carlo simulation is very effective for solving multidimensional integrals.
I = ∫₀¹ ∫₀¹ ∫₀¹ eˣ eʸ e^z dx dy dz = e³ − 3e² + 3e − 1 = 5.0732
Standard deviation for different numbers of samples (x, y, z all independent):
N = 100 → σ = 0.0725
N = 1000 → σ = 0.0074
N = 10000 → σ = 0.00067
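The 3-dimensional integral above needs no extra machinery; the estimator is the same sample mean as in one dimension (an illustrative sketch, with the true value (e−1)³ ≈ 5.0732):

```python
import math
import random

def mc_integral_3d(n_samples, seed=0):
    """Estimate the triple integral of e^x e^y e^z over [0,1]^3 = (e - 1)^3
    by averaging e^(x+y+z) over independent uniform triples (x, y, z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += math.exp(rng.random() + rng.random() + rng.random())
    return total / n_samples
```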
Random sampling in the 3D grid
With only N = 100 samples the result is surprisingly good.
Integration in d Dimensions
Convergence of numerical integration (trapezoid rule): ε_T ∝ O(N^{−2/d})
Convergence of Monte Carlo integration: σ_MC ∝ O(1/√N)
The Monte Carlo error is independent of the dimension.
For d > 4, the convergence of Monte Carlo integration is better than that of classical numerical integration.
Variance reduction method
•  The main disadvantage of the (crude) Monte Carlo method is its slow convergence.
•  The standard deviation of the error only decreases as a square root of the required number of simulations.
•  A faster decrease of the variance could speed up the computations, i.e. achieving a desired accuracy requires fewer simulation runs.
Any such modification of the (crude) Monte Carlo method is called a variance reduction method.
Variance reduction by sampling
π = 3.14159...
Random sampling vs. fixed grid sampling, N = 100 samples:
Random: σ² = 0.026, σ = 0.16
Uniform grid: σ² = 0.0034, σ = 0.058
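The slide compares random sampling against a fixed grid; a closely related, randomized variant of the grid idea (stratified sampling with one jittered point per grid cell, which the next slide lists as a variance reduction method) can be compared in code. This is a sketch written for these notes, not the exact experiment behind the numbers above:

```python
import math
import random

def pi_plain(n_side, rng):
    """Plain Monte Carlo: n_side**2 uniform points in the unit square."""
    n = n_side * n_side
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

def pi_stratified(n_side, rng):
    """Stratified: one jittered point per cell of an n_side x n_side grid."""
    inside = 0
    for i in range(n_side):
        for j in range(n_side):
            x = (i + rng.random()) / n_side
            y = (j + rng.random()) / n_side
            if x * x + y * y <= 1.0:
                inside += 1
    return 4.0 * inside / (n_side * n_side)

def std_over_attempts(estimator, n_side, attempts=200, seed=0):
    """Standard deviation of an estimator across independent attempts."""
    rng = random.Random(seed)
    vals = [estimator(n_side, rng) for _ in range(attempts)]
    mean = sum(vals) / attempts
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / attempts)
```

At N = 100 (a 10×10 grid), the stratified estimator shows a clearly smaller spread than plain random sampling, mirroring the σ values quoted above.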
Variance reduction methods
Antithetic Variable
Stratified Sampling
Importance Sampling
Rejection Sampling
Markov Chain Sampling
Gibbs Sampling
Convergence: uniform sampling on [-5, 5]
Barely relevant sampling
Convergence: uniform sampling on [-20, 20]
Importance Sampling
Idea: certain values of the input random variables in a simulation have
more impact on the parameter being estimated than others.
If these "important" values are emphasized by sampling more frequently,
then the estimator variance can be reduced.
Importance Sampling
Let h(x) be a density for the random variable X. Then:
∫_{x∈A} g(x) dx = ∫_{x∈A} (g(x)/h(x)) h(x) dx = E_h( g(X)/h(X) )
Importance Sampling
Idea: certain values of the input random variables in a simulation have more impact on the parameter being estimated than others.
Estimator: Ī = (1/N) Σ_{i=1}^N g(Xᵢ)/h(Xᵢ), with the samples xᵢ drawn from h
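The estimator above is easy to demonstrate on a rare-event problem (this example, estimating the Gaussian tail probability P(Z > 3) ≈ 0.00135, is chosen here for illustration and is not from the slides). The proposal h puts all samples in the region that matters, and each sample is weighted by g/h:

```python
import math
import random

def normal_tail_is(n_samples, a=3.0, seed=0):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0, 1).
    Proposal h(x) = exp(-(x - a)) on [a, inf) puts every sample in the tail;
    each sample is weighted by phi(x) / h(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = a + rng.expovariate(1.0)                       # draw from h
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        h = math.exp(-(x - a))
        total += phi / h                                   # weight g/h
    return total / n_samples
```

Plain Monte Carlo would waste almost every sample here (only ~0.1% land in the tail); the weighted estimator reaches the same accuracy with far fewer draws.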
Variance reduction method
Implementing and adapting variance reduction methods requires quite some programming effort and mathematical consideration.
The gain in variance reduction should be judged against this additional effort.
Is it really worth using a variance reduction method in a specific situation?
Applications
Monte Carlo Simulation
Digital transmission system (block diagram):
source → source encoder → channel encoder → modulator → AWGN channel → demodulator → channel decoder → source decoder → sink
Given a received message, ideal decoding picks the codeword that maximizes its probability given the received message.
Bit Interleaved Coded Modulation
Spatial multiplexing:
§  the goal is to maximize the transmission rate
§  no rate loss by space coding; only time coding by the channel encoder
Block diagram: Source → Channel Encoder → Π (interleaver) → QAM Mapper
Channel Model
MT = MR = 4 transmit and receive antennas
Received vector: yt = H · st + nt
Quasi-static Rayleigh fading channel:
§  each entry of H modelled as an independent, complex, zero-mean Gaussian random variable
§  H remains constant for multiple time steps
Number of bits per transmission vector: N = MT · Q
ML Receiver
Received vector: yt = Ht · st + nt
Maximum likelihood: ŝ_ML = arg max_s { P(y|s) }
Optimization problem: ŝ_ML = arg min_s ||yt − Ht·s||²
Monte Carlo Method
Search for the nearest point by clever sampling:
ŝ_ML = arg min_s ||yt − Ht·s||²
Each candidate point is described by H·si
8 antennas and 1024-QAM → 2^80 points
Gibbs Sampling
A Markov chain Monte Carlo algorithm.
At each step, replace the value of one variable by a sample from its distribution conditioned on the remaining variables:
1.  Initialize {xᵢ : i = 1, ..., N}
2.  For τ = 1, ..., T:
    x₁^{τ+1} ~ P(x₁ | x₂^τ, x₃^τ, ..., x_N^τ)
    x₂^{τ+1} ~ P(x₂ | x₁^{τ+1}, x₃^τ, ..., x_N^τ)
    ...
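The update scheme above can be made concrete on a toy target where the full conditionals are known in closed form: a zero-mean bivariate Gaussian with correlation ρ, where x₁ | x₂ ~ N(ρ x₂, 1 − ρ²) and symmetrically for x₂. This example is chosen here for illustration and is not from the slides:

```python
import math
import random

def gibbs_bivariate_normal(n_steps, rho=0.8, seed=0, burn_in=1000):
    """Gibbs sampler for a zero-mean bivariate Gaussian with correlation rho.
    Each full conditional is Gaussian: x1 | x2 ~ N(rho * x2, 1 - rho^2)."""
    rng = random.Random(seed)
    x1, x2 = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    samples = []
    for t in range(n_steps + burn_in):
        x1 = rng.gauss(rho * x2, sd)   # update x1 given the current x2
        x2 = rng.gauss(rho * x1, sd)   # update x2 given the new x1
        if t >= burn_in:               # discard the burn-in phase
            samples.append((x1, x2))
    return samples
```

After burn-in, the chain's samples reproduce the target's moments: zero means and E[x₁x₂] ≈ ρ.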
Gibbs Sampling: MIMO Receiver
At each step, replace the value of a variable using the distribution conditioned on the remaining variables:
1.  Initialize with the best linear solution (MMSE solution):
    ŝ_MMSE = ( H^H H + (MT/SNR) I )^{−1} H^H yt
2.  For τ = 1, ..., T, update the log-likelihood ratio of each bit:
    λ(xᵢ)^{τ+1} = ln [ P(xᵢ = 0 | y, s^τ_{∼xᵢ}) / P(xᵢ = 1 | y, s^τ_{∼xᵢ}) ]
Summary: Monte Carlo Methods
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results.
It is all about how to draw random samples from an expected distribution.
Is the population we have available similar to the truth?
Inverse Transformation Method
Example: sampling from a Gaussian distribution (given by its probability density function) using a uniform random number generator.
§  Cumulative distribution function: F(x) = ∫_{−∞}^x f(t) dt
§  Sample: x = F^{−1}(u), with u drawn uniformly from [0, 1]
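For distributions where F⁻¹ has a closed form, the inverse transform method is two lines. A sketch using the exponential distribution (chosen here because its CDF F(x) = 1 − e^{−λx} inverts analytically; the Gaussian from the slide has no closed-form inverse CDF):

```python
import math
import random

def sample_exponential(lam, n_samples, seed=0):
    """Inverse transform sampling for Exp(lam):
    draw u ~ U(0, 1), return x = F^{-1}(u) = -ln(1 - u) / lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n_samples)]
```

The sample mean converges to the distribution's expectation 1/λ, as the law of large numbers promises.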
Hit-or-Miss Method
Problem: x = F^{−1}(u) is not always simple to calculate.
•  choose x (uniformly distributed) in an interval where f(x) ≠ 0
•  choose y (uniformly distributed) in the interval [min(f(x)), max(f(x))]
•  return x when y < f(x), else don't return a value
Rejection Sampling
Acceptance/Rejection Method
Combination of the hit-or-miss and inverse transform methods.
In the rejection sampling method, samples are drawn from a simple distribution q(z) and rejected if they fall in the grey area between the unnormalized distribution p̃(z) and the scaled distribution kq(z). The resulting samples are distributed according to p(z), which is the normalized version of p̃(z).
First, we generate a number z₀ from the distribution q(z).
Next, we generate a number u₀ from the uniform distribution over [0, kq(z₀)]. This pair of random numbers has uniform distribution under the curve of the function kq(z).
Finally, if u₀ > p̃(z₀) the sample is rejected, otherwise z₀ is retained.
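The procedure above can be sketched using the unnormalized target from the earlier convergence slides, p̃(z) = e^{0.4(z−0.4)² − 0.08z⁴}, with a uniform proposal q(z) on [−5, 5]. The envelope constant kq(z) = 4.0 is an assumption chosen here by inspection (the maximum of p̃ is roughly 3.0 near z ≈ −1.75):

```python
import math
import random

def target_unnorm(z):
    """Unnormalized target from the convergence example:
    p~(z) = exp(0.4*(z - 0.4)^2 - 0.08*z^4)."""
    return math.exp(0.4 * (z - 0.4) ** 2 - 0.08 * z ** 4)

def rejection_sample(n_samples, seed=0):
    """Rejection sampling with a uniform proposal q(z) on [-5, 5] and a
    constant envelope k*q(z) = 4.0 >= p~(z) everywhere on that interval."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        z0 = rng.uniform(-5.0, 5.0)        # draw z0 from q(z)
        u0 = rng.uniform(0.0, 4.0)         # draw u0 from U[0, k q(z0)]
        if u0 <= target_unnorm(z0):        # accept if u0 falls under p~
            samples.append(z0)
    return samples
```

The acceptance rate equals the area under p̃ divided by the envelope area; a loose envelope wastes proposals, which is why k should hug the target as tightly as possible.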
Law of Large Numbers
The sample mean X̄_n = (1/n)(X₁ + ... + X_n) converges to the expected value: X̄_n → μ for n → ∞
Weak law: for any nonzero margin ε, with a sufficiently large sample there is a very high probability that the average of the observations lies close to the expected value, that is, within the margin:
lim_{n→∞} Pr( |X̄_n − μ| > ε ) = 0
Strong law: the sample average converges almost surely to the expected value:
Pr( lim_{n→∞} X̄_n = μ ) = 1
Numerical Methods
Big Picture
Statistics splits into two schools:
Frequentist: uses frequent measurements of a data set or experiment; the trick is the sampling to extract the desired information:
•  time sampling: e.g. Nyquist theorem
•  space sampling: e.g. integrals, Monte Carlo
•  function sampling: e.g. wavelets, Fourier
Bayesian theory: takes into account all available information and answers the question of interest given the particular data set:
•  maximum noise suppression: → Wiener filter
•  minimum variance estimator: → Kalman filter (PLL)
CI, CD -Tools to integrate without manual interventionCI, CD -Tools to integrate without manual intervention
CI, CD -Tools to integrate without manual interventionajayrajaganeshkayala
 
How is Real-Time Analytics Different from Traditional OLAP?
How is Real-Time Analytics Different from Traditional OLAP?How is Real-Time Analytics Different from Traditional OLAP?
How is Real-Time Analytics Different from Traditional OLAP?sonikadigital1
 
Mapping the pubmed data under different suptopics using NLP.pptx
Mapping the pubmed data under different suptopics using NLP.pptxMapping the pubmed data under different suptopics using NLP.pptx
Mapping the pubmed data under different suptopics using NLP.pptxVenkatasubramani13
 
Elements of language learning - an analysis of how different elements of lang...
Elements of language learning - an analysis of how different elements of lang...Elements of language learning - an analysis of how different elements of lang...
Elements of language learning - an analysis of how different elements of lang...PrithaVashisht1
 
Virtuosoft SmartSync Product Introduction
Virtuosoft SmartSync Product IntroductionVirtuosoft SmartSync Product Introduction
Virtuosoft SmartSync Product Introductionsanjaymuralee1
 
AI for Sustainable Development Goals (SDGs)
AI for Sustainable Development Goals (SDGs)AI for Sustainable Development Goals (SDGs)
AI for Sustainable Development Goals (SDGs)Data & Analytics Magazin
 
Master's Thesis - Data Science - Presentation
Master's Thesis - Data Science - PresentationMaster's Thesis - Data Science - Presentation
Master's Thesis - Data Science - PresentationGiorgio Carbone
 
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptx
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptxTINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptx
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptxDwiAyuSitiHartinah
 

Recently uploaded (17)

YourView Panel Book.pptx YourView Panel Book.
YourView Panel Book.pptx YourView Panel Book.YourView Panel Book.pptx YourView Panel Book.
YourView Panel Book.pptx YourView Panel Book.
 
The Universal GTM - how we design GTM and dataLayer
The Universal GTM - how we design GTM and dataLayerThe Universal GTM - how we design GTM and dataLayer
The Universal GTM - how we design GTM and dataLayer
 
SFBA Splunk Usergroup meeting March 13, 2024
SFBA Splunk Usergroup meeting March 13, 2024SFBA Splunk Usergroup meeting March 13, 2024
SFBA Splunk Usergroup meeting March 13, 2024
 
Cash Is Still King: ATM market research '2023
Cash Is Still King: ATM market research '2023Cash Is Still King: ATM market research '2023
Cash Is Still King: ATM market research '2023
 
ChistaDATA Real-Time DATA Analytics Infrastructure
ChistaDATA Real-Time DATA Analytics InfrastructureChistaDATA Real-Time DATA Analytics Infrastructure
ChistaDATA Real-Time DATA Analytics Infrastructure
 
Persuasive E-commerce, Our Biased Brain @ Bikkeldag 2024
Persuasive E-commerce, Our Biased Brain @ Bikkeldag 2024Persuasive E-commerce, Our Biased Brain @ Bikkeldag 2024
Persuasive E-commerce, Our Biased Brain @ Bikkeldag 2024
 
5 Ds to Define Data Archiving Best Practices
5 Ds to Define Data Archiving Best Practices5 Ds to Define Data Archiving Best Practices
5 Ds to Define Data Archiving Best Practices
 
MEASURES OF DISPERSION I BSc Botany .ppt
MEASURES OF DISPERSION I BSc Botany .pptMEASURES OF DISPERSION I BSc Botany .ppt
MEASURES OF DISPERSION I BSc Botany .ppt
 
Strategic CX: A Deep Dive into Voice of the Customer Insights for Clarity
Strategic CX: A Deep Dive into Voice of the Customer Insights for ClarityStrategic CX: A Deep Dive into Voice of the Customer Insights for Clarity
Strategic CX: A Deep Dive into Voice of the Customer Insights for Clarity
 
CI, CD -Tools to integrate without manual intervention
CI, CD -Tools to integrate without manual interventionCI, CD -Tools to integrate without manual intervention
CI, CD -Tools to integrate without manual intervention
 
How is Real-Time Analytics Different from Traditional OLAP?
How is Real-Time Analytics Different from Traditional OLAP?How is Real-Time Analytics Different from Traditional OLAP?
How is Real-Time Analytics Different from Traditional OLAP?
 
Mapping the pubmed data under different suptopics using NLP.pptx
Mapping the pubmed data under different suptopics using NLP.pptxMapping the pubmed data under different suptopics using NLP.pptx
Mapping the pubmed data under different suptopics using NLP.pptx
 
Elements of language learning - an analysis of how different elements of lang...
Elements of language learning - an analysis of how different elements of lang...Elements of language learning - an analysis of how different elements of lang...
Elements of language learning - an analysis of how different elements of lang...
 
Virtuosoft SmartSync Product Introduction
Virtuosoft SmartSync Product IntroductionVirtuosoft SmartSync Product Introduction
Virtuosoft SmartSync Product Introduction
 
AI for Sustainable Development Goals (SDGs)
AI for Sustainable Development Goals (SDGs)AI for Sustainable Development Goals (SDGs)
AI for Sustainable Development Goals (SDGs)
 
Master's Thesis - Data Science - Presentation
Master's Thesis - Data Science - PresentationMaster's Thesis - Data Science - Presentation
Master's Thesis - Data Science - Presentation
 
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptx
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptxTINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptx
TINJUAN PEMROSESAN TRANSAKSI DAN ERP.pptx
 

Lecture: Monte Carlo Methods

  • 1. Monte Carlo Methods — Frank Kienle, Senior Data Scientist, Blue Yonder (www.blue-yonder.com)
  • 2. History: J. von Neumann and S. Ulam are commonly regarded as the founders of the Monte Carlo method (United States Manhattan Project). It was originally devised to calculate the probability of winning a card game of solitaire, and was published in the 1949 article 'The Monte Carlo Method' by Metropolis and Ulam.
  • 3. Monte Carlo example — how to calculate π with the help of Monte Carlo simulation: 1. Uniformly scatter points throughout the square (by simulation). 2. Count the number of points lying in the circle. 3. The ratio of the points inside ($N_1$) to the overall number of points ($N_2$) approximates the area ratio: $A_1 = \pi R^2$, $A_2 = (2R)^2$, so $\frac{A_1}{A_2} = \frac{\pi}{4}$ and $\frac{N_1}{N_2} \approx \frac{\pi}{4}$.
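The three steps above can be sketched in a few lines of Python; this is a minimal illustration (function and variable names are my own, not from the slides):

```python
import random

def estimate_pi(n_points):
    """Estimate pi by uniformly scattering points in the unit square
    and counting how many fall inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(n_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point lies inside the circle
            inside += 1
    # N1 / N2 approximates pi / 4, so multiply by 4
    return 4.0 * inside / n_points

random.seed(0)
print(estimate_pi(30))      # small sample: a crude estimate
print(estimate_pi(100000))  # larger sample: much closer to 3.14159
```

Using the quarter circle instead of the full circle changes nothing in the ratio, since both the circle area and the square area are divided by four.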
  • 4. Monte Carlo: π = 3.14159. Example with 30 samples — 1st attempt: 3.4666, 2nd attempt: 2.9333, 3rd attempt: 3.2, … The variance is large (calculated over T = 1000 attempts): $\sigma^2 = \frac{1}{T}\sum_i (X_i - \pi)^2 = 0.086$, $\sigma = 0.29$ → about 68 % of all values lie within a distance of < 0.29 of the true value.
  • 5. Monte Carlo: π = 3.14159. Example with 300 samples — 1st attempt: 3.213, 2nd attempt: 3.106, 3rd attempt: 3.32, … The variance is smaller (calculated over T = 1000 attempts): $\sigma^2 = \frac{1}{T}\sum_i (X_i - \pi)^2 = 0.0087$, $\sigma = 0.093$ → about 68 % of all values lie within a distance of < 0.093 of the true value.
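The spread quoted on the slides can be reproduced empirically: repeat the estimate many times and measure the deviation around the true value. A sketch (the helper names are my own):

```python
import random
import math

def estimate_pi(n_points):
    """One Monte Carlo estimate of pi from n_points uniform samples."""
    inside = sum(1 for _ in range(n_points)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n_points

def empirical_sigma(n_points, n_attempts=1000):
    """Repeat the estimate n_attempts times and measure the spread
    sigma = sqrt((1/T) * sum((X_i - pi)^2)) around the true value."""
    estimates = [estimate_pi(n_points) for _ in range(n_attempts)]
    var = sum((x - math.pi) ** 2 for x in estimates) / n_attempts
    return math.sqrt(var)

random.seed(0)
print(empirical_sigma(30))    # roughly 0.29, as on the slide
print(empirical_sigma(300))   # roughly 0.093: 10x samples -> ~1/sqrt(10) smaller
```

Going from 30 to 300 samples shrinks the standard deviation by about a factor of $\sqrt{10}$, foreshadowing the $1/\sqrt{N}$ convergence rate discussed later.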
  • 6. Monte Carlo methods: some sample size — some number of points — and we try to infer something more general. It is all about an application called inferential statistics.
  • 7. Monte Carlo integration — how to solve an integral via the Monte Carlo method, e.g. $I = \int_0^1 e^x\,dx$. As a Riemann sum: $I = \lim_{\Delta x \to 0} \sum e^x\,\Delta x$. Monte Carlo approximation, e.g. with 3 random samples of x ($\Delta x = \frac{1}{3}$): $\bar I = \frac{1}{N} \sum e^{x_i}$ with $x_i \in [0, 1]$.
  • 8. Monte Carlo integration — why the approximation works: as $N \to \infty$, the step width $\Delta x = \frac{1}{N} \to 0$, and the Monte Carlo average converges to the Riemann sum: $\frac{1}{N}\sum e^{x_i} \xrightarrow{N \to \infty} \lim_{\Delta x \to 0} \sum e^x\,\Delta x = \int_0^1 e^x\,dx$.
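The integral on these two slides has the closed form $e - 1 \approx 1.71828$, so the Monte Carlo estimate is easy to check. A minimal sketch (names are my own):

```python
import random
import math

def mc_integral(n_samples):
    """Monte Carlo estimate of I = integral_0^1 e^x dx:
    draw x uniformly in [0, 1] and average e^x over the samples."""
    total = sum(math.exp(random.random()) for _ in range(n_samples))
    return total / n_samples

random.seed(0)
exact = math.e - 1          # the true value, about 1.71828
print(mc_integral(3))       # crude: only 3 random samples
print(mc_integral(100000))  # close to e - 1
```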
  • 9. Monte Carlo methods: it is all about an application called inferential statistics — some sample size, some number of points, and we try to infer something more general. Why does it work? A random sample tends to exhibit the same properties as the population from which it is drawn.
  • 10. Law of large numbers: for a sequence of independent, identically distributed variables $X_i$, $i = 1, 2, \ldots, N$, with expectation $\mu = E(X)$, the arithmetic mean $\bar X_N = \frac{1}{N}(X_1 + \cdots + X_N)$ converges to the expected value: $\bar X_N \to \mu$ for $N \to \infty$. Strong law of large numbers: the sample average converges almost surely to the expected value, $\Pr\left(\lim_{N\to\infty} \bar X_N = \mu\right) = 1$.
  • 11. Monte Carlo methods: it is all about an application called inferential statistics. Why does it work? A random sample tends to exhibit the same properties as the population from which it is drawn. Calculations: it all comes down to calculating the expectation of a random variable.
  • 12. Expectation: let $X$ be a random variable with distribution $f_X(x)$. The expectation of a function $g$ of $X$ is — discrete: $E(g(X)) = \sum_{x \in \mathcal{X}} g(x) f_X(x)$; continuous: $E(g(X)) = \int_{x \in \mathcal{X}} g(x) f_X(x)\,dx$.
  • 13. Why the expectation is so useful — solve probabilities: $P(Y \in A) = E(I_{\{A\}}(Y))$. Solve integrals: $\int_a^b q(x)\,dx = (b-a)\int_a^b q(x)\,\frac{1}{b-a}\,dx = (b-a)\,E(q(U))$ for a continuous random variable $U$ with density function $f_U(u) = \frac{1}{b-a}$.
  • 14. Why the expectation is so useful — solve probabilities: $P(Y \in A) = E(I_{\{A\}}(Y))$. Solve integrals: $\int_a^b q(x)\,dx = (b-a)\,E(q(U))$. Discrete sums: $\sum_{x \in A} q(x) = \frac{1}{p}\sum_{x \in A} q(x)\,p = \frac{1}{p}\,E(q(W))$, where $W$ takes values in $A$ with equal probability $p$ and $\sum_{w \in A} p = 1$.
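The "integral as expectation" trick from these two slides translates directly into code: rewrite $\int_a^b q(x)\,dx$ as $(b-a)\,E(q(U))$ and estimate the expectation by averaging. A sketch under my own naming (not from the slides):

```python
import random
import math

def integral_as_expectation(q, a, b, n_samples):
    """Rewrite integral_a^b q(x) dx as (b - a) * E[q(U)] with
    U uniform on [a, b], then estimate the expectation by averaging."""
    total = sum(q(random.uniform(a, b)) for _ in range(n_samples))
    return (b - a) * total / n_samples

random.seed(0)
# integral_0^pi sin(x) dx = 2, a convenient check
print(integral_as_expectation(math.sin, 0.0, math.pi, 100000))
```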
  • 15. Monte Carlo convergence, illustrated for $f(x) = e^{0.4(x-0.4)^2 - 0.08 x^4}$.
  • 20. Monte Carlo simulation — how good is the Monte Carlo method? As seen, the variance of the result (error) across different attempts can be pretty large. The expected variance of the Monte Carlo simulation is of order $\sigma^2_{MC} \propto O\left(\frac{1}{N}\right)$, since $\mathrm{Var}\left(\bar X_N - \mu\right) = \mathrm{Var}\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{1}{N}\,\mathrm{Var}(X)$.
  • 21. Rate of convergence: the standard deviation (a more intuitive number) is of order $\sigma_{MC} \propto O\left(\frac{1}{\sqrt{N}}\right)$. Every further digit of precision requires 100 times more simulations! → Very slow convergence to the correct result.
  • 22. Convergence of Monte Carlo integration: $\sigma_{MC} \propto O\left(\frac{1}{\sqrt{N}}\right)$. Convergence of numerical integration (trapezoid rule): $\epsilon_T \propto O\left(\frac{1}{N^2}\right)$.
  • 23. Multidimensional integral: Monte Carlo simulation is very effective for solving multidimensional integrals, e.g. $I = \int_0^1\int_0^1\int_0^1 e^x e^y e^z\,dx\,dy\,dz = e^3 - 3e^2 + 3e - 1 = 5.0732$ with x, y, z all independent. Standard deviation for different numbers of samples: $N = 100 \to \sigma = 0.0725$; $N = 1000 \to \sigma = 0.0074$; $N = 10000 \to \sigma = 0.00067$.
  • 24. Random sampling in the 3D-Grid 24 With only N=100 samples the result is surprisingly good
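The three-dimensional integral from slide 23 is a one-line extension of the one-dimensional estimator: draw a uniform point in the unit cube and average the integrand. A sketch (names are my own):

```python
import random
import math

def mc_integral_3d(n_samples):
    """Estimate I = triple integral of e^x * e^y * e^z over the unit cube
    by averaging the integrand at uniformly random (x, y, z) points."""
    total = 0.0
    for _ in range(n_samples):
        x, y, z = random.random(), random.random(), random.random()
        total += math.exp(x) * math.exp(y) * math.exp(z)
    return total / n_samples

random.seed(0)
exact = (math.e - 1) ** 3   # = e^3 - 3e^2 + 3e - 1, about 5.0732
print(mc_integral_3d(100))
print(mc_integral_3d(10000))
```

Note that the code for three dimensions is barely longer than for one; this is exactly why Monte Carlo shines for multidimensional integrals.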
  • 25. Integration in d dimensions — convergence of numerical integration (trapezoid rule): $\epsilon_T \propto O\left(\frac{1}{N^{2/d}}\right)$. Convergence of Monte Carlo integration: $\sigma_{MC} \propto O\left(\frac{1}{\sqrt{N}}\right)$ — the error is independent of the dimension. For d > 4, Monte Carlo integration converges better than classical numerical integration.
  • 26. Variance reduction methods: the main disadvantage of the (crude) Monte Carlo method is its slow convergence — the standard deviation of the error only decreases with the square root of the required number of simulations. A faster decrease of the variance could speed up the computations, i.e. achieving a desired accuracy would require fewer simulation runs. Any such modification of the (crude) Monte Carlo method is called a variance reduction method.
  • 27. Variance reduction by sampling: π = 3.14159, random sampling vs. fixed grid sampling with N = 100 samples. Random: $\sigma^2 = 0.026$, $\sigma = 0.16$. Uniform grid: $\sigma^2 = 0.0034$, $\sigma = 0.058$.
  • 28. Variance reduction methods: antithetic variables, stratified sampling, importance sampling, rejection sampling, Markov chain sampling, Gibbs sampling.
  • 32. Importance Sampling Idea: certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced. 32
  • 33. Importance sampling: let $h(x)$ be a density for the random variable $X$; then $\int_{x \in A} g(x)\,dx = \int_{x \in A} \frac{g(x)}{h(x)}\,h(x)\,dx = E_h\left(\frac{g(X)}{h(X)}\right)$.
  • 34. Importance sampling: certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. Drawing the samples $x_i$ from $h$ gives the estimator $\bar I = \frac{1}{N}\sum_{i=1}^{N} \frac{g(x_i)}{h(x_i)}$.
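As a concrete instance of the estimator above, here is a sketch that again targets $\int_0^1 e^x\,dx$, using a proposal density of my own choosing (the slides do not specify one): $h(x) = (1+x)/1.5$ on $[0,1]$, which roughly follows the shape of $e^x$ and can be sampled by inverting its CDF in closed form.

```python
import random
import math

def importance_sample_integral(n_samples):
    """Estimate I = integral_0^1 e^x dx with importance sampling.
    Proposal density h(x) = (1 + x) / 1.5 on [0, 1] roughly matches
    the shape of e^x; samples are drawn from h via its inverse CDF."""
    total = 0.0
    for _ in range(n_samples):
        u = random.random()
        x = math.sqrt(1.0 + 3.0 * u) - 1.0    # inverse CDF of h
        h = (1.0 + x) / 1.5                   # proposal density at x
        total += math.exp(x) / h              # weight g(x) / h(x)
    return total / n_samples

random.seed(0)
print(importance_sample_integral(10000))  # close to e - 1 = 1.71828
```

Because $e^x / h(x)$ varies much less over $[0,1]$ than $e^x$ itself, the estimator has a noticeably smaller variance than the crude uniform-sampling version for the same number of samples.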
  • 35. Variance reduction method Implementing and adapting variance reduction methods requires quite some effort in programming and mathematical considerations. The gain in variance reduction should also be judged against this additional effort. Is it really worth using a variance reduction method in a specific situation? 35
  • 37. 37 Digital Transmission System AWGN channel source modulator demodulator source encoder channel encoder channel decoder Given a received message , ideal decoding picks a codeword to maximize: Monte Carlo Simulation source decoder sink
  • 38. 38 Bit Interleaved Coded Modulation Spatial multiplexing §  goal is to maximize transmission rate §  No rate loss by space coding, only time coding by channel encoder Source Channel Encoder Π QAM Mapper
  • 39. Channel model: $M_T = M_R = 4$ transmit and receive antennas. Received vector over a quasi-static Rayleigh fading channel — each entry of H modelled as an independent, complex, zero-mean Gaussian random variable; H remains constant for multiple time steps. Number of bits per transmission vector: $N = M_T \cdot Q$.
  • 40. ML receiver — received vector: $y_t = H \cdot s_t + n_t$. Maximum likelihood: $\hat s_{ML} = \arg\max_s \{P(y|s)\}$. Optimization problem: $\hat s_{ML} = \arg\min_s \|y_t - H_t s\|^2$.
  • 41. Monte Carlo method: search for the nearest point $H s_i$ by clever sampling, $\hat s_{ML} = \arg\min_s \|y_t - H_t s\|^2$. With 8 antennas and 1024-QAM, each point is described by 80 bits → $2^{80}$ candidate points.
  • 42. Gibbs sampling — a Markov chain Monte Carlo algorithm. At each step, replace the value of one variable by sampling from its distribution conditioned on the remaining variables: 1. Initialize $\{x_i : i = 1, \ldots, N\}$. 2. For $\tau = 1, \ldots, T$: $x_1^{\tau+1} \sim P(x_1 \mid x_2^{\tau}, x_3^{\tau}, \ldots, x_N^{\tau})$; $x_2^{\tau+1} \sim P(x_2 \mid x_1^{\tau+1}, x_3^{\tau}, \ldots, x_N^{\tau})$; and so on through all variables.
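The update scheme above can be sketched for the textbook case of a zero-mean bivariate Gaussian with correlation $\rho$, where both conditionals are one-dimensional Gaussians (this example and all names are my own, not from the slides):

```python
import random
import math

def gibbs_bivariate_gaussian(rho, n_steps, burn_in=500):
    """Gibbs sampler for a zero-mean, unit-variance bivariate Gaussian
    with correlation rho. Each conditional P(x1 | x2) and P(x2 | x1)
    is a 1-D Gaussian with mean rho*other and std sqrt(1 - rho^2)."""
    x1, x2 = 0.0, 0.0
    cond_std = math.sqrt(1.0 - rho * rho)
    samples = []
    for step in range(n_steps):
        x1 = random.gauss(rho * x2, cond_std)  # x1 ~ P(x1 | x2)
        x2 = random.gauss(rho * x1, cond_std)  # x2 ~ P(x2 | new x1)
        if step >= burn_in:                    # discard the burn-in phase
            samples.append((x1, x2))
    return samples

random.seed(0)
samples = gibbs_bivariate_gaussian(rho=0.8, n_steps=20000)
mean_x1 = sum(s[0] for s in samples) / len(samples)
corr = sum(s[0] * s[1] for s in samples) / len(samples)
print(mean_x1)  # near 0
print(corr)     # near 0.8
```

Note the second update conditions on the freshly drawn $x_1^{\tau+1}$, which is what makes the chain a valid Gibbs sampler.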
  • 43. Gibbs sampling: MIMO receiver — at each step, replace the value of one variable using the distribution conditioned on the remaining variables. 1. Initialize with the best linear solution (MMSE solution): $\hat s_{MMSE} = \left(H^H H + \frac{M_T}{SNR} I\right)^{-1} H^H y_t$. 2. For $\tau = 1, \ldots, T$: $(x_i)^{\tau+1} = \ln \frac{P(x_i = 0 \mid y,\, s^{\tau}_{\sim x_i})}{P(x_i = 1 \mid y,\, s^{\tau}_{\sim x_i})}$.
  • 44. Summary: Monte Carlo Methods Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. It is all about how to draw random samples from an expected distribution Is the population we have available similar to the truth? 44
  • 45. Inverse transformation method: for a Gaussian distribution with probability density function $f$, the cumulative distribution function is $F(x) = \int_{-\infty}^{x} f(t)\,dt$. A uniform random number generator produces $u \in [0, 1]$, and $x = F^{-1}(u)$ is then distributed according to $f$.
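The slide uses a Gaussian, whose inverse CDF has no closed form; to keep the sketch explicit, this example (my own, not from the slides) uses the exponential distribution, where the inversion is a one-liner:

```python
import random
import math

def sample_exponential(lam, n_samples):
    """Inverse transform sampling: for the exponential distribution
    F(x) = 1 - exp(-lam * x), the inverse is x = -ln(1 - u) / lam,
    so a uniform u in [0, 1) maps to an exponentially distributed x."""
    return [-math.log(1.0 - random.random()) / lam
            for _ in range(n_samples)]

random.seed(0)
xs = sample_exponential(lam=2.0, n_samples=100000)
print(sum(xs) / len(xs))  # near the true mean 1 / lam = 0.5
```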
  • 46. Hit-or-miss method: $x = F^{-1}(u)$ is not always simple to calculate. Instead: choose x (uniformly distributed) in the interval where $f(x) \neq 0$; choose y (uniformly distributed) in the interval $[\min(f(x)), \max(f(x))]$; return x when $y < f(x)$, else do not return a value.
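The three hit-or-miss steps above can be sketched directly; the target density here (a triangle on $[-1, 1]$) and the helper names are my own illustration:

```python
import random

def hit_or_miss_sample(f, x_min, x_max, f_max, n_samples):
    """Hit-or-miss sampling from a (possibly unnormalized) density f:
    draw x uniformly on [x_min, x_max] and y uniformly on [0, f_max];
    keep x only when y < f(x), otherwise discard the pair."""
    samples = []
    while len(samples) < n_samples:
        x = random.uniform(x_min, x_max)
        y = random.uniform(0.0, f_max)
        if y < f(x):
            samples.append(x)
    return samples

random.seed(0)
# sample from the triangular shape f(x) = 1 - |x| on [-1, 1]
xs = hit_or_miss_sample(lambda x: 1.0 - abs(x), -1.0, 1.0, 1.0, 50000)
print(sum(xs) / len(xs))  # near 0, the mean of the triangular density
```

Only the ratio of accepted to proposed points matters, so $f$ need not be normalized; the price is that proposals landing above the curve are wasted work.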
  • 48. Summary: Monte Carlo Methods Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. It is all about how to draw random samples from an expected distribution Is the population we have available similar to the truth? 48
  • 49.
  • 50. Acceptance/rejection method — a combination of the hit-or-miss and inverse transform methods. In rejection sampling, samples are drawn from a simple distribution $q(z)$ and rejected if they fall in the grey area between the unnormalized distribution $\tilde p(z)$ and the scaled distribution $k q(z)$; the resulting samples are distributed according to $p(z)$, the normalized version of $\tilde p(z)$. First, generate a number $z_0$ from the distribution $q(z)$. Next, generate a number $u_0$ from the uniform distribution over $[0, k q(z_0)]$; this pair of random numbers has uniform distribution under the curve of the function $k q(z)$. Finally, if $u_0 > \tilde p(z_0)$ the sample is rejected, otherwise $z_0$ is retained.
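The procedure on this slide can be sketched generically; the example target (an unnormalized half-Gaussian) and the proposal are my own choices for illustration:

```python
import random
import math

def rejection_sample(p_tilde, q_sample, q_density, k, n_samples):
    """Rejection sampling: draw z0 from the proposal q, then u0 uniform
    on [0, k*q(z0)]; accept z0 when u0 <= p_tilde(z0). Accepted points
    are distributed according to the normalized version of p_tilde.
    Requires the envelope condition k*q(z) >= p_tilde(z) everywhere."""
    samples = []
    while len(samples) < n_samples:
        z0 = q_sample()
        u0 = random.uniform(0.0, k * q_density(z0))
        if u0 <= p_tilde(z0):
            samples.append(z0)
    return samples

random.seed(0)
# target: unnormalized half-Gaussian p_tilde(z) = exp(-z^2 / 2) on [0, 5]
# proposal: uniform q(z) = 1/5 on [0, 5]; k = 5 gives k*q(z) = 1 >= p_tilde
p_tilde = lambda z: math.exp(-0.5 * z * z)
zs = rejection_sample(p_tilde,
                      lambda: random.uniform(0.0, 5.0),
                      lambda z: 0.2, 5.0, 20000)
print(sum(zs) / len(zs))  # near sqrt(2/pi) = 0.798, the half-Gaussian mean
```

The efficiency is the ratio of the area under $\tilde p$ to the area under $k q$, so a proposal that hugs the target closely wastes far fewer draws.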
  • 51. Law of large numbers: $\bar X_n = \frac{1}{n}(X_1 + \cdots + X_n)$ converges to the expected value, $\bar X_n \to \mu$ for $n \to \infty$. Weak law: for any nonzero margin ε specified, with a sufficiently large sample there is a very high probability that the average of the observations will be close to the expected value, that is, within the margin: $\lim_{n\to\infty} \Pr\left(|\bar X_n - \mu| > \varepsilon\right) = 0$. Strong law: the sample average converges almost surely to the expected value: $\Pr\left(\lim_{n\to\infty} \bar X_n = \mu\right) = 1$.
  • 53. Big picture — Statistics. Frequentist: uses frequent measurements of a data set or experiment; the trick is the sampling used to extract the desired information. Time sampling → e.g. Nyquist theorem; space sampling → e.g. integrals, Monte Carlo; function sampling → e.g. wavelets, Fourier. Bayesian theory: takes into account all available information and answers the question of interest given the particular data set. Maximum noise suppression → Wiener filter; minimum variance estimator → Kalman filter (PLL).