Computer vision:
models, learning and inference

Chapter 3
Probability distributions

Please send errata to s.prince@cs.ucl.ac.uk
Why model these complicated quantities?
Because we need probability distributions over model parameters as well as
over data and world state. Hence, some of the distributions describe the
parameters of the others (see the example on the next slide).




Why model these complicated quantities?
Because we need probability distributions over model parameters as well as
over data and world state. Hence, some of the distributions describe the
parameters of the others:

Example: the data are modelled by a normal distribution,

    Pr(x) = Norm_x[μ, σ²]

Parameters modelled by a normal inverse gamma distribution,

    Pr(μ, σ²) = NormInvGam_{μ,σ²}[α, β, γ, δ]

whose normal part models the mean μ and whose inverse gamma part models the
variance σ².
Bernoulli Distribution

    Pr(y=0) = 1 - λ,   Pr(y=1) = λ

or

    Pr(y) = λ^y (1-λ)^(1-y)

For short we write:

    Pr(y) = Bern_y[λ]

Bernoulli distribution describes the situation where there are only two possible
outcomes, y=0/y=1 or failure/success.

Takes a single parameter λ ∈ [0,1].
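As an illustration, here is a minimal Python sketch (not from the slides) that evaluates the Bernoulli probability Pr(y) = λ^y (1-λ)^(1-y) and draws samples; the value λ = 0.7 is an arbitrary assumption.

```python
import numpy as np

def bernoulli_pmf(y, lam):
    """Pr(y) = lam**y * (1 - lam)**(1 - y) for y in {0, 1}."""
    return lam**y * (1.0 - lam)**(1 - y)

lam = 0.7                                 # assumed parameter, lam in [0, 1]
print(bernoulli_pmf(0, lam))              # 0.3
print(bernoulli_pmf(1, lam))              # 0.7

rng = np.random.default_rng(0)
samples = (rng.random(10_000) < lam).astype(int)   # y ~ Bern_y[lam]
print(samples.mean())                     # close to lam
```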
Beta Distribution
Defined over data λ ∈ [0,1] (i.e., the parameter of the Bernoulli)

    Pr(λ) = Γ(α+β) / (Γ(α)Γ(β)) · λ^(α-1) (1-λ)^(β-1)

For short we write:

    Pr(λ) = Beta_λ[α, β]

•   Two parameters α, β, both > 0
•   Mean depends on relative values: E[λ] = α / (α+β)
•   Concentration depends on magnitude of (α+β)
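A minimal sketch of the Beta density using scipy.stats.beta; the parameter values α = 4, β = 2 are assumptions chosen only to illustrate the mean formula E[λ] = α/(α+β).

```python
import numpy as np
from scipy.stats import beta

a, b = 4.0, 2.0                           # assumed parameters, both > 0
lam = np.linspace(0.0, 1.0, 5)
print(beta(a, b).pdf(lam))                # density on a coarse grid over [0, 1]
print(beta(a, b).mean(), a / (a + b))     # both 0.666..., i.e. E[lam] = a / (a + b)
```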
Categorical Distribution

    Pr(y=k) = λ_k

or can think of the data as a vector with all elements zero except the kth,
e.g. y = [0,0,0,1,0], giving

    Pr(y) = ∏_k λ_k^(y_k)

For short we write:

    Pr(y) = Cat_y[λ]

Categorical distribution describes the situation where there are K possible
outcomes y = 1 … K.
Takes K parameters λ_1 … λ_K where λ_k ≥ 0 and Σ_k λ_k = 1.
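A minimal sketch of the categorical distribution with an assumed parameter vector λ; it checks the one-hot form Pr(y) = ∏_k λ_k^(y_k) and verifies the probabilities by sampling.

```python
import numpy as np

lam = np.array([0.1, 0.2, 0.3, 0.4])      # assumed parameters, non-negative, sum to 1
one_hot = np.array([0, 0, 1, 0])          # y = 3 encoded as a one-hot vector
print(np.prod(lam ** one_hot))            # 0.3, i.e. Pr(y = 3) = lam_3

rng = np.random.default_rng(0)
y = rng.choice(len(lam), size=10_000, p=lam)          # samples in {0, ..., K-1}
print(np.bincount(y, minlength=len(lam)) / y.size)    # empirical frequencies ≈ lam
```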
Dirichlet Distribution
Defined over K values λ_1 … λ_K where λ_k ≥ 0 and Σ_k λ_k = 1

    Pr(λ_1 … λ_K) = Γ(Σ_k α_k) / ∏_k Γ(α_k) · ∏_k λ_k^(α_k - 1)

Or for short:

    Pr(λ_1 … λ_K) = Dir_{λ_1…λ_K}[α_1 … α_K]

Has K parameters α_k > 0.
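A minimal sketch of the Dirichlet with assumed parameters α = (2, 5, 3): it evaluates the density at one point on the simplex and checks that the sample mean approaches α / Σ_k α_k.

```python
import numpy as np
from scipy.stats import dirichlet

alpha = np.array([2.0, 5.0, 3.0])               # assumed parameters, all > 0
lam = np.array([0.2, 0.5, 0.3])                 # a point on the probability simplex
print(dirichlet(alpha).pdf(lam))                # density at that point

rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha, size=10_000)     # each row is a valid lam_1..lam_K
print(samples.mean(axis=0), alpha / alpha.sum())  # sample mean ≈ alpha / sum(alpha)
```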
Univariate Normal Distribution

    Pr(x) = 1/√(2πσ²) · exp(-(x-μ)² / (2σ²))

For short we write:

    Pr(x) = Norm_x[μ, σ²]

Univariate normal distribution describes a single continuous variable.

Takes 2 parameters: μ and σ² > 0.
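A minimal sketch with assumed μ = 1 and σ² = 4, comparing the formula above against scipy.stats.norm (which is parameterized by the standard deviation, not the variance).

```python
import numpy as np
from scipy.stats import norm

mu, var = 1.0, 4.0                              # assumed mean and variance, var > 0
x = 2.5
manual = np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(manual, norm(loc=mu, scale=np.sqrt(var)).pdf(x))   # identical values
```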
Normal Inverse Gamma Distribution
Defined on 2 variables, μ and σ² > 0

    Pr(μ, σ²) = √γ / (σ√(2π)) · β^α/Γ(α) · (1/σ²)^(α+1) · exp(-(2β + γ(δ-μ)²) / (2σ²))

or for short

    Pr(μ, σ²) = NormInvGam_{μ,σ²}[α, β, γ, δ]

Four parameters: α, β, γ > 0 and δ.
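The density above factors into an inverse gamma over σ² and a normal over μ given σ², which gives an easy sampler. The sketch below uses assumed parameter values and this factorization; it is an illustration, not code from the book.

```python
import numpy as np
from scipy.stats import invgamma, norm

alpha, beta_, gamma_, delta = 3.0, 2.0, 1.5, 0.0   # assumed parameters
rng = np.random.default_rng(0)

# sigma^2 ~ InvGamma(alpha, beta); mu | sigma^2 ~ Norm(delta, sigma^2 / gamma)
var = invgamma(a=alpha, scale=beta_).rvs(size=5, random_state=rng)
mu = norm(loc=delta, scale=np.sqrt(var / gamma_)).rvs(random_state=rng)
print(np.column_stack([mu, var]))                  # joint (mu, sigma^2) samples
```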
Multivariate Normal Distribution

    Pr(x) = 1 / ((2π)^(D/2) |Σ|^(1/2)) · exp(-0.5 (x-μ)ᵀ Σ⁻¹ (x-μ))

For short we write:

    Pr(x) = Norm_x[μ, Σ]

Multivariate normal distribution describes multiple continuous
variables. Takes 2 parameters
    •    a vector μ containing the mean position,
    •    a symmetric “positive definite” covariance matrix Σ.

Positive definite: zᵀΣz is positive for any real z (other than z = 0).
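A minimal sketch with an assumed 2D mean and covariance: it checks positive definiteness with a Cholesky factorization and compares the density formula above with scipy.stats.multivariate_normal.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])                       # assumed mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                  # assumed symmetric covariance
np.linalg.cholesky(Sigma)                       # raises LinAlgError if not positive definite

x = np.array([0.3, 0.7])
D = len(mu)
diff = x - mu
manual = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) \
         / np.sqrt((2 * np.pi)**D * np.linalg.det(Sigma))
print(manual, multivariate_normal(mean=mu, cov=Sigma).pdf(x))   # identical values
```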
Types of covariance
Covariance matrix has three forms, termed spherical, diagonal and full:

    spherical:  Σ = σ²I (a single shared variance)
    diagonal:   Σ = diag(σ₁², …, σ_D²) (one variance per dimension, no correlations)
    full:       any symmetric positive definite matrix Σ (variances and covariances)
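A minimal sketch (assumed numbers) constructing the three forms and confirming that each is a valid covariance, i.e. all eigenvalues positive.

```python
import numpy as np

D = 3
spherical = 1.5 * np.eye(D)                     # sigma^2 * I: one shared variance
diagonal = np.diag([0.5, 2.0, 1.0])             # one variance per dimension
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])
full = A @ A.T + 0.1 * np.eye(D)                # generic symmetric positive definite matrix

for name, S in [("spherical", spherical), ("diagonal", diagonal), ("full", full)]:
    print(name, np.all(np.linalg.eigvalsh(S) > 0))   # True for all three
```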
Normal Inverse Wishart
Defined on two variables: a mean vector μ and a symmetric positive definite
matrix, Σ.

    Pr(μ, Σ) ∝ |Σ|^(-(α+D+2)/2) · exp(-0.5 [Tr(ΨΣ⁻¹) + γ(μ-δ)ᵀΣ⁻¹(μ-δ)])

or for short:

    Pr(μ, Σ) = NorIWis_{μ,Σ}[α, Ψ, γ, δ]

Has four parameters

    •   a positive scalar, α
    •   a positive definite matrix, Ψ
    •   a positive scalar, γ
    •   a vector, δ
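One way to draw joint samples (μ, Σ) like those on the next slide is to sample Σ first and then μ given Σ. The sketch below assumes that α plays the role of the inverse Wishart degrees of freedom and Ψ its scale matrix in scipy's parameterization; treat this correspondence with the book's exact parameterization as an assumption.

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

D = 2
alpha, gamma_ = 5.0, 2.0          # assumed positive scalars (alpha >= D for invwishart)
Psi = np.eye(D)                   # assumed positive definite scale matrix
delta = np.zeros(D)               # assumed mean vector

rng = np.random.default_rng(0)
Sigma = invwishart(df=alpha, scale=Psi).rvs(random_state=rng)                    # Sigma sample
mu = multivariate_normal(mean=delta, cov=Sigma / gamma_).rvs(random_state=rng)   # mu | Sigma
print(mu)
print(Sigma)
```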
Samples from Normal Inverse Wishart
Conjugate Distributions
The pairs of distributions discussed have a special
  relationship: they are conjugate distributions

• Beta is conjugate to Bernoulli
• Dirichlet is conjugate to categorical
• Normal inverse gamma is conjugate to univariate
  normal
• Normal inverse Wishart is conjugate to
  multivariate normal


Conjugate Distributions
When we take the product of a distribution and its conjugate, the
result has the same form as the conjugate.

For example, consider the case where we multiply a Bernoulli by a Beta:

    Bern_y[λ] · Beta_λ[α, β]

then

    Bern_y[λ] · Beta_λ[α, β] = κ(y, α, β) · Beta_λ[y+α, 1-y+β]

where κ(y, α, β) is a constant and Beta_λ[y+α, 1-y+β] is a new Beta distribution.
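A quick numerical check of this relation for assumed values y = 1, α = 4, β = 2: the ratio of the product to the new Beta density is the same constant κ at every λ.

```python
import numpy as np
from scipy.stats import beta

y, a, b = 1, 4.0, 2.0                            # assumed observation and Beta parameters
lam = np.linspace(0.01, 0.99, 5)

product = (lam**y * (1 - lam)**(1 - y)) * beta(a, b).pdf(lam)   # Bern_y[lam] * Beta_lam[a, b]
new_beta = beta(y + a, 1 - y + b).pdf(lam)                      # Beta_lam[y + a, 1 - y + b]
print(product / new_beta)                        # constant kappa, identical for every lam
```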
Example proof
When we take the product of a distribution and its conjugate, the
result has the same form as the conjugate:

    Bern_y[λ] · Beta_λ[α, β]
      = λ^y (1-λ)^(1-y) · Γ(α+β)/(Γ(α)Γ(β)) · λ^(α-1) (1-λ)^(β-1)
      = Γ(α+β)/(Γ(α)Γ(β)) · λ^(y+α-1) (1-λ)^(1-y+β-1)
      = [Γ(α+β) Γ(y+α) Γ(1-y+β)] / [Γ(α) Γ(β) Γ(α+β+1)] · Γ(α+β+1)/(Γ(y+α)Γ(1-y+β)) · λ^(y+α-1) (1-λ)^(1-y+β-1)
      = κ(y, α, β) · Beta_λ[y+α, 1-y+β]
Bayes’ Rule Terminology

    Pr(y|x) = Pr(x|y) Pr(y) / Pr(x)

•  Likelihood Pr(x|y) – propensity for observing a certain value of x
   given a certain value of y
•  Prior Pr(y) – what we know about y before seeing x
•  Posterior Pr(y|x) – what we know about y after seeing x
•  Evidence Pr(x) – a constant to ensure that the left-hand side is a
   valid distribution
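A minimal discrete sketch with made-up numbers showing the four terms: multiplying the likelihood by the prior and dividing by the evidence produces a posterior that sums to one.

```python
import numpy as np

prior = np.array([0.6, 0.4])               # Pr(y) for two world states
likelihood = np.array([0.2, 0.7])          # Pr(x | y) for the observed x
evidence = np.sum(likelihood * prior)      # Pr(x), a constant
posterior = likelihood * prior / evidence  # Pr(y | x)
print(posterior, posterior.sum())          # valid distribution: sums to 1
```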
Importance of the Conjugate Relation 1

• Learning parameters:

    Pr(θ | x_{1…I}) = Pr(x_{1…I} | θ) Pr(θ) / Pr(x_{1…I})

1. Choose a prior Pr(θ) that is conjugate to the likelihood.

2. This implies that the posterior must have the same form as the
   conjugate prior distribution.

3. The posterior must be a distribution, which implies that the evidence
   must equal the constant from the conjugate relation.
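A minimal sketch of this recipe for the Bernoulli/Beta pair, with assumed data and an assumed Beta(1, 1) prior: because the prior is conjugate, the posterior over λ is again a Beta whose parameters are updated by the observed counts.

```python
import numpy as np

y = np.array([1, 1, 0, 1, 1, 0, 1])        # assumed binary observations
alpha0, beta0 = 1.0, 1.0                   # assumed conjugate prior Beta_lam[alpha0, beta0]

alpha_post = alpha0 + np.sum(y == 1)       # posterior: Beta_lam[alpha0 + successes,
beta_post = beta0 + np.sum(y == 0)         #                     beta0 + failures]
print(alpha_post, beta_post)               # 6.0 3.0
print(alpha_post / (alpha_post + beta_post))   # posterior mean of lam
```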
Importance of the Conjugate Relation 2

• Marginalizing over parameters:

    Pr(x) = ∫ Pr(x|θ) Pr(θ) dθ

1. The prior Pr(θ) is chosen to be conjugate to the other term.

2. The integral becomes easy – the product becomes a
   constant times a distribution.

Integral of constant times probability distribution
= constant times integral of probability distribution
= constant × 1 = constant
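A minimal check for the Bernoulli/Beta case with assumed values: the marginal Pr(y) computed by numerical integration matches the closed-form constant κ from the conjugate relation.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad
from scipy.special import gamma as G

y, a, b = 1, 4.0, 2.0                      # assumed observation and prior parameters
numeric, _ = quad(lambda lam: lam**y * (1 - lam)**(1 - y) * beta(a, b).pdf(lam), 0, 1)
kappa = (G(a + b) * G(y + a) * G(1 - y + b)) / (G(a) * G(b) * G(a + b + 1))
print(numeric, kappa)                      # both equal Pr(y) ≈ 0.667
```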
Conclusions
• Presented four distributions which model useful quantities

• Presented four other distributions which model the
  parameters of the first four

• They are paired in a special way – the second set is
  conjugate to the first

• In the following material we’ll see that this relationship is
  very useful



