5. Image Formation
• Geometry - determines where in the image
plane the projection of a point in a scene
will be located
• Physics of Light - determines the brightness
of a point in the image plane as a function
of scene illumination and surface properties
7. Image Formation
• The LINE OF SIGHT of a point in the
scene is the line that passes through the
point of interest and the centre of projection
• The above model leads to image inversion; to avoid this, assume the image plane is in front of the centre of projection
9. Perspective Projection
• (x’,y’) may be found by computing the co-ordinates of the intersection of the line of sight, passing through (x,y,z), and the image plane
• By two sets of similar triangles :
x’=fx/z and y’=fy/z
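A quick numeric check of the projection equations (a minimal Python sketch; the function name `project` is illustrative):

```python
# Minimal sketch: perspective projection of a scene point (x, y, z) onto the
# image plane at focal length f, with the plane in front of the centre of
# projection so the image is not inverted.
def project(x, y, z, f=1.0):
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    return f * x / z, f * y / z

# A point twice as far away projects to coordinates half the size.
print(project(2.0, 1.0, 4.0))   # (0.5, 0.25)
print(project(2.0, 1.0, 8.0))   # (0.25, 0.125)
```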
10. Image Irradiance (Brightness)
• The irradiance of a point in the image plane E(x’,y’) is determined by the amount of energy radiated by the corresponding scene point in the direction of the image point:
E(x’,y’) = L(x,y,z)
• Two factors determine radiance emitted by a surface patch
I) Illumination falling on scene patch
- determined by the patch’s position relative to the distribution of light
sources
II) Fraction of incident illumination reflected by patch
- determined by optical properties of the patch
11. Image Irradiance
• (θi, φi) is the direction of the point source of scene illumination
• (θe, φe) is the direction of the energy emitted from the surface patch
• E(θi, φi) is the energy arriving at a patch
• L(θe, φe) is the energy radiated from the patch
12. Image Irradiance
• The relationship between radiance and
irradiance may be defined as follows :
L(θe, φe) = f(θi, φi; θe, φe) E(θi, φi)
where f(θi, φi; θe, φe) is the bidirectional
reflectance distribution function (BRDF)
• BRDF - depends on optical properties of the
surface
13. Types of Reflectance
• Lambertian Reflectance
• Specular Reflectance
• Hybrid Reflectance
• Electron Microscopy Reflectance (not covered)
14. Lambertian Reflectance
• Appears equally bright from all viewing
directions for a fixed illumination
distribution
• Does not absorb any incident illumination
• BRDF is a constant (1/π)
15. Lambertian Reflectance -
Point Source
• Perceived brightness illuminated by a
distant point source
L(θe, φe) = (I0/π) cos θs -- Lambert Cosine Rule
• this means a surface patch captures the most illumination if it is oriented so that the surface normal of the patch points in the direction of illumination
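A minimal sketch of the cosine rule, assuming θs is the angle between the unit surface normal n and the unit source direction s (so cos θs = n · s):

```python
import numpy as np

# Lambert cosine rule: radiance of an ideal Lambertian patch under a distant
# point source of strength I0 is (I0/pi) * cos(theta_s), clamped at zero for
# patches facing away from the source.
def lambertian_radiance(n, s, I0=1.0):
    n = n / np.linalg.norm(n)           # unit surface normal
    s = s / np.linalg.norm(s)           # unit source direction
    return (I0 / np.pi) * max(np.dot(n, s), 0.0)

# Brightest when the normal points at the source, darker as the patch tilts.
print(lambertian_radiance(np.array([0., 0., 1.]), np.array([0., 0., 1.])))  # ~0.318
print(lambertian_radiance(np.array([0., 0., 1.]), np.array([1., 0., 1.])))  # ~0.225
```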
16. Lambertian Reflectance -
Uniform Source
• Perceived brightness illuminated by a
uniform source
L(θe, φe) = I0
• this means that no matter how the surface patch is oriented, it receives the same amount of illumination
17. Specular Reflectance
• Reflects all incident illumination in a direction that has the same angle with respect to the surface normal, but on the opposite side of the surface normal
• light in the direction (θi, φi) is reflected to (θe, φe) = (θi, φi + π)
• BRDF is δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi)
18. Specular Reflectance
• Perceived brightness is
L(θe, φe) = I0(θe, φe − π)
• this means the incoming rays of light are reflected from the surface like a perfect mirror
19. Hybrid Reflectance
• Mixture of Lambertian and Specular
reflectance
• BRDF is η/π + (1−η) δ(θe − θi) δ(φe − φi − π) / (sin θi cos θi)
• where η is the mixture ratio of the two reflectance functions
20. Surface Orientation
• If (x,y,z) is a point on a surface and (x,y) is
the same point on the image plane, with
distance z from the camera (depth), then a
nearby point is
(x+δx, y+δy)
• the change in depth can be expressed as
δz = (∂z/∂x)δx + (∂z/∂y)δy
21. Surface Orientation
• The sizes of the partial derivatives of z with respect to x and y are related to the orientation of the surface patch.
• The gradient at (x,y,z) is the vector (p,q), which is given by
p = ∂z/∂x, q = ∂z/∂y
22. Reflectance Map
• For a given light source distribution and a given surface material, the reflectance of all surface orientations p and q can be catalogued or computed to yield the reflectance map R(p,q), which leads to the image irradiance equation
E(x,y) = R(p,q)
23. Reflectance Map
• i.e., the irradiance at a point in the image plane is equal to the reflectance map value for surface orientation p and q at the corresponding point in the scene
• in other words, given a change in surface orientation, the reflectance map allows you to calculate the change in image intensity.
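For a distant point source with gradient-space direction (ps, qs), the Lambertian reflectance map has a well-known closed form (Horn); the sketch below tabulates it over a grid of orientations, as the slide describes. Names are illustrative:

```python
import numpy as np

# Lambertian reflectance map in gradient space: radiance of a patch with
# gradient (p, q) under a distant source with gradient-space direction
# (ps, qs). Shadowed orientations are clamped at zero.
def reflectance_map(p, q, ps, qs):
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return np.maximum(num, 0.0) / den

# Tabulate R(p, q) over a grid of candidate orientations.
p, q = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
R = reflectance_map(p, q, ps=0.5, qs=0.5)
```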
24. Shape from Shading
• the opposite problem: we know E(x,y) = R(p,q), so we need to calculate p and q for each point (x,y) in the image
• Two unknowns, one equation; therefore, a constraint must be applied.
25. Shape from Shading
• Smoothness constraint
• Objects are made of smooth surfaces, which depart from smoothness only along their edges
• may be expressed as
e_s = ∫∫ (p_x² + p_y² + q_x² + q_y²) dx dy
where subscripts denote partial derivatives
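One illustrative iteration in the spirit of the Ikeuchi–Horn regularized scheme: each update pulls (p,q) toward the neighbourhood average (the smoothness term) and toward agreement with the observed brightness. The step constant and boundary handling are simplifications, not the papers' exact formulation:

```python
import numpy as np

def sfs_step(p, q, E, R, dRdp, dRdq, lam=1.0):
    """One Jacobi-style update of the regularized shape-from-shading
    functional (brightness error plus the smoothness term e_s above).
    p, q   : current gradient estimates (2D arrays)
    E      : observed image irradiance (2D array)
    R, dRdp, dRdq : reflectance map and its partials (callables on arrays)
    """
    # Neighbourhood average -- this is where the smoothness term acts.
    # (Periodic boundaries via np.roll, for brevity only.)
    mean4 = lambda a: 0.25 * (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                              + np.roll(a, 1, 1) + np.roll(a, -1, 1))
    pb, qb = mean4(p), mean4(q)
    err = E - R(pb, qb)
    # Brightness error pulls (p, q) toward the image; the average pulls
    # toward smoothness. The step constant varies between formulations.
    return pb + (1.0 / lam) * err * dRdp(pb, qb), \
           qb + (1.0 / lam) * err * dRdq(pb, qb)
```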
27. Photometric Stereo
• Assume a scene with Lambertian reflectance
• Each point (x,y) will have brightness E(x,y)
and possible orientations p and q for a given
light source
• if the same surface is illuminated by a point source in a different location, the reflectance map will be different
28. Photometric Stereo
• Using this method, surface orientation may
be uniquely identified
• In reality, not all incident light is radiated from a surface; this is accounted for by adding an albedo factor (ρ) into the image irradiance equation:
• E(x,y) = ρR(p,q)
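A minimal sketch of the classic three-light Lambertian case: stacking the three irradiance equations E_i = ρ(n · s_i) gives a 3×3 linear system per pixel whose solution yields both albedo and surface normal. Names are illustrative:

```python
import numpy as np

def photometric_stereo(E, S):
    """E: (3, H, W) image stack; S: (3, 3) matrix whose rows are the unit
    source directions. Each pixel satisfies E_i = rho * (n . s_i), i.e.
    E = S g with g = rho * n, so g is recovered by inverting S."""
    g = np.einsum('ij,jhw->ihw', np.linalg.inv(S), E)   # scaled normals
    rho = np.linalg.norm(g, axis=0)                     # albedo per pixel
    n = g / np.maximum(rho, 1e-8)                       # unit normals
    return n, rho
```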
30. Determining Surface Orientations of
Specular Surfaces by Using the
Photometric Stereo Method
Katsushi Ikeuchi
Ministry of International Trade and Industry, Japan
31. Introduction
• Photometric stereo may be used to
determine the surface orientation of a patch
• for diffuse surfaces, point source
illumination is used
• for specular surfaces, a distributed light
source is required
32. Image Radiance
• For a specular surface and an extended light
source :
Le(θe, φe) = Li(θe, φe + π)
• Relationship between reflected radiance and image irradiance:
Ep = {(π/4)(d/fp)² cos⁴ α} Le
fp = focal length
d = diameter of aperture
α = off-axis angle
33. Image Radiance
• from this a brightness distribution may be
derived
• and from that an inverse transformation
35. Off-Line Job
• Light Source : Three linear lamps, placed
symmetrically 120 degrees apart
• Lookup Table: Could use a 3D table, but observed triples often contain errors
• Instead use a 2D lookup table - each element has two alternatives
• Each alternative consists of a surface orientation and an intensity
37. On-Line Job
• Normalization is required to cancel the
effect of albedo
• Brightness calibration is also required
• The correct alternative of the two solutions is found by comparing the distance between the actual third image brightness and the corresponding element of the lookup table
38. Results
• Works well in a constrained environment
• has problems if the surface is not smooth
39. Extracting the Shape and Roughness of
Specular Lobe Objects Using Four Light
Photometric Stereo
Fredric Solomon
Katsushi Ikeuchi
Carnegie Mellon
42. Introduction
• The Structured Highlight approach yields 3D shape from point sources and images
• ‘Highlight’ - the reflection of a light source on a specular surface
43. Introduction
• Angle of Incidence = Angle of Reflection
• A fixed camera will image a reflected light ray (highlight) only if it is positioned and oriented correctly
44. Introduction
• Once a highlight is observed, if the direction of the
incident ray is known, the orientation of the surface
element may be found
• A spherical array of fixed point light sources is used to
ensure all positions and directions are scanned
45. Lambertian Reflectance
• The reflectance relationship for a
Lambertian model of image E(x,y)
E(x,y) = A (n . s)
n = surface normal (unit vector)
s = source direction (unit vector)
A = constant related to illumination intensity and
surface albedo
46. Hybrid Reflectance
• The reflectance relationship for a hybrid model of image E(x,y)
E(x,y) = A k (n · s) + (A/2)(1−k) [2(n · z)(n · s) − (z · s)]^n
z = viewing direction (unit vector)
k = relative weight of the specular and Lambertian components
n = sharpness of the specularity (here the exponent, not the surface normal)
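A hedged sketch of this hybrid model (a Lambertian term plus a Phong-style specular lobe). The sharpness exponent is written `m` in code to avoid clashing with the surface normal `n`; the bracketed term is the cosine of the angle between the viewing direction and the mirror reflection of the source direction:

```python
import numpy as np

# Hybrid intensity: Lambertian term plus a Phong-style specular lobe. The
# exponent m (the slide's sharpness "n") narrows the highlight; the lobe
# term 2(n.z)(n.s) - (z.s) is the cosine of the angle between the viewing
# direction z and the mirror reflection of the source direction s.
def hybrid_intensity(n, s, z, A=1.0, k=0.7, m=20):
    unit = lambda v: v / np.linalg.norm(v)
    n, s, z = unit(n), unit(s), unit(z)
    diffuse = max(np.dot(n, s), 0.0)
    lobe = max(2.0 * np.dot(n, z) * np.dot(n, s) - np.dot(z, s), 0.0)
    return A * k * diffuse + (A / 2.0) * (1.0 - k) * lobe ** m
```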
47. Structured Highlight Inspection
• Using the above equation, the slope of any point may be
calculated
• Surface orientation may be determined by the sources that
produce local peaks in the reflectance map.
49. Perspective Camera Model
• All reflected rays pass through a focal point
• this model provides very accurate
measurements, but requires extensive
calibration procedures
50. Orthographic Projection Model
• the focal point is assumed to be at an infinite distance from the camera and all the reflected rays are perpendicular to the image plane
51. “Fixed” Camera Model
• all rays are emitted from a single point on the reflectance plane and all surface normal estimates are computed relative to that reference point
52. Camera Models - Accuracy
• Perspective Camera Model
– Most accurate
• “Fixed” Camera Model
– Next most accurate
• Orthographic Projection Model
– Most sensitive to error
53. SHINY - Structured Highlight
INspection sYstem
• Highlights are extracted from images and tabulated
• Surface normals are computed based on lookup tables derived from calibration experiments
• Reconstruction is done using interpolation
followed by smoothing
54. Stereo Highlight Algorithm
• The assumption of a distant source to
uniquely identify the angle of incidence of
illumination is an approximation
• To improve this, a second camera is used
with stereo matching for greater accuracy
55. Results
• With two cameras need to resolve stereo
matching ambiguities, therefore, need
further constraints
• This technique is slow (1988)
57. Stereo in the Presence of
Specular Reflection
Dinkar N. Bhat
Shree K. Nayar
Columbia University
58. Introduction
• Stereo is a direct method of obtaining the
3D structure of the visual world
• But, it suffers from the fact that the
correspondence problem is inherently
underconstrained
59. Correspondence Problem
• the most common
constraint is that
intensities of
corresponding points in
images are identical
• The assumption is not valid for specular surfaces (since intensity is dependent on viewing direction)
60. Specular Reflection
• When a specular surface is smooth, the
distribution of the specular intensity is
concentrated
• As the surface becomes rougher, the peak value of the specular intensity decreases and the distribution widens
62. Implications for Stereo
• The total image intensity at any point is the sum of the diffuse and specular intensity components
• Since the change in the diffuse component is very small relative to the change in the specular component, it follows that the overall change in intensity is approximately equal to the specular intensity difference
Idiff ≈ |Is1 − Is2|
63. Implications for Stereo
• This approximation will assist in
determining an optimal binocular stereo
configuration, which minimises specular
correspondence problems but maximises
precision in depth estimation
65. Vergence
• When cameras are oriented such that their optical axes intersect at a point in space, this point is referred to as the vergence point
• Depth accuracy is directly proportional to
vergence (…which conflicts with the
requirement to minimize intensity
differences)
66. Binocular Stereo
• Determining the maximum acceptable
vergence can be formulated as a constrained
optimization problem
fobj = v1 . v2
c1: Idiff < a specified threshold
c2: the cameras lie in the X-Z plane
67. Experiments
• Two uniformly rough cylindrical objects were wrapped, one in gift wrap and the other in xerox paper
• Similar patterns were marked on both
69. Trinocular Stereo
• Required in environments which are less
structured and where surface roughness
cannot be estimated
• Allows intensity difference at a point to be
constrained to a threshold in at least one of
the stereo pairs
73. Introduction
• This research extends a diffuse multi-image shape-from-shading technique to perform in the specular domain
74. Viewing Geometry
• Assumes an ideal camera with focal length f
viewing a surface
• The camera focal point is located at P and O
is a point on the surface
• From Snell’s Law an equation can be derived relating the object’s position in space to its image on the image plane
76. Image Synthesis
• the specular surface stereo method requires a
model that predicts accurately the irradiance at
each pixel
• Use Idealized Image Synthesis Model
• this will allow us to determine that the irradiance is directly proportional to the product of the radiance and the reflection coefficient
77. Specular Surface Stereo
• Starting at a known elevation, an iterative process is used to determine shape
• Two-step process: determine orientation, then propagate
78. Surface Orientation
• Identify the pixels that view the surface
point (by calculating an inverse of a
projective transform)
• A value of (p,q) is found such that the predicted irradiance E(p,q) matches the observed values
79. Surface Propagation
• if a point is known on a surface, it is
possible to recover shape by propagation
• If (x,y) has elevation h and gradient (p,q)
then (x+δx, y+δy) has elevation
h’ = h +pδx +qδy
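A minimal sketch of propagation along one image row (δy = 0), assuming the gradient p is known at each pixel; names are illustrative:

```python
import numpy as np

# Propagate elevation along one image row: h' = h + p*dx at each step.
def propagate_row(h0, p_row, dx=1.0):
    h = np.empty(len(p_row) + 1)
    h[0] = h0                        # seed elevation (known point)
    for i, p in enumerate(p_row):
        h[i + 1] = h[i] + p * dx
    return h

print(propagate_row(0.0, [0.1, 0.1, -0.2]))   # [0.  0.1  0.2  0. ]
```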
80. Obtaining Seed Values
• if there are surface features with diffuse properties (e.g. scratches or rough spots), use feature matching methods
• if surface is smooth, use a laser range finder
81. Results
• Tests were done on four _simulated_ images to determine the feasibility of the method; the results were 99% accurate
• Using this method in the ‘real world’ would
require more constraints
83. A Theory of Specular Surface
Geometry
Michael Oren
Shree K. Nayar
Columbia University
84. Introduction
• Develops a 2D profile recovery technique and generalizes it to 3D surface recovery
• Two major issues associated with
specular surfaces
– detection
– shape recovery
85. Introduction
• Specular surfaces introduce a new kind of
image feature, a virtual feature
• A virtual feature is the reflection by a
specular surface of another scene point
which travels over the surface when the
observer moves.
86. Curve Representation
• Cartesian co-ordinates result in complex
equations describing specular motion
• The Legendre transform is used instead to represent the curve as an envelope of tangents
88. 2D Caustics
• When a camera moves around an object the virtual
features move on the specular surface, producing a
family of reflected rays (the envelope defined by
this family is called the caustic)
• On the other hand, the caustic of a real feature is
one single point (the actual position of the feature
in the scene where all the reflected rays intersect)
90. 2D Caustics
• Using this, feature
classification is simply a
matter of computing a
caustic and determining
whether it is a point or a
curve
• Features are tracked from one frame to the next using a sum of squared differences (SSD) correlation operator
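A minimal sketch of the SSD step: slide the feature's neighbourhood from one frame over a search window in the next and keep the offset with the smallest sum of squared differences. Names are illustrative:

```python
import numpy as np

# SSD matching: test the template at every offset of the search window and
# keep the position with the smallest sum of squared differences.
def ssd_match(template, window):
    template = template.astype(float)
    window = window.astype(float)
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(window.shape[0] - th + 1):
        for c in range(window.shape[1] - tw + 1):
            score = np.sum((window[r:r + th, c:c + tw] - template) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos, best
```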
91. 2D Profile Recovery
• The camera is moved in the plane of the
profile and the features are tracked
• An equation may be derived relating the
caustic to the surface profile, allowing the
recovery of the 2D profile from the image.
92. 3D Surface Recovery
• The 3D camera motion problem will result
in an arbitrary space curve rather than a
family of curves as in the 2D case
• The 3D problem cannot be reduced to a finite number of 2D profile problems
93. 3D Surface Recovery
• The concept behind the derivation of the 3D caustic curve is to decompose the caustic point position at any given instant into two orthogonal components
• As the camera moves along the specular object, a virtual feature travels along the 3D profile on the object’s surface.
• It is possible to develop an equation which relates
the trajectory of the virtual feature to the surface
profile
94. Results
• The 2D testing involved tracking two
features on two different specular surfaces,
in both experiments the profile was
accurately estimated
• The 3D testing involved tracking a
highlight on a specular surface, the
recovered curve is in strong agreement with
the actual surface
96. Epipolar Geometry
• two cameras are
displaced from each
other by a baseline
distance
• Object point X forms
two distinct image
points x and x’
97. Epipolar Geometry
• Assume images are formed in front of the camera to avoid the inversion problem
• point (x’, y’) in the image plane from a scene point (x, y, z) may be calculated as
x’ = fx/z and y’ = fy/z
• the displacement between the locations of the image points is called the disparity
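For a rectified pair with baseline b, the projection equations give disparity d = fb/z, so depth follows directly from disparity; a minimal sketch (function name assumed):

```python
# For a rectified stereo pair with baseline b, a point at depth z projects to
# x' = f*x/z in one image and x'' = f*(x - b)/z in the other, so the
# disparity is d = x' - x'' = f*b/z and depth follows as z = f*b/d.
def depth_from_disparity(d, f, b):
    if d <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return f * b / d

print(depth_from_disparity(d=2.0, f=1.0, b=10.0))   # z = 5.0
```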
98. Epipolar Geometry
• the plane passing through
the two camera centres
and the object point is
called the epipolar plane
• the intersection of the
image plane and the
epipolar plane is called the
epipolar line
100. Introduction
• The technique of Epipolar-Plane Image
Analysis involves obtaining depth estimates
for a point by taking a large number of
images
• This gives a large baseline and higher
accuracy
• It also minimises the correspondence
problem
101. Epipolar-Plane Image Analysis
• this technique imposes the following
constraints
– the camera is moving along a linear path
– it acquires images at equal spacing as it is
moved
– the camera’s view is orthogonal to the direction
of travel
102. Epipolar-Plane Image Analysis
• the traditional notion of epipolar lines is
generalized to an epipolar plane
• using this, plus the fact that the camera is always moving along a linear path, we may conclude that a given scene feature will always be restricted to a given epipolar plane
104. The Spatiotemporal Surface
• As images are collected, they are stacked up
into a spatiotemporal surface
• as each new image is obtained, its spatial and temporal edge contours are constructed using a 3D Laplacian of a 3D Gaussian
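A minimal sketch of that filtering step using SciPy's n-dimensional Laplacian-of-Gaussian: the image stack is treated as a 3D (t, y, x) volume and edge contours lie at zero crossings of the response. The array here is a random stand-in:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Treat the image stack as a 3D spatiotemporal volume (t, y, x) and apply a
# 3D Laplacian-of-Gaussian; edge contours lie at zero crossings of the result.
volume = np.random.rand(16, 64, 64)          # stand-in for a real image stack
response = gaussian_laplace(volume, sigma=2.0)
# Zero crossings along the temporal axis (spatial axes are analogous).
zero_cross_t = np.signbit(response[:-1]) != np.signbit(response[1:])
```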
106. 3D Surface Estimation and Model
Construction From Specular Motion
in Image Sequences
Jiang Yu Zheng
Norihiro Abe
Kyushu Institute of Technology
Yoshihiro Fukagawa
Torey Corporation
107. Introduction
• This technique reconstructs 3D models of
complex objects with specular surfaces
• The process involves rotating the object
under inspection
109. Projected Highlights
• An extended light source projects highlight stripes onto the object
• The stripes gradually shift across the object surface and pass most points once
• The specular motion is captured in epipolar-
plane images
110. Feature tracking
• We know how to detect corners and edges of surface patterns
• The motion type of highlights in the EPI can be used to determine five categories of shape
– convex corner
– convex
– planar
– concave
– concave corner
111. EPI-Plane Images
• During the rotation, highlights will split and merge, appear and disappear, etc.
114. Visual Inspection System for the
Classification of Solder Joints
Tae-Hyeon Kim
Young Shik Moon
Sung Han Park
Hanyang University
Kwang-Jin Yoon
LG Industrial Systems
115. Introduction
• Uses three layers of ring shaped LED
arrays, with different illumination angles
• Solder joints are segmented and classified using either their 2D features or their 3D features
117. Preprocessing
• Objective is to identify and segment the soldered regions
• Solder is isolated both vertically and horizontally
118. Feature Extraction - 2D
• Average gray level value of I1 and I3:
X1 = (1/N) Σ Ik(x,y)
• Percentage of highlights of I1 and I2:
X2 = (1/N) Σ U(x,y) × 100
U(x,y) = thresholded image of I1
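A minimal sketch of these two features, assuming the segmented solder region is available as a grayscale array and using a fixed highlight threshold (both assumptions):

```python
import numpy as np

# X1: average gray level; X2: percentage of highlight pixels above a threshold.
def average_gray(I):
    return I.mean()                  # X1 = (1/N) * sum I(x, y)

def percent_highlights(I, threshold=200):
    U = I > threshold                # U(x, y): thresholded image
    return U.mean() * 100.0          # X2 = (1/N) * sum U(x, y) * 100

I1 = np.random.randint(0, 256, size=(32, 32))   # stand-in solder region
print(average_gray(I1), percent_highlights(I1))
```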
119. Feature Extraction - 3D
• Shape recovery is done using a hybrid reflectance
model for all samples not in the confidence
interval
• A reflectance map is built up representing
intensity values as a function of orientation for
each illumination angle
• For each point, three intensity values are
recovered and from these and the reflectance map,
the orientation is estimated
120. Classification -2D
• Uses a 3-layer backpropagation neural network
• Four input nodes for four features
• Five hidden layer nodes
• Four output nodes for four solder types
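A hedged sketch of a network with that 4-5-4 shape (forward pass only; the training loop and the paper's exact activation choices are omitted, and the weights are random stand-ins):

```python
import numpy as np

# Forward pass of a 4-5-4 backpropagation network: 4 feature inputs,
# 5 hidden nodes, 4 solder-type outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(4, 5)), np.zeros(4)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x):
    h = sigmoid(W1 @ x + b1)         # hidden layer (5 nodes)
    return sigmoid(W2 @ h + b2)      # output scores (4 classes)

print(forward(np.array([0.5, 0.1, 0.9, 0.3])))
```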