Use of Specularities and Motion in
the Extraction of Surface Shape
Damian Gordon
Introduction
• Introduction - Image Geometry
• Photometric Stereo (1)
• Structured Highlights (1)
• Stereo Techniques (2)
• Motion Techniques (3)
• Solder Joint Inspection (1)
Specular Surface
• Angle of Incidence = Angle of Reflection
Image Geometry
________________________
Image Formation
• Geometry - determines where in the image
plane the projection of a point in a scene
will be located
• Physics of Light - determines the brightness
of a point in the image plane as a function
of scene illumination and surface properties
Image Formation
Image Formation
• The LINE OF SIGHT of a point in the
scene is the line that passes through the
point of interest and the centre of projection
• The above model leads to image inversion,
to avoid this, assume the image plane is in
front of the centre of projection
Image Formation
Perspective Projection
• (x’,y’) may be found by computing the co-ordinates
of the intersection of the image plane with the line
of sight passing through (x,y,z)
• By two sets of similar triangles :
x’=fx/z and y’=fy/z
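The two similar-triangle relations can be checked numerically; a minimal sketch (Python, not part of the original slides):

```python
def project(x, y, z, f):
    """Perspective projection of scene point (x, y, z) onto the image
    plane at focal length f, via two sets of similar triangles."""
    if z == 0:
        raise ValueError("point lies in the plane of the centre of projection")
    return f * x / z, f * y / z

# A point twice as far away projects to half the image coordinates:
assert project(2.0, 4.0, 10.0, 1.0) == (0.2, 0.4)
assert project(2.0, 4.0, 20.0, 1.0) == (0.1, 0.2)
```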
Image Irradiance (Brightness)
• The irradiance of a point in the image plane E(x’,y’) is
determined by the amount of energy radiated by the
corresponding scene point in the direction of the image point :
E(x’,y’) = L(x,y,z)
• Two factors determine radiance emitted by a surface patch
I) Illumination falling on scene patch
- determined by the patch’s position relative to the distribution of light
sources
II) Fraction of incident illumination reflected by patch
- determined by optical properties of the patch
Image Irradiance
• (θiφi) is the direction of
the point source of scene
illumination
• (θeφe) is the direction of
the energy emitted from
the surface patch
• E(θiφi) is the energy
arriving at a patch
• L(θeφe) is the energy
radiated from the patch
Image Irradiance
• The relationship between radiance and
irradiance may be defined as follows :
L(θeφe) = f(θiφiθeφe) E(θiφi)
where f(θiφiθeφe) is the bidirectional
reflectance distribution function (BRDF)
• BRDF - depends on optical properties of the
surface
Types of Reflectance
• Lambertian Reflectance
• Specular Reflectance
• Hybrid Reflectance
• Electron Microscopy Reflectance (not covered)
Lambertian Reflectance
• Appears equally bright from all viewing
directions for a fixed illumination
distribution
• Does not absorb any incident illumination
• BRDF is a constant (1/π)
Lambertian Reflectance -
Point Source
• Perceived brightness illuminated by a
distant point source
L(θeφe) = Ι0/π Cos θs -- Lambert Cosine Rule
• this means, a surface patch captures the
most illumination if it is orientated so that
the surface normal of the patch points in the
direction of illumination
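Lambert's cosine rule is easy to sketch in code (illustrative only; θs is the angle between the patch normal and the source direction):

```python
import math

def lambertian_radiance(i0, theta_s):
    """Radiance of a Lambertian patch lit by a distant point source of
    intensity i0, where theta_s is the angle between the surface normal
    and the source direction (Lambert's cosine rule)."""
    return (i0 / math.pi) * max(math.cos(theta_s), 0.0)  # no light from behind

# Brightest when the normal points at the source, zero at grazing:
assert lambertian_radiance(math.pi, 0.0) == 1.0
assert abs(lambertian_radiance(math.pi, math.pi / 2)) < 1e-12
```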
Lambertian Reflectance -
Uniform Source
• Perceived brightness illuminated by a
uniform source
L(θeφe) = Ι0
• this means that no matter how the surface
patch is oriented, it receives the same
amount of illumination
Specular Reflectance
• Reflects all incident illumination in a
direction that makes the same angle with
the surface normal, but on the opposite
side of the surface normal
• light from the direction (θi,φi) is reflected to
(θe,φe) = (θi,φi+π)
• BRDF is δ(θe−θi) δ(φe−φi−π) / (sin θi cos θi)
Specular Reflectance
• Perceived brightness is
L(θe,φe) = Ι0(θe,φe−π)
• this means the incoming rays of light are
reflected from the surface like a perfect
mirror
Hybrid Reflectance
• Mixture of Lambertian and Specular
reflectance
• BRDF is η/π + (1−η)
∗ δ(θe−θi) δ(φe−φi−π) / (sin θi cos θi)
• where η is the mixture ratio of the two
reflectance functions
Surface Orientation
• If (x,y,z) is a point on a surface and (x,y) is
the same point on the image plane, with
distance z from the camera (depth), then a
nearby point is
(x+δx, y+δy)
• the change in depth can be expressed as
δz = (∂z/∂x)δx + (∂z/∂y)δy
Surface Orientation
• The sizes of the partial derivatives of z with
respect to x and y are related to the
orientation of the surface patch.
• The gradient at (x,y,z) is the vector (p,q),
which is given by
p = (∂z/∂x), q = (∂z/∂y)
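Treating depth as a function z(x, y), the gradient (p, q) can be estimated numerically; a small sketch (illustrative):

```python
def surface_gradient(z, x, y, d=1e-5):
    """Numerically estimate the gradient (p, q) = (dz/dx, dz/dy) of a
    depth function z(x, y) by central differences."""
    p = (z(x + d, y) - z(x - d, y)) / (2 * d)
    q = (z(x, y + d) - z(x, y - d)) / (2 * d)
    return p, q

# For the plane z = 2x + 3y the gradient is (2, 3) everywhere:
p, q = surface_gradient(lambda x, y: 2 * x + 3 * y, 1.0, 1.0)
assert abs(p - 2.0) < 1e-6 and abs(q - 3.0) < 1e-6
```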
Reflectance Map
• For a given light source distribution and a
given surface material, the reflectance of all
surface orientations of p and q can be
catalogued or computed to yield the
reflectance map R(p,q) which leads to the
image irradiance equation
E(x,y) = R(p,q)
Reflectance Map
• i.e., that the irradiance at a point in the
image plane is equal to the reflectance map
value for surface orientation p and q in the
corresponding point in a scene
• in other words, given a change in surface
orientation, the reflectance map allows you
to calculate a change in image intensity.
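For a Lambertian surface under a distant point source, the reflectance map has a well-known closed form in gradient space; a sketch (illustrative, not from the slides; (ps, qs) is the source direction expressed in gradient space):

```python
import math

def lambertian_R(p, q, ps, qs):
    """Reflectance map R(p, q) for a Lambertian surface: the cosine of
    the angle between the surface normal (-p, -q, 1) and the source
    direction (-ps, -qs, 1), clipped at zero for self-shadowed patches."""
    num = 1 + p * ps + q * qs
    den = math.sqrt(1 + p * p + q * q) * math.sqrt(1 + ps * ps + qs * qs)
    return max(num / den, 0.0)

# A patch facing the source sits at the map's maximum, R = 1:
assert abs(lambertian_R(0.3, -0.2, 0.3, -0.2) - 1.0) < 1e-12
assert lambertian_R(0.0, 0.0, 0.0, 0.0) == 1.0
```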
Shape from Shading
• the inverse problem: we know E(x,y) =
R(p,q), and we need to calculate p and q for
each point (x,y) in the image
• Two unknowns, one equation; therefore, a
constraint must be applied.
Shape from Shading
• Smoothness constraint
• Objects are made of a smooth surface,
which depart from smoothness only along
their edges
• may be expressed as
e_s = ∫∫ ((p_x² + p_y²) + (q_x² + q_y²)) dx dy
Shape from Shading
Photometric Stereo
• Assume a scene with Lambertian
reflectance
• Each point (x,y) will have brightness E(x,y)
and possible orientations p and q for a given
light source
• if the same surface is illuminated by a point
source in a different location, the
reflectance map will be different
Photometric Stereo
• Using this method, surface orientation may
be uniquely identified
• In reality, not all incident light is radiated
from a surface, this is accounted for by
adding an albedo factor (ρ) into the image
irradiance eqn.
• E(x,y) = ρR(p,q)
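With three known, non-coplanar source directions, the irradiance equation at a pixel becomes a linear system in g = ρn; a minimal sketch (the source directions below are illustrative):

```python
import numpy as np

def photometric_stereo(S, E):
    """Recover albedo rho and unit surface normal n at one pixel from
    three intensities E under known source directions S (3x3 matrix of
    unit row vectors), using E = rho * (S @ n)."""
    g = np.linalg.solve(np.asarray(S, float), np.asarray(E, float))
    rho = np.linalg.norm(g)
    return rho, g / rho

# A patch with normal (0, 0, 1) and albedo 0.5 under three tilted sources:
S = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
n_true = np.array([0.0, 0.0, 1.0])
E = 0.5 * S @ n_true
rho, n = photometric_stereo(S, E)
assert abs(rho - 0.5) < 1e-12
assert np.allclose(n, n_true)
```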
Photometric Stereo
________________________
Determining Surface Orientations of
Specular Surfaces by Using the
Photometric Stereo Method
Katsushi Ikeuchi
Ministry of International Trade and Industry, Japan
Introduction
• Photometric stereo may be used to
determine the surface orientation of a patch
• for diffuse surfaces, point source
illumination is used
• for specular surfaces, a distributed light
source is required
Image Radiance
• For a specular surface and an extended light
source :
Le(θeφe) = Li(θeφe+π)
• Relationship between reflected radiance and
image irradiance
• Ep = {(π/4)(d/fp)² cos⁴α} Le
fp = focal length
d = diameter of aperture
α = off-axis angle
Image Radiance
• from this a brightness distribution may be
derived
• and from that an inverse transformation
System Implementation
• Two Stage Process
– Off-Line Job
– On-Line Job
Off-Line Job
• Light Source : Three linear lamps, placed
symmetrically 120 degrees apart
• Lookup Table : Could use 3D table, but observed
triples often contain errors
• Instead use 2D lookup Table - each element has
two alternatives
• Each alternative consists of a surface orientation
and an intensity
Off-Line Job
On-Line Job
• Normalization is required to cancel the
effect of albedo
• Brightness calibration is required also
• The correct alternative of the two solutions
is found by comparing the distance between
the actual third image brightness and the
element of the matrix
Results
• Works well in a constrained environment
• has problems if the surface is not smooth
Extracting the Shape and Roughness of
Specular Lobe Objects Using Four Light
Photometric Stereo
Fredric Solomon
Katsushi Ikeuchi
Carnegie Mellon
Structured Highlights
__________________________
Structured Highlight Inspection of
Specular Surfaces
Arthur C. Sanderson
Lee E. Weiss
Shree K. Nayar
Carnegie Mellon
Introduction
• Structured Highlight approach yields 3D
images from point sources and images
• ‘Highlight’ - light source reflected on a
specular surface
Introduction
• Angle of Incidence = Angle of Reflection
• A fixed camera will image a reflected light ray
(highlight) only if it is positioned and
oriented correctly
Introduction
• Once a highlight is observed, if the direction of the
incident ray is known, the orientation of the surface
element may be found
• A spherical array of fixed point light sources is used to
ensure all positions and directions are scanned
Lambertian Reflectance
• The reflectance relationship for a
Lambertian model of image E(x,y)
E(x,y) = A (n . s)
n = surface normal (unit vector)
s = source direction (unit vector)
A = constant related to illumination intensity and
surface albedo
Hybrid Reflectance
• The reflectance relationship for a hybrid
model of image E(x,y)
E(x,y) = A { k (n . s) + ((1-k)/2) .
[2(n . z)(n . s)-(z . s)]^n }
z = viewing direction (unit vector)
k = relative weight of specular and Lambertian
components
n = sharpness of the specularity
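A sketch of evaluating this hybrid model at one pixel (illustrative; the slide leaves the placement of the sharpness exponent implicit, so here the bracketed specular term is raised to it):

```python
import numpy as np

def hybrid_intensity(n, s, z, A, k, sharp):
    """Image intensity under the hybrid model on the slide: a weighted
    Lambertian term plus a specular lobe whose width is controlled by
    the sharpness exponent."""
    n, s, z = (np.asarray(v, float) for v in (n, s, z))
    diffuse = n @ s
    lobe = 2 * (n @ z) * (n @ s) - (z @ s)   # cosine of the mirror angle
    return A * (k * diffuse + 0.5 * (1 - k) * max(lobe, 0.0) ** sharp)

# k = 1 reduces to the pure Lambertian model E = A (n . s):
n = np.array([0.0, 0.0, 1.0])
s = np.array([0.0, 0.6, 0.8])
assert abs(hybrid_intensity(n, s, n, A=2.0, k=1.0, sharp=10) - 1.6) < 1e-12
```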
Structured Highlight Inspection
• Using the above equation, the slope of any point may be
calculated
• Surface orientation may be determined by the sources that
produce local peaks in the reflectance map.
Camera Models
• Perspective Camera Model
• Orthographic Projection Model
• “Fixed” Camera Model
Perspective Camera Model
• All reflected rays pass through a focal point
• this model provides very accurate
measurements, but requires extensive
calibration procedures
Orthographic Projection Model
• the focal point is assumed to be an infinite
distance from the camera and all the
reflected rays are perpendicular to the
image plane
“Fixed” Camera Model
• all rays are emitted from a single point on
the reflectance plane and all surface normal
estimates are computed to that reference
point
Camera Models - Accuracy
• Perspective Camera Model
– Most accurate
• “Fixed” Camera Model
– Next most accurate
• Orthographic Projection Model
– Most sensitive to error
SHINY - Structured Highlight
INspection sYstem
• Highlights are extracted from images and
tabulated
• Surface normals are computed based on
lookup tables derived from calibration
experiments
• Reconstruction is done using interpolation
followed by smoothing
Stereo Highlight Algorithm
• The assumption of a distant source to
uniquely identify the angle of incidence of
illumination is an approximation
• To improve this, a second camera is used
with stereo matching for greater accuracy
Results
• With two cameras need to resolve stereo
matching ambiguities, therefore, need
further constraints
• This technique is slow (1988)
Stereo Techniques
________________________
Stereo in the Presence of
Specular Reflection
Dinkar N. Bhat
Shree K. Nayar
Columbia University
Introduction
• Stereo is a direct method of obtaining the
3D structure of the visual world
• But, it suffers from the fact that the
correspondence problem is inherently
underconstrained
Correspondence Problem
• the most common
constraint is that
intensities of
corresponding points in
images are identical
• The assumption is not
valid for specular surfaces
(since intensity is
dependent on viewing
direction)
Specular Reflection
• When a specular surface is smooth, the
distribution of the specular intensity is
concentrated
• As the surface becomes rougher, the peak
volume of the specular intensity decreases
and the distribution widens
Specular Reflection
Smooth Surface Rough Surface
Implications for Stereo
• The total image intensity of any point is the sum
of the diffuse and specular intensity components
• Since the change in diffuse components is very
small relative to the changes in specular
components, it follows that the overall change in
intensity is approximately equal to the specular
intensity differences
Idiff ≈ |Is1 − Is2|
Implications for Stereo
• This approximation will assist in
determining an optimal binocular stereo
configuration, which minimises specular
correspondence problems but maximises
precision in depth estimation
Binocular Stereo Configuration
Vergence
• When cameras are oriented such that their
optical axes intersect at a point in space,
this point is referred to as the point of vergence
• Depth accuracy is directly proportional to
vergence (…which conflicts with the
requirement to minimize intensity
differences)
Binocular Stereo
• Determining the maximum acceptable
vergence can be formulated as a constrained
optimization problem
fobj = v1 . v2
c1: Idiff < a specified threshold
c2: the cameras lie in the X-Z plane
Experiments
• Two uniformly rough cylindrical objects
were wrapped, one in gift wrap and the
other in xerox paper
• Similar patterns were marked on both
Trinocular Stereo
• Required in environments which are less
structured and where surface roughness
cannot be estimated
• Allows intensity difference at a point to be
constrained to a threshold in at least one of
the stereo pairs
Trinocular Stereo
Experiments
• The experiments done indicate that the
reconstruction algorithm works reasonably
well in an unconstrained environment
Retrieving Shape Information
from Multiple Images of a
Specular Surface
Howard Schultz
University of Massachusetts
Introduction
• This research extends a diffuse multi-image
shape-from-shading technique to perform in
the specular domain
Viewing Geometry
• Assumes an ideal camera with focal length f
viewing a surface
• The camera focal point is located at P and O
is a point on the surface
• From Snell’s Law an equation can be
derived relating the objects position in
space to its image on the image plane
Viewing Geometry
Image Synthesis
• the specular surface stereo method requires a
model that predicts accurately the irradiance at
each pixel
• Use Idealized Image Synthesis Model
• this will allow us to determine that the irradiance
is directly proportional to the product of the
radiance and the reflection co-efficient
Specular Surface Stereo
• Starting at a known elevation, an iterative
process is used to determine shape
• Two-step process, determine orientation
and propagation
Surface Orientation
• Identify the pixels that view the surface
point (by calculating an inverse of a
projective transform)
• A value of (p,q) is found such that the
predicted irradiance E(p,q) matches the
observed values
Surface Propagation
• if a point is known on a surface, it is
possible to recover shape by propagation
• If (x,y) has elevation h and gradient (p,q)
then (x+δx, y+δy) has elevation
h’ = h + pδx + qδy
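The propagation step is a one-line first-order update; a sketch (illustrative):

```python
def propagate(h, p, q, dx, dy):
    """First-order surface propagation: a point at elevation h with
    gradient (p, q) predicts the elevation of its neighbour at offset
    (dx, dy) as h + p*dx + q*dy."""
    return h + p * dx + q * dy

# Stepping along x on a slope of p = 0.5 raises the elevation by 0.05:
assert abs(propagate(1.0, 0.5, -0.2, 0.1, 0.0) - 1.05) < 1e-12
```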
Obtaining Seed Values
• if there are surface features with diffuse
properties (e.g. scratches or rough spots), use
feature matching methods
• if surface is smooth, use a laser range finder
Results
• Tests were done on four simulated
images to determine the feasibility of the
method; the results were 99% accurate
• Using this method in the ‘real world’ would
require more constraints
Motion Techniques
________________________
A Theory of Specular Surface
Geometry
Michael Oren
Shree K. Nayar
Columbia University
Introduction
• Develops a 2D profile recovery technique
and generalizes it to 3D surface recovery
• Two major issues associated with
specular surfaces
– detection
– shape recovery
Introduction
• Specular surfaces introduce a new kind of
image feature, a virtual feature
• A virtual feature is the reflection by a
specular surface of another scene point
which travels over the surface when the
observer moves.
Curve Representation
• Cartesian co-ordinates result in complex
equations describing specular motion
• Using the Legendre transform to represent
the curve as an envelope of tangents
Curve Representation
2D Caustics
• When a camera moves around an object the virtual
features move on the specular surface, producing a
family of reflected rays (the envelope defined by
this family is called the caustic)
• On the other hand, the caustic of a real feature is
one single point (the actual position of the feature
in the scene where all the reflected rays intersect)
Test Image
2D Caustics
• Using this, feature
classification is simply a
matter of computing a
caustic and determining
whether it is a point or a
curve
• Features are tracked from
one frame to the next
using a sum of square
difference (SSD)
correlation operator
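SSD tracking over a small search window can be sketched as follows (illustrative; window and search sizes are arbitrary):

```python
import numpy as np

def ssd_track(prev, next_frame, top, left, size, search):
    """Track a square feature patch from one frame to the next by
    minimising the sum-of-squared-differences (SSD) over a small
    search window around the old position."""
    patch = prev[top:top + size, left:left + size].astype(float)
    best, best_pos = np.inf, (top, left)
    for dt in range(-search, search + 1):
        for dl in range(-search, search + 1):
            t, l = top + dt, left + dl
            if t < 0 or l < 0:
                continue  # window fell off the image
            cand = next_frame[t:t + size, l:l + size].astype(float)
            if cand.shape != patch.shape:
                continue
            ssd = np.sum((cand - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (t, l)
    return best_pos

# A feature shifted one pixel right is found one pixel right:
rng = np.random.default_rng(0)
a = rng.random((20, 20))
b = np.roll(a, 1, axis=1)
assert ssd_track(a, b, top=5, left=5, size=6, search=3) == (5, 6)
```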
2D Profile Recovery
• The camera is moved in the plane of the
profile and the features are tracked
• An equation may be derived relating the
caustic to the surface profile, allowing the
recovery of the 2D profile from the image.
3D Surface Recovery
• The 3D camera motion problem will result
in an arbitrary space curve rather than a
family of curves as in the 2D case
• The 3D problem cannot be reduced to a
finite number of 2D profile problems
3D Surface Recovery
• The concept behind the derivation of the 3D
caustic curve is to decompose the caustic point
position at any given instant into two orthogonal
components
• As the camera moves along the specular object, a
virtual feature travels along the 3D profile on the
object’s surface.
• It is possible to develop an equation which relates
the trajectory of the virtual feature to the surface
profile
Results
• The 2D testing involved tracking two
features on two different specular surfaces,
in both experiments the profile was
accurately estimated
• The 3D testing involved tracking a
highlight on a specular surface, the
recovered curve is in strong agreement with
the actual surface
Epipolar Geometry
________________________
Epipolar Geometry
• two cameras are
displaced from each
other by a baseline
distance
• Object point X forms
two distinct image
points x and x’
Epipolar Geometry
• Assume images formed in front of camera
to avoid inversion problem
• point (x’, y’) in the image plane from a scene
point (x, y, z) may be calculated as
x’ = fx/z and y’ = fy/z
• the displacement between the locations of
image point is called the disparity
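For a rectified stereo pair with baseline b, focal length f and disparity d, depth follows as z = f·b/d; a sketch (illustrative values):

```python
def depth_from_disparity(f, baseline, disparity):
    """Depth of a scene point from a rectified stereo pair: the two
    image locations differ by the disparity, and z = f * baseline /
    disparity."""
    if disparity == 0:
        return float("inf")  # point at infinity: no parallax
    return f * baseline / disparity

# Halving the disparity doubles the estimated depth
# (f in pixels, baseline in metres, disparity in pixels):
assert depth_from_disparity(500.0, 0.1, 5.0) == 10.0
assert depth_from_disparity(500.0, 0.1, 2.5) == 20.0
```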
Epipolar Geometry
• the plane passing through
the two camera centres
and the object point is
called the epipolar plane
• the intersection of the
image plane and the
epipolar plane is called the
epipolar line
Generalizing Epipolar-Plane
Image Analysis on the
Spatiotemporal Surface
H. Harlyn Baker
Robert C. Bolles
SRI International
Introduction
• The technique of Epipolar-Plane Image
Analysis involves obtaining depth estimates
for a point by taking a large number of
images
• This gives a large baseline and higher
accuracy
• It also minimises the correspondence
problem
Epipolar-Plane Image Analysis
• this technique imposes the following
constraints
– the camera is moving along a linear path
– it acquires images at equal spacing as it is
moved
– the camera’s view is orthogonal to the direction
of travel
Epipolar-Plane Image Analysis
• the traditional notion of epipolar lines is
generalized to an epipolar plane
• using this, plus the fact that the camera is
always moving along a linear path, we
may conclude that a given scene feature
will always be restricted to a given epipolar
plane
Epipolar-Plane Image Analysis
The Spatiotemporal Surface
• As images are collected, they are stacked up
into a spatiotemporal surface
• as each new image is obtained its spatial
and temporal edge contours are constructed
• using a 3D Laplacian of a 3D Gaussian
The Spatiotemporal Surface
3D Surface Estimation and Model
Construction From Specular Motion
in Image Sequences
Jiang Yu Zheng
Norihiro Abe
Kyushu Institute of Technology
Yoshihiro Fukagawa
Torey Corporation
Introduction
• This technique reconstructs 3D models of
complex objects with specular surfaces
• The process involves rotating the object
under inspection
System Setup
Projected Highlights
• An extended light source projects highlight
stripes onto the object
• The stripes gradually shift across the object
surface and pass most points once
• The specular motion is captured in epipolar-
plane images
Feature tracking
• We know how to detect corners and edges of
surface patterns
• The motion type of highlights in EPI can be used
to determine five categories of shape
– convex corner
– convex
– planar
– concave
– concave corner
EPI-Plane Images
• During the rotation, highlights will split and
merge, appear and disappear, etc.
Results
• Using EPIs results in very accurate
reconstruction of surface shapes
Solder Joint Inspection
____________________________
Visual Inspection System for the
Classification of Solder Joints
Tae-Hyeon Kim
Young Shik Moon
Sung Han Park
Hanyang University
Kwang-Jin Yoon
LG Industrial Systems
Introduction
• Uses three layers of ring shaped LED
arrays, with different illumination angles
• Solder joints are segmented and classified
using either their 2D features or their 3D
features
Classification of Joints
Preprocessing
• Objective is to identify
and segment the
soldered regions
• Solder is isolated both
vertically and
horizontally
Feature Extraction - 2D
• Average gray level value of I1 and I3
X1 = 1/N * Σ IK(x,y)
• Percentage of highlights of I1 and I2
X2 = 1/N * Σ U(x,y) * 100
U(x,y) = thresholded image of I1
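The two 2D features can be sketched directly from their definitions (illustrative; the slide computes them over image pairs I1/I3 and I1/I2, while here a single segmented region is used):

```python
import numpy as np

def solder_features_2d(image, highlight_threshold):
    """The two 2D features for a segmented solder region: x1, the
    average gray level, and x2, the percentage of pixels whose gray
    level exceeds a highlight threshold."""
    img = np.asarray(image, float)
    x1 = img.mean()
    x2 = 100.0 * (img > highlight_threshold).mean()
    return x1, x2

# A region that is half highlight scores x2 = 50 %:
region = np.array([[10, 200],
                   [10, 200]])
x1, x2 = solder_features_2d(region, highlight_threshold=128)
assert x1 == 105.0 and x2 == 50.0
```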
Feature Extraction - 3D
• Shape recovery is done using a hybrid reflectance
model for all samples not in the confidence
interval
• A reflectance map is built up representing
intensity values as a function of orientation for
each illumination angle
• For each point, three intensity values are
recovered and from these and the reflectance map,
the orientation is estimated
Classification -2D
• Uses 3-Layer backpropagation neural
network
• Four input nodes for four features
• Five hidden layer nodes
• Four output nodes for four solder types
Classification - 3D
• Bayes Classifier assuming Gaussian
Distribution
Inspection System
Results
Features Class Number % Correct % Incorrect
2D Good 52 98 2
2D Excess 57 100 0
2D Insuff. 44 100 0
2D None 50 100 0
2D Total 203 99.5 0.5
2D+3D Good 52 100 0
2D+3D Excess 57 100 0
2D+3D Insuff. 44 100 0
2D+3D None 50 100 0
2D+3D Total 203 100 0
Results
Features Time (s)
2D 1.86
3D 19.83
Designing Teaching: Elaboration Theory
 
Universally Designed Learning Spaces: Some Considerations
Universally Designed Learning Spaces: Some ConsiderationsUniversally Designed Learning Spaces: Some Considerations
Universally Designed Learning Spaces: Some Considerations
 

Recently uploaded

1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
heathfieldcps1
 

Recently uploaded (20)

Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 
Fostering Friendships - Enhancing Social Bonds in the Classroom
Fostering Friendships - Enhancing Social Bonds  in the ClassroomFostering Friendships - Enhancing Social Bonds  in the Classroom
Fostering Friendships - Enhancing Social Bonds in the Classroom
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Google Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptxGoogle Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptx
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 

Use of Specularities and Motion in the Extraction of Surface Shape

L(θeφe) = f(θiφiθeφe) E(θiφi)
where f(θiφiθeφe) is the bidirectional reflectance distribution function (BRDF)
• The BRDF depends on the optical properties of the surface
Types of Reflectance
• Lambertian Reflectance
• Specular Reflectance
• Hybrid Reflectance
• Electron Microscopy Reflectance (not covered)
Lambertian Reflectance
• Appears equally bright from all viewing directions for a fixed illumination distribution
• Does not absorb any incident illumination
• BRDF is a constant (1/π)
Lambertian Reflectance - Point Source
• Perceived brightness when illuminated by a distant point source:
L(θeφe) = (Ι0/π) Cos θs -- the Lambert Cosine Rule
• This means a surface patch captures the most illumination if it is oriented so that its surface normal points in the direction of illumination
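The cosine rule above is simple to evaluate directly; a minimal Python sketch (the function name and the clamping of back-facing angles to zero are my own choices):

```python
import math

def lambertian_radiance(i0, theta_s):
    """Radiance of an ideal Lambertian patch lit by a distant point source
    of intensity i0 at angle theta_s from the surface normal:
    L = (i0 / pi) * cos(theta_s), clamped to zero for back-facing angles."""
    return (i0 / math.pi) * max(0.0, math.cos(theta_s))

# Brightness is greatest when the normal points at the source,
# and falls off with the cosine of the angle between them:
head_on = lambertian_radiance(1.0, 0.0)
oblique = lambertian_radiance(1.0, math.radians(60))  # cos 60 deg = 0.5
```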
Lambertian Reflectance - Uniform Source
• Perceived brightness when illuminated by a uniform source:
L(θeφe) = Ι0
• This means that under a uniform source, a surface patch appears equally bright regardless of its orientation
Specular Reflectance
• Reflects all incident illumination in a direction that has the same angle with respect to the surface normal, but on the opposite side of the surface normal
• Light in the direction (θiφi) is reflected to (θeφe) = (θi,φi+π)
• BRDF is δ(θe-θi)δ(φe-φi-π) / (Sin θi Cos θi)
Specular Reflectance
• Perceived brightness is L(θeφe) = Ι0(θe,φe−π)
• This means the incoming rays of light are reflected from the surface as by a perfect mirror
Hybrid Reflectance
• Mixture of Lambertian and Specular reflectance
• BRDF is η/π + (1−η) δ(θe-θi)δ(φe-φi-π) / (Sin θi Cos θi)
• where η is the mixture ratio of the two reflectance functions
Surface Orientation
• If (x,y,z) is a point on a surface and (x,y) is the same point on the image plane, with distance z from the camera (depth), then a nearby point is (x+δx, y+δy)
• The change in depth can be expressed as δz = (∂z/∂x)δx + (∂z/∂y)δy
Surface Orientation
• The size of the partial derivatives of z with respect to x and y is related to the orientation of the surface patch
• The gradient at (x,y,z) is the vector (p,q), given by p = (∂z/∂x), q = (∂z/∂y)
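The gradient (p,q) can be estimated from a sampled depth map with finite differences; a small sketch, where the forward-difference scheme and the list-of-rows layout are my own choices:

```python
def gradient(z, dx=1.0, dy=1.0):
    """Surface gradient (p, q) = (dz/dx, dz/dy) of a depth map z
    (a list of rows), estimated with forward differences."""
    rows, cols = len(z), len(z[0])
    p = [[(z[y][x + 1] - z[y][x]) / dx for x in range(cols - 1)]
         for y in range(rows - 1)]
    q = [[(z[y + 1][x] - z[y][x]) / dy for x in range(cols - 1)]
         for y in range(rows - 1)]
    return p, q

# A planar surface z = 2x + 3y has constant gradient (p, q) = (2, 3):
z = [[2 * x + 3 * y for x in range(4)] for y in range(4)]
p, q = gradient(z)
```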
Reflectance Map
• For a given light source distribution and a given surface material, the reflectance of all surface orientations (p,q) can be catalogued or computed to yield the reflectance map R(p,q), which leads to the image irradiance equation:
E(x,y) = R(p,q)
Reflectance Map
• i.e. the irradiance at a point in the image plane is equal to the reflectance map value for the surface orientation (p,q) at the corresponding point in the scene
• In other words, given a change in surface orientation, the reflectance map allows you to calculate the change in image intensity
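The slides leave R(p,q) abstract; as an assumed illustration, a common closed form for a Lambertian surface under a distant point source whose direction has gradient-space coordinates (ps, qs) can be evaluated directly:

```python
import math

def reflectance_map(p, q, ps, qs):
    """Lambertian reflectance map R(p, q) for a distant point source at
    gradient-space direction (ps, qs); this closed form is a standard
    choice assumed here, since the slides leave R abstract."""
    num = 1.0 + p * ps + q * qs
    den = math.sqrt(1 + p * p + q * q) * math.sqrt(1 + ps * ps + qs * qs)
    return max(0.0, num / den)  # self-shadowed orientations clamp to zero

# Brightness peaks when the surface orientation matches the source direction:
peak = reflectance_map(0.3, -0.2, 0.3, -0.2)   # equals 1.0
```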
Shape from Shading
• The inverse problem: we know E(x,y) = R(p,q), so we need to calculate p and q for each point (x,y) in the image
• Two unknowns, one equation; therefore a constraint must be applied
Shape from Shading
• Smoothness constraint
• Objects have smooth surfaces, which depart from smoothness only along their edges
• May be expressed as:
es = ∫∫ (px² + py² + qx² + qy²) dx dy
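The smoothness term can be discretised as a sum of squared differences of the gradient fields; a sketch, with the forward-difference discretisation assumed:

```python
def smoothness_energy(p, q):
    """Discrete version of es = integral of (px^2 + py^2 + qx^2 + qy^2):
    sum of squared forward differences of the gradient fields p and q."""
    e = 0.0
    rows, cols = len(p), len(p[0])
    for y in range(rows):
        for x in range(cols):
            if x + 1 < cols:  # horizontal neighbour
                e += (p[y][x + 1] - p[y][x]) ** 2 + (q[y][x + 1] - q[y][x]) ** 2
            if y + 1 < rows:  # vertical neighbour
                e += (p[y + 1][x] - p[y][x]) ** 2 + (q[y + 1][x] - q[y][x]) ** 2
    return e

# A constant-gradient (perfectly smooth) surface has zero energy;
# perturbing one gradient value makes the energy positive:
flat_p = [[2.0] * 4 for _ in range(4)]
flat_q = [[3.0] * 4 for _ in range(4)]
bumpy_p = [row[:] for row in flat_p]
bumpy_p[1][1] = 5.0
```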
Photometric Stereo
• Assume a scene with Lambertian reflectance
• Each point (x,y) will have brightness E(x,y) and possible orientations p and q for a given light source
• If the same surface is illuminated by a point source in a different location, the reflectance map will be different
Photometric Stereo
• Using this method, surface orientation may be uniquely identified
• In reality, not all incident light is radiated from a surface; this is accounted for by adding an albedo factor (ρ) into the image irradiance equation:
E(x,y) = ρR(p,q)
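With three known source directions, E = ρ(n · s) becomes three linear equations in g = ρn, so both albedo and normal fall out of one solve; a minimal sketch of classic three-light Lambertian photometric stereo (the hand-rolled Cramer's-rule solver and the example sources are my own):

```python
import math

def solve3(S, E):
    """Solve the 3x3 linear system S.g = E by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(S)
    return [det([[E[i] if k == j else S[i][k] for k in range(3)]
                 for i in range(3)]) / d
            for j in range(3)]

def photometric_stereo(S, E):
    """Recover albedo rho and unit normal n from three brightness values E
    under known unit source directions S (rows), using E_k = rho * (n . s_k)."""
    g = solve3(S, E)                      # g = rho * n
    rho = math.sqrt(sum(v * v for v in g))
    return rho, [v / rho for v in g]

# Example: normal straight up, albedo 0.5, one overhead and two tilted sources.
a = 0.5 ** 0.5
S = [[0.0, 0.0, 1.0], [a, 0.0, a], [0.0, a, a]]
E = [0.5, 0.5 * a, 0.5 * a]
rho, n = photometric_stereo(S, E)
```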
Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method
Katsushi Ikeuchi
Ministry of International Trade and Industry, Japan
Introduction
• Photometric stereo may be used to determine the surface orientation of a patch
• For diffuse surfaces, point source illumination is used
• For specular surfaces, a distributed light source is required
Image Radiance
• For a specular surface and an extended light source:
Le(θeφe) = Li(θe,φe+π)
• Relationship between reflected radiance and image irradiance:
Ep = {(π/4)(d/fp)² Cos⁴ α} Le
where fp = focal length, d = diameter of aperture, α = off-axis angle
Image Radiance
• From this a brightness distribution may be derived
• And from that, an inverse transformation
System Implementation
• Two Stage Process
- Off-Line Job
- On-Line Job
Off-Line Job
• Light Source: three linear lamps, placed symmetrically 120 degrees apart
• Lookup Table: could use a 3D table, but observed triples often contain errors
• Instead use a 2D lookup table - each element has two alternatives
• Each alternative consists of a surface orientation and an intensity
On-Line Job
• Normalization is required to cancel the effect of albedo
• Brightness calibration is also required
• The correct alternative of the two solutions is found by comparing the distance between the actual third image brightness and the element of the matrix
Results
• Works well in a constrained environment
• Has problems if the surface is not smooth
Extracting the Shape and Roughness of Specular Lobe Objects Using Four Light Photometric Stereo
Fredric Solomon
Katsushi Ikeuchi
Carnegie Mellon
Structured Highlight Inspection of Specular Surfaces
Arthur C. Sanderson
Lee E. Weiss
Shree K. Nayar
Carnegie Mellon
Introduction
• The Structured Highlight approach yields 3D images from point sources and images
• 'Highlight' - a light source reflected on a specular surface
Introduction
• Angle of Incidence = Angle of Reflection
• A fixed camera will image a reflected light ray (highlight) only if it is positioned and oriented correctly
Introduction
• Once a highlight is observed, if the direction of the incident ray is known, the orientation of the surface element may be found
• A spherical array of fixed point light sources is used to ensure all positions and directions are scanned
Lambertian Reflectance
• The reflectance relationship for a Lambertian model of image E(x,y):
E(x,y) = A (n . s)
n = surface normal (unit vector)
s = source direction (unit vector)
A = constant related to illumination intensity and surface albedo
Hybrid Reflectance
• The reflectance relationship for a hybrid model of image E(x,y):
E(x,y) = A k (n . s) + (a/2)(1-k) [2(n . z)(n . s) - (z . s)]
z = viewing direction (unit vector)
k = relative weight of specular and Lambertian components
n = sharpness of the specularity
Structured Highlight Inspection
• Using the above equation, the slope at any point may be calculated
• Surface orientation may be determined from the sources that produce local peaks in the reflectance map
Camera Models
• Perspective Camera Model
• Orthographic Projection Model
• "Fixed" Camera Model
Perspective Camera Model
• All reflected rays pass through a focal point
• This model provides very accurate measurements, but requires extensive calibration procedures
Orthographic Projection Model
• The focal point is assumed to be an infinite distance from the camera, and all reflected rays are perpendicular to the image plane
"Fixed" Camera Model
• All rays are emitted from a single point on the reflectance plane, and all surface normal estimates are computed relative to that reference point
Camera Models - Accuracy
• Perspective Camera Model
- Most accurate
• "Fixed" Camera Model
- Next most accurate
• Orthographic Projection Model
- Most sensitive to error
SHINY - Structured Highlight INspection sYstem
• Highlights are extracted from images and tabulated
• Surface normals are computed based on lookup tables derived from calibration experiments
• Reconstruction is done using interpolation followed by smoothing
Stereo Highlight Algorithm
• The assumption of a distant source to uniquely identify the angle of incidence of illumination is an approximation
• To improve this, a second camera is used with stereo matching for greater accuracy
Results
• With two cameras, stereo matching ambiguities need to be resolved; therefore further constraints are needed
• This technique is slow (1988)
Stereo in the Presence of Specular Reflection
Dinkar N. Bhat
Shree K. Nayar
Columbia University
Introduction
• Stereo is a direct method of obtaining the 3D structure of the visual world
• But it suffers from the fact that the correspondence problem is inherently underconstrained
Correspondence Problem
• The most common constraint is that the intensities of corresponding points in the images are identical
• This assumption is not valid for specular surfaces (since intensity is dependent on viewing direction)
Specular Reflection
• When a specular surface is smooth, the distribution of the specular intensity is concentrated
• As the surface becomes rougher, the peak value of the specular intensity decreases and the distribution widens
Implications for Stereo
• The total image intensity at any point is the sum of the diffuse and specular intensity components
• Since the change in diffuse components is very small relative to the change in specular components, the overall change in intensity is approximately equal to the specular intensity difference:
Idiff ≈ |Is1 - Is2|
Implications for Stereo
• This approximation will assist in determining an optimal binocular stereo configuration, which minimises specular correspondence problems but maximises precision in depth estimation
Vergence
• When the cameras are oriented such that their optical axes intersect at a point in space, this point is referred to as the point of vergence
• Depth accuracy is directly proportional to vergence (...which conflicts with the requirement to minimize intensity differences)
Binocular Stereo
• Determining the maximum acceptable vergence can be formulated as a constrained optimization problem:
fobj = v1 . v2
c1: Idiff < a specified threshold
c2: the cameras lie in the X-Z plane
Experiments
• Two uniformly rough cylindrical objects, one wrapped in gift wrapper and the other in xerox paper
• Similar patterns were marked on both
Trinocular Stereo
• Required in environments which are less structured and where surface roughness cannot be estimated
• Allows the intensity difference at a point to be constrained to a threshold in at least one of the stereo pairs
Experiments
• The experiments done indicate that the reconstruction algorithm works reasonably well in an unconstrained environment
Retrieving Shape Information from Multiple Images of a Specular Surface
Howard Schultz
University of Massachusetts
Introduction
• This research extends a diffuse multi-image shape-from-shading technique to perform in the specular domain
Viewing Geometry
• Assumes an ideal camera with focal length f viewing a surface
• The camera focal point is located at P, and O is a point on the surface
• From Snell's Law an equation can be derived relating the object's position in space to its image on the image plane
Image Synthesis
• The specular surface stereo method requires a model that accurately predicts the irradiance at each pixel
• Use an Idealized Image Synthesis Model
• This allows us to determine that the irradiance is directly proportional to the product of the radiance and the reflection coefficient
Specular Surface Stereo
• Starting at a known elevation, an iterative process is used to determine shape
• Two-step process: determine orientation, then propagate
Surface Orientation
• Identify the pixels that view the surface point (by calculating an inverse of a projective transform)
• A value of (p,q) is found such that the predicted irradiance E(p,q) matches the observed values
Surface Propagation
• If a point is known on a surface, it is possible to recover shape by propagation
• If (x,y) has elevation h and gradient (p,q), then (x+δx, y+δy) has elevation h' = h + pδx + qδy
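The propagation step is a one-line first-order update; a minimal sketch (function name is my own):

```python
def propagate(h, p, q, dx, dy):
    """First-order surface propagation: elevation at (x+dx, y+dy) given
    elevation h and gradient (p, q) at (x, y): h' = h + p*dx + q*dy."""
    return h + p * dx + q * dy

# On the plane z = 2x + 3y, stepping from the origin to (1, 1)
# gives elevation 0 + 2*1 + 3*1 = 5:
h1 = propagate(0.0, 2.0, 3.0, 1.0, 1.0)
```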
Obtaining Seed Values
• If there are surface features with diffuse properties (e.g. scratches or rough spots), use feature matching methods
• If the surface is smooth, use a laser range finder
Results
• Tests were done on four simulated images to determine the feasibility of the method; the results were 99% accurate
• Using this method in the 'real world' would require more constraints
A Theory of Specular Surface Geometry
Michael Oren
Shree K. Nayar
Columbia University
Introduction
• Develops a 2D profile recovery technique and generalizes it to 3D surface recovery
• Two major issues associated with specular surfaces:
- detection
- shape recovery
Introduction
• Specular surfaces introduce a new kind of image feature: the virtual feature
• A virtual feature is the reflection by a specular surface of another scene point, which travels over the surface when the observer moves
Curve Representation
• Cartesian co-ordinates result in complex equations describing specular motion
• The Legendre transform is used instead, representing the curve as an envelope of tangents
2D Caustics
• When a camera moves around an object, the virtual features move on the specular surface, producing a family of reflected rays (the envelope defined by this family is called the caustic)
• On the other hand, the caustic of a real feature is a single point (the actual position of the feature in the scene, where all the reflected rays intersect)
2D Caustics
• Using this, feature classification is simply a matter of computing a caustic and determining whether it is a point or a curve
• Features are tracked from one frame to the next using a sum of squared differences (SSD) correlation operator
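SSD correlation tracking can be sketched as an exhaustive window search; the patch layout (lists of rows) and function names here are my own choices:

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2
               for ra, rb in zip(patch_a, patch_b)
               for a, b in zip(ra, rb))

def track(template, image, size):
    """Return the top-left corner (y, x) of the size x size window in
    `image` that minimises the SSD against `template` (exhaustive search)."""
    rows, cols = len(image), len(image[0])
    best, best_pos = float("inf"), None
    for y in range(rows - size + 1):
        for x in range(cols - size + 1):
            window = [row[x:x + size] for row in image[y:y + size]]
            s = ssd(template, window)
            if s < best:
                best, best_pos = s, (y, x)
    return best_pos

# Embed a 2x2 template in a blank 4x4 image and relocate it:
template = [[9, 8], [7, 6]]
image = [[0] * 4 for _ in range(4)]
for dy in range(2):
    for dx in range(2):
        image[2 + dy][1 + dx] = template[dy][dx]
pos = track(template, image, 2)
```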
2D Profile Recovery
• The camera is moved in the plane of the profile and the features are tracked
• An equation may be derived relating the caustic to the surface profile, allowing the recovery of the 2D profile from the image
3D Surface Recovery
• 3D camera motion will result in an arbitrary space curve rather than a family of curves as in the 2D case
• The 3D problem cannot be reduced to a finite number of 2D profile problems
3D Surface Recovery
• The concept behind the derivation of the 3D caustic curve is to decompose the caustic point position at any given instant into two orthogonal components
• As the camera moves along the specular object, a virtual feature travels along the 3D profile on the object's surface
• It is possible to develop an equation which relates the trajectory of the virtual feature to the surface profile
Results
• The 2D testing involved tracking two features on two different specular surfaces; in both experiments the profile was accurately estimated
• The 3D testing involved tracking a highlight on a specular surface; the recovered curve is in strong agreement with the actual surface
Epipolar Geometry
• Two cameras are displaced from each other by a baseline distance
• An object point X forms two distinct image points x and x'
Epipolar Geometry
• Assume images are formed in front of the camera to avoid the inversion problem
• A point (x', y') in the image plane from a real point (x, y, z) may be calculated as x' = fx/z and y' = fy/z
• The displacement between the locations of the image points is called the disparity
Epipolar Geometry
• The plane passing through the two camera centres and the object point is called the epipolar plane
• The intersection of the image plane and the epipolar plane is called the epipolar line
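For a rectified pair with parallel optical axes, the projection x' = fx/z implies the disparity relation d = f·b/z, so depth follows as z = f·b/d; a sketch under that assumed geometry:

```python
def depth_from_disparity(f, baseline, xl, xr):
    """Depth of a point seen by a rectified stereo pair: with focal
    length f and baseline b, the disparity d = xl - xr satisfies
    d = f*b/z, so z = f*b/d."""
    d = xl - xr
    if d == 0:
        return float("inf")  # zero disparity: point at infinity
    return f * baseline / d

# With f = 1, baseline = 0.1, a point at depth 2 projects with
# disparity 0.05, which the formula inverts back to depth 2:
z = depth_from_disparity(1.0, 0.1, 0.05, 0.0)
```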
Generalizing Epipolar-Plane Image Analysis on the Spatiotemporal Surface
H. Harlyn Baker
Robert C. Bolles
SRI International
Introduction
• The technique of Epipolar-Plane Image Analysis involves obtaining depth estimates for a point by taking a large number of images
• This gives a large baseline and higher accuracy
• It also minimises the correspondence problem
Epipolar-Plane Image Analysis
• This technique imposes the following constraints:
- the camera moves along a linear path
- it acquires images at equal spacing as it moves
- the camera's view is orthogonal to the direction of travel
Epipolar-Plane Image Analysis
• The traditional notion of epipolar lines is generalized to an epipolar plane
• Using this, plus the fact that the camera always moves along a linear path, we may conclude that a given scene feature will always be restricted to a given epipolar plane
The Spatiotemporal Surface
• As images are collected, they are stacked up into a spatiotemporal surface
• As each new image is obtained, its spatial and temporal edge contours are constructed using a 3D Laplacian of a 3D Gaussian
3D Surface Estimation and Model Construction From Specular Motion in Image Sequences
Jiang Yu Zheng
Norihiro Abe
Kyushu Institute of Technology
Yoshihiro Fukagawa
Torey Corporation
Introduction
• This technique reconstructs 3D models of complex objects with specular surfaces
• The process involves rotating the object under inspection
Projected Highlights
• An extended light source projects highlight stripes onto the object
• The stripes gradually shift across the object surface and pass most points once
• The specular motion is captured in epipolar-plane images
Feature Tracking
• We know how to detect corners and edges of surface patterns
• The motion type of highlights in an EPI can be used to determine five categories of shape:
- convex corner
- convex
- planar
- concave
- concave corner
EPI-Plane Images
• During the rotation, highlights will split and merge, appear and disappear, etc.
Results
• Using EPIs results in very accurate reconstruction of surface shapes
Visual Inspection System for the Classification of Solder Joints
Tae-Hyeon Kim
Young Shik Moon
Sung Han Park
Hanyang University
Kwang-Jin Yoon
LG Industrial Systems
Introduction
• Uses three layers of ring-shaped LED arrays, with different illumination angles
• Solder joints are segmented and classified using either their 2D features or their 3D features
Preprocessing
• The objective is to identify and segment the soldered regions
• Solder is isolated both vertically and horizontally
Feature Extraction - 2D
• Average gray level value of I1 and I3:
X1 = (1/N) Σ IK(x,y)
• Percentage of highlights of I1 and I2:
X2 = (1/N) Σ U(x,y) × 100
where U(x,y) is the thresholded image of I1
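Both features can be computed directly from a grey-level image; a sketch for a single image held as a list of rows (a simplification of the slide's averaging over I1/I3, with the function name and threshold parameter my own):

```python
def features_2d(image, threshold):
    """2D features from the slide: X1, the mean grey level, and X2, the
    percentage of pixels brighter than `threshold` (the highlights)."""
    pixels = [v for row in image for v in row]
    n = len(pixels)
    x1 = sum(pixels) / n                                   # average grey level
    x2 = 100.0 * sum(1 for v in pixels if v > threshold) / n  # % highlights
    return x1, x2

# Two of the four pixels exceed the threshold, so X2 is 50%:
x1, x2 = features_2d([[10, 200], [30, 220]], threshold=100)
```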
Feature Extraction - 3D
• Shape recovery is done using a hybrid reflectance model for all samples not in the confidence interval
• A reflectance map is built up representing intensity values as a function of orientation for each illumination angle
• For each point, three intensity values are recovered, and from these and the reflectance map the orientation is estimated
Classification - 2D
• Uses a 3-layer backpropagation neural network
- Four input nodes for four features
- Five hidden layer nodes
- Four output nodes for four solder types
Classification - 3D
• Bayes classifier assuming a Gaussian distribution
Results

Features   Class     Number   % Correct   % Incorrect
2D         Good      52       98          2
2D         Excess    57       100         0
2D         Insuff.   44       100         0
2D         None      50       100         0
2D         Total     203      99.5        0.5
2D+3D      Good      52       100         0
2D+3D      Excess    57       100         0
2D+3D      Insuff.   44       100         0
2D+3D      None      50       100         0
2D+3D      Total     203      100         0