2. INTRODUCTION
Edges are significant local changes of intensity in an image.
Edge detection is the process of identifying and locating sharp discontinuities in an image.
Abrupt changes in pixel intensity characterize the boundary of an object; edges usually occur on the boundary between two regions.
Tulips image; edges of the Tulips image.
3. Part of the Tulips image, the edge of that part, and the intensity matrix generated from that part of the image.
4. CAUSES OF INTENSITY CHANGE
Geometric events: discontinuities in depth, surface colour, and texture.
Non-geometric events: reflection of light, illumination, shadows.
Edge formation due to discontinuities of surface reflectance, illumination, and shadow.
5. APPLICATIONS
Enhancement of noisy images such as satellite images, X-rays, and medical images like CAT scans.
Text detection.
Traffic management.
Mapping of roads.
Video surveillance.
6. DIFFERENT TYPES OF EDGES OR INTENSITY CHANGES
Step edge: The image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.
7. Ramp edge: A step edge where the intensity change is not instantaneous but occurs over a finite distance.
Ridge edge: The image intensity abruptly changes value but then returns to the starting value within some short distance (usually generated by lines).
8. Roof edge: A ridge edge where the intensity change is not instantaneous but occurs over a finite distance (usually generated by the intersection of two surfaces).
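The four edge profiles can be sketched as 1-D intensity arrays; the lengths, positions, and names below are illustrative choices, not values from the slides:

```python
import numpy as np

n = 20
# Step edge: abrupt jump from one value to another.
step = np.where(np.arange(n) < n // 2, 0.0, 1.0)
# Ramp edge: the same transition spread over a finite distance.
ramp = np.clip((np.arange(n) - 8) / 4.0, 0.0, 1.0)
# Ridge edge: jumps up and returns within a short distance (a line).
ridge = np.zeros(n)
ridge[9:11] = 1.0
# Roof edge: rises and falls gradually (intersection of two surfaces).
roof = np.maximum(0.0, 1.0 - np.abs(np.arange(n) - 10) / 5.0)
```

Plotting any of these arrays reproduces the profile drawings that usually accompany these definitions.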
9. MAIN STEPS IN EDGE DETECTION
Smoothing: Suppress as much noise as possible without destroying true edges.
Enhancement: Apply differentiation to enhance the quality of edges (i.e., sharpening).
Thresholding: Determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold the edge magnitude).
Localization: Determine the exact edge location. Edge thinning and linking are usually required in this step.
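The first three steps can be sketched in NumPy. The helper name, Gaussian kernel, Sobel masks, and threshold fraction below are illustrative assumptions, not details taken from the slides:

```python
import numpy as np

def convolve2d(img, k):
    """Naive 'same' 2-D convolution with edge-replicated padding (illustration only)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

def edge_detect(img, thresh=0.5):
    # 1. Smoothing: a 3x3 Gaussian-like kernel suppresses noise.
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    smooth = convolve2d(img, g)
    # 2. Enhancement: Sobel derivatives approximate the gradient.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    fx = convolve2d(smooth, sx)
    fy = convolve2d(smooth, sx.T)
    mag = np.hypot(fx, fy)
    # 3. Thresholding: keep pixels with large gradient magnitude.
    return mag > thresh * mag.max()
```

For example, on a 10x10 image with a vertical step at column 5, only the columns adjacent to the step survive the threshold. Localization (thinning and linking) is omitted from this sketch.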
11. GRADIENT REPRESENTATION
The gradient is a vector which has magnitude and direction:
    ∇f = [∂f/∂x, ∂f/∂y] = [fx, fy]
Magnitude: indicates edge strength.
    |∇f| = sqrt(fx² + fy²), or |∇f| ≈ |fx| + |fy| (approximation)
Direction: indicates edge direction.
    θ = tan⁻¹(fy / fx)
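These gradient quantities can be computed with NumPy. Note that `np.gradient` uses central differences, which is an assumption here and differs from the masked operators discussed later:

```python
import numpy as np

# Test image whose intensity rises linearly from left to right.
img = np.outer(np.ones(8), np.arange(8, dtype=float))

fy, fx = np.gradient(img)              # partial derivatives along rows, columns
mag = np.sqrt(fx**2 + fy**2)           # exact magnitude: edge strength
mag_approx = np.abs(fx) + np.abs(fy)   # cheaper approximation |fx| + |fy|
theta = np.arctan2(fy, fx)             # gradient direction
```

For this ramp image, fx = 1 and fy = 0 everywhere, so the magnitude is 1 and the direction is 0 (pointing along the x axis); the approximation coincides with the exact magnitude because fy vanishes.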
13. GENERAL APPROXIMATION
Consider the arrangement of pixels about the pixel [i, j] in a 3 x 3 neighborhood.
The partial derivatives fx, fy can be computed from weighted differences of these pixels.
The constant c controls the emphasis given to pixels closer to the centre of the mask.
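A common way to write these weighted-difference masks is shown below; the sign and orientation convention is an assumption, since the slide's original figure is missing. Setting c = 1 gives the Prewitt operator and c = 2 the Sobel operator, the two operators named in the conclusion:

```python
import numpy as np

def derivative_masks(c):
    """3x3 derivative masks with emphasis constant c on the centre row/column."""
    mx = np.array([[-1, 0, 1],
                   [-c, 0, c],
                   [-1, 0, 1]], dtype=float)  # approximates fx
    my = mx.T                                 # approximates fy
    return mx, my
```

Convolving an image with these masks yields the fx and fy estimates used in the gradient magnitude and direction formulas.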
26. CONCLUSION
Reduces unnecessary information in the image while preserving the structure of the image.
Extracts important features of an image, such as corners, lines, and curves.
Supports object recognition, boundary detection, and segmentation.
Forms part of computer vision and recognition.
The Sobel and Prewitt operators are similar.
27. REFERENCES
Machine Vision – Ramesh Jain, Rangachar Kasturi, Brian G. Schunck, McGraw-Hill, 1995
A Computational Approach to Edge Detection – John Canny, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986
CS485/685 Computer Vision – Dr. George Bebis