Introductory Level
SLAM Seminar
Jan. 18, 2018
Dong-Won Shin
• Dong-Won Shin
• Gwangju Institute of Science and Technology
• PhD candidate in Computer Science Field
• SLAM Research Group KR manager
• (blog) dongwonshin.net
• (github) github.com/JustWon
• (E-mail) dongwonshin@gist.ac.kr
Speaker
2
• Introduction to SLAM
• Traditional SLAM approaches
• Recent SLAM approaches
• Exercise
Contents
3
Introduction to SLAM
4
• What is SLAM?
• Computational problem of constructing a map of an environment
while simultaneously keeping track of a robot’s location
• Application
Simultaneous Localization and Mapping
Augmented	reality Virtual	reality Robotics
https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
5
• Visual localization
• Under the inaccurate GPS
• GPS-denied environment
• Ex)
• Mapping
• Scenarios in which a prior map is not available and needs to be built.
• Map can inform path planning or provide an intuitive visualization for a human or robot.
• Ex)
Simultaneous Localization and Mapping
6
C. Cadena et al., "Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age," IEEE Trans. Robot., vol. 32, no. 6, pp. 1309–1332, 2016.
Indoor environment, Skyscraper, Disaster area, Private room
• Bayesian Filtering based SLAM
• The theoretical foundation and early prototypes of the traditional
Bayesian-filtering-based SLAM framework emerged in the 1990s.
• Ex) EKF SLAM, FastSLAM
• Visual odometry
• The process of estimating the ego-motion of a robot using
only the input of a single or multiple cameras attached to it.
• Ex) stereo VO, monocular VO
• Structure from motion
• Investigating the problem of recovering relative camera poses
and 3D structure from a set of camera images
• Off-line version of visual SLAM
Earlier Inspirations
7
• Solutions to large-scale map management
• 1) graph-based SLAM and loop closure detection
• 2) efficient map representation and refinement: sparse, dense, and semi-dense
• Graph based SLAM
• constructing a graph whose nodes represent robot poses or landmarks
• edge between nodes encodes a sensor measurement that constrains connected poses
• Loop closure detection
• Detecting loop closures in a map to give additional constraint for the consistent mapping
Effort towards Large Scale Mapping
8
• Sensor input
• Visual odometry
• Backend optimization
• Loop closure detection
• Mapping
Traditional Visual SLAM Framework
9
Sensor	input Visual	odometry
Backend	
optimization
Mapping
Loop	closing
• Frontend
• Camera motion between adjacent images
• A quantitative measure of the camera motion is needed
• Rotation and translation
• Understanding the geometric relationship between the camera and 3D points in space
• Drift error
• Estimating the trajectory with visual odometry alone accumulates error
• Solution
• Backend optimization and loop closure detection are needed
Visual Odometry
10
• Sensor noise
• Even precise sensors have noise
• Cheap sensors have larger measurement errors
• Some sensors are affected by magnetic fields and temperature
• Backend optimization
• Estimating the state of the whole system from noisy data (state estimation)
• Removes the drift error produced by the frontend
• The frontend provides the backend with the data to be optimized and their initial values
• Types of backend optimization
• Filter based (Kalman filter, particle filter)
• Nonlinear optimization based (bundle adjustment, pose graph optimization)
Backend Optimization
11
• Decide whether the current location has been visited before
• Using QR codes
• Environmental constraints exist
• Using camera images
• Judging the similarity between images
• The backend receives the loop closure information and reflects it in localization and mapping
• Removes drift error and produces a globally consistent map
Loop Closure Detection
12
T. Sattler et al., "Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions," 2017.
• The process of building a map of the environment
• Implemented differently depending on the application
• Robot vacuum cleaner
• A 2D map is sufficient
• Autonomous drone
• Moves in 6 DOF, so a 3D map is needed
• Map representation
• Pointcloud
• Voxel
• Surfel
• Occupancy grid map
• …
• Types by map density
• Sparse, dense, semi-dense map
Mapping
13
• Sparse SLAM
• Only use a small selected subset of the pixels (features) from a monocular color camera
• Fast and real time on CPU but it produces a sparse map (point clouds)
• Landmark-based or feature-based representations
• ORB SLAM
• Complete SLAM system for monocular camera
• Real-time on standard CPUs in a wide variety of environments
• small hand-held indoors
• drones flying in industrial environments
• cars driving around a city
Modern State of the Art Systems
14
• Dense SLAM
• Use most or all of the pixels in each received frame
• Or use depth images from a depth camera
• It produces a dense map but GPU acceleration is necessary for the real-time operation.
• Volumetric model or surfel-based representations
• KinectFusion
• Modelling of natural scenes with only commodity sensor and GPU
• Real-time markerless tracking
• Dense surface reconstruction
Modern State of the Art Systems
15
• Lidar SLAM
• Makes use of Lidar sensor input for localization and mapping
• Oriented toward autonomous driving in outdoor environments
• LOAM
• Very low drift error using the edge and planar features
• Low computation complexity
Modern State of the Art Systems
16
Investments in SLAM Field
17
SLAMcore Scape	Technology
https://www.mmcventures.com/mmc-ventures-backs-slamcore-in-5-million-seed-funding-2/
https://techcrunch.com/2019/01/08/scape/
Traditional SLAM
Approaches
18
ORB SLAM
19
• Video: https://www.youtube.com/watch?v=ufvPS5wJAx0
• Feature-based monocular SLAM system
• operates in real time, in small and large, indoor and outdoor environments
• URL: https://github.com/raulmur/ORB_SLAM2
• Main components
• Tracking
• Local mapping
• Loop closing
• Data structures
• Map points
• Keyframes
• Covisibility graph
• Essential graph
ORB SLAM
20
Map	points	&	keyframes Covisibility graph Essential	graph
R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, 2015.
• Map points p_i
• 3D position X_{w,i}
• Viewing direction n_i
• Representative ORB descriptor D_i
• Keyframes K_i
• Camera pose T_{iw}
• Camera intrinsics (focal length and principal point)
• All the ORB features extracted in the frame
Data structures
21
Map	points	&	keyframes
• Covisibility graph
• Undirected weighted graph
• Node: keyframe
• Edge: if two keyframes share observations of
the same map points (at least 15)
• Weight 𝜃: the number of common map points
• Essential graph
• Retains all the nodes but fewer edges
• Subset of edges from the covisibility graph with high covisibility
+ loop closure edges
Data structures
22
Covisibility graph
Essential	graph
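To make the covisibility graph concrete, here is a minimal Python sketch (illustrative only, not ORB-SLAM code; the 15-point threshold follows the slide above). The essential graph can then be derived by keeping only high-weight edges plus loop closure edges.

from collections import defaultdict
from itertools import combinations

def build_covisibility_graph(observations, min_shared=15):
    """observations: dict keyframe_id -> set of observed map point ids."""
    graph = defaultdict(dict)  # graph[kf_a][kf_b] = edge weight (shared map points)
    for kf_a, kf_b in combinations(observations, 2):
        weight = len(observations[kf_a] & observations[kf_b])
        if weight >= min_shared:  # add an edge only if enough common observations
            graph[kf_a][kf_b] = weight
            graph[kf_b][kf_a] = weight
    return graph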
• ORB feature extraction
• For tracking, mapping, and place recognition tasks
• Robust to rotation and scale
• Good invariance to camera auto-gain and auto-exposure, and illumination changes
• Fast to extract and match allowing for real-time operation
• Show good precision/recall performances in bag-of-word place recognition
Tracking
23
E.	Rublee,	V.	Rabaud,	K.	Konolige,	and	G.	R.	Bradski,	“ORB:	An	efficient	alternative	to	SIFT	or	SURF.,”	ICCV,	pp.	2564–2571,	Jan.	2011.
Example	of	orb	features
• Initial pose estimation
• Case 1: if tracking was successful for the last frame
• constant velocity motion model
• Bundle adjustment in two view case
Tracking
24
Minimizing	the	reprojection error
https://cs.nyu.edu/~fergus/teaching/vision/11_12_multiview.pdf
• Initial pose estimation
• Case 2: if the tracking is lost
• global relocalization
• Bag-of-visual-words
• Converting an image into a representative descriptor
• Perspective-N-Point problem
• Finding 2D—3D correspondences
Tracking
25
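As an illustration of the relocalization pose step, a small sketch using OpenCV's solvePnPRansac on hypothetical 2D-3D correspondences (the point arrays and intrinsics below are placeholders, not values from the slides):

import numpy as np
import cv2

object_points = np.random.rand(50, 3).astype(np.float32)  # matched 3D map points (placeholder)
image_points = np.random.rand(50, 2).astype(np.float32)   # matched 2D keypoints (placeholder)
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # camera rotation matrix; tvec is the translation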
• New keyframe decision
• (condition 1) Good relocalization
• more than 20 frames must have passed from the last global relocalization
• (condition 2) Last keyframe insertion
• more than 20 frames have passed from the last keyframe insertion
• (condition 3) Good tracking
• Current frame tracks at least 50 points
Tracking
26
(Figure: timelines showing the 20-frame gap since the last global relocalization, the 20-frame gap since the last keyframe insertion, and the at-least-50-tracked-points condition)
• Keyframe insertion
• Compute the bags of words representation
• Update the covisibility graph
• New map point creation
• Finding the matches by the epipolar geometry
• Initial map point creation via triangulation
• Consistency check
• Parallax
• Positive depth in both cameras
• Reprojection error
Local Mapping
27
Keyframe, Feature vector
Covisibility graph
• Local keyframe culling
• To maintain a compact reconstruction, detect redundant keyframes and delete them
• Redundancy check
• A keyframe is redundant if 90% of its map points have
been seen in at least three other keyframes
• Recent map point culling
• To retain the compact map
• Association check (see the sketch below):
• (# frames the matching is found) / (# frames the map point is visible) > 0.25
• Local bundle adjustment
• Local map points
• K_i : the currently processed keyframe
• K_c : all the keyframes connected to it in the covisibility graph
• All the map points seen by those keyframes
Local Mapping
28
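A minimal sketch of the association check above (the function name is illustrative; the 0.25 ratio comes from the slide):

def keep_map_point(n_frames_found, n_frames_visible, min_ratio=0.25):
    """Keep a recent map point only if it was matched in >25% of the frames where it was visible."""
    if n_frames_visible == 0:
        return False
    return (n_frames_found / n_frames_visible) > min_ratio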
• Loop candidates detection
• Compute the similarity between the bag of words vector
• Current keyframe and all its neighbors in the covisibility graph
• Query the recognition DB
• Compute the similarity transformation
• 7 DOF (3 rotations, 3 translations, scale)
• Scale drift over time in a monocular SLAM
• Similarity transform between current keyframe and the loop closure candidate
Loop Closing
29
(Figure: the current keyframe is used to query a keyframe DB containing keyframes 1..N)
https://www.youtube.com/watch?v=v9lGUGRehkM
• Loop fusion
• When the loop closure detection is triggered, the loop fusion is performed.
• Keyframe and map point correction
• Insert loop edges in the covisibility graph
• Current keyframe pose is corrected with the similarity transformation that we obtained
• Fuse duplicated map points
• Essential graph optimization
• For all keyframes and all map points
Loop Closing
30
Essential	graph
Applications
31
• AR
https://www.youtube.com/watch?v=kPwy8yA4CKM
• Video: https://youtu.be/quGhaggn3cQ
KinectFusion
32
KinectFusion
• Dense RGB-D SLAM system
• Only the depth data from Kinect is used
• to track the 3D pose of the sensor
• to reconstruct 3D models of the physical scene in real-time
• GPU-based pipeline
• Application
• Core system for low-cost handheld scanning
• Geometry-aware augmented reality
• Physics-based interactions
• Real-time multi-touch interactions anywhere
33
• Depth to vertex
• Transform each pixel of a new depth image into a 3D point (vertex)
• v_k(u) = D_k(u) * K^{-1} * [u, 1]^T
• Vertex to normal
• Compute the normal vectors for each vertex of a point cloud
• Normals are used in the projective data association step of the ICP algorithm
• n_k(u) = (v_k(x+1, y) − v_k(x, y)) × (v_k(x, y+1) − v_k(x, y))
Depth Map Conversion
34
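A small numpy sketch of the two conversions above (assumes a pinhole intrinsic matrix K and a metric depth image; not the actual KinectFusion GPU code):

import numpy as np

def depth_to_vertices(depth, K):
    """Back-project every depth pixel into a 3D vertex map of shape (H, W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)

def vertices_to_normals(V):
    """Per-pixel normals from the cross product of neighboring vertex differences."""
    dx = V[:, 1:, :] - V[:, :-1, :]
    dy = V[1:, :, :] - V[:-1, :, :]
    n = np.cross(dx[:-1], dy[:, :-1])
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)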
Camera Tracking
35
• Iterative Closest Points (ICP)
• Widely used for geometric alignment of three-dimensional models
• Start with two meshes and an initial guess for their relative rigid-body transform
• Refine the transform by repeatedly generating pairs of corresponding points
• Methods
• Point to Point
• Point to Plane
• …
• Minimizes the perpendicular distance from each source point to the tangent plane at its
destination point
• Nonlinear least square algorithm using Levenberg-Marquardt method
Point-to-Plane ICP
36
K.	Low,	“Linear	Least-squares	Optimization	for	Point-to-plane	ICP	Surface	Registration,”	Chapel	Hill,	Univ.	North	Carolina,	no.	February,	pp.	2–4,	2004.
s_i = (s_ix, s_iy, s_iz, 1)^T : source point
d_i = (d_ix, d_iy, d_iz, 1)^T : destination point
n_i = (n_ix, n_iy, n_iz, 0)^T : unit normal vector at d_i
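A compact sketch of one linearized point-to-plane step (small-angle approximation as in the Low reference above; it solves for [alpha, beta, gamma, tx, ty, tz] in the least-squares sense):

import numpy as np

def point_to_plane_step(src, dst, normals):
    """src, dst, normals: (N, 3) corresponding points and destination normals."""
    A = np.hstack([np.cross(src, normals), normals])  # one (1, 6) Jacobian row per correspondence
    b = np.einsum('ij,ij->i', dst - src, normals)     # signed point-to-plane residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)         # [rotation (alpha, beta, gamma), translation]
    return x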
• Voxel
• A value on a regular grid in 3D space, the extended concept of 2D pixel
• Widely used for realistic rendering 3D object in computer graphics
• Data
• Truncated signed distance function (TSDF) value
• Truncated signed distance function (TSDF) weight
Volumetric Representation
37
• Truncated signed distance function (TSDF)
• Predefined 3D volume is subdivided uniformly into a 3D grid of voxels
• These values are positive in front of the surface and negative behind it
• Zero-crossing point means the surface of the object
Volumetric Representation
38
• TSDF integration
• If a voxel is behind the surface observed in the new depth image,
the image does not contain any new information about it, and the function returns.
• If the voxel is close to or in front of the observed surface,
a corresponding observation is added to the accumulated sum.
Volumetric Integration
39
S. Izadi, D. Kim, O. Hilliges, and … D. M., "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera," in Proceedings of the 24th …, 2011.
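An illustrative per-voxel version of this integration rule (a weighted running average as in KinectFusion; the truncation distance and weight limit are assumed values):

def update_voxel(tsdf, weight, sdf, trunc=0.03, max_weight=128):
    """Fuse a new signed distance observation 'sdf' into one voxel's (tsdf, weight) pair."""
    if sdf < -trunc:                 # voxel far behind the observed surface: no new information
        return tsdf, weight
    d = min(1.0, sdf / trunc)        # truncate and normalize to [-1, 1]
    new_weight = min(weight + 1, max_weight)
    new_tsdf = (tsdf * weight + d) / (weight + 1)
    return new_tsdf, new_weight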
• Motivation
• Depth image is computed from the updated 3D world model given a camera pose
• Input to the tracking step in the next frame and also for visualization.
• Raycasting
• A ray is cast from the camera until an intersection with the surface is found
• Checking the value of the TSDF at each voxel along the ray until a zero-crossing is found
Raycasting
40
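A simplified ray-marching sketch of this idea (tsdf_at is an assumed lookup function; the real pipeline skips empty space much more aggressively):

import numpy as np

def raycast(origin, direction, tsdf_at, t_max=5.0, step=0.01):
    """March along a unit ray until the TSDF changes sign; return the interpolated surface point."""
    prev_t, prev_val = 0.0, tsdf_at(origin)
    t = step
    while t < t_max:
        val = tsdf_at(origin + t * direction)
        if prev_val > 0 and val < 0:  # zero crossing: the surface lies between the two samples
            t_hit = prev_t + step * prev_val / (prev_val - val)
            return origin + t_hit * direction
        prev_t, prev_val = t, val
        t += step
    return None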
• Augmented reality
Application
41
• Video: https://youtu.be/8ezyhTAEyHs
LOAM
42
• Low-drift and real-time lidar odometry and mapping
• Github: https://github.com/laboshinl/loam_velodyne
• Main components
• LiDAR odometry
• Compute the motion of the Lidar between two consecutive sweeps at a frequency of 10Hz
• LiDAR mapping
• Matches and registers the local cloud onto a map at a frequency of 1Hz
LOAM
43
J.	Zhang	and	S.	Singh,	“Low-drift	and	real-time	lidar odometry and	mapping,”	Auton.	Robots,	vol.	41,	no.	2,	pp.	401–416,	2017.
• Feature point extraction
• Edge features
• Planar features
• Curvature
• Smoothness of the local surface
• Sort the points based on the c values
• Classification
• Planar features : from the minimum c
• Edge features: from the maximum c
LiDAR Odometry
44
(Figure: points sorted by c; edge features are taken from the maximum-c end, planar features from the minimum-c end)
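A rough sketch of the smoothness value c for interior points of a single scan line (the window size is an assumption; points with the largest c become edge features, the smallest c planar features):

import numpy as np

def curvature(scan, i, half_window=5):
    """scan: (N, 3) points of one scan line in acquisition order; i: an interior index."""
    neighbors = np.r_[scan[i - half_window:i], scan[i + 1:i + half_window + 1]]
    diff = np.sum(scan[i] - neighbors, axis=0)
    return np.linalg.norm(diff) / (len(neighbors) * np.linalg.norm(scan[i]) + 1e-8)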
• Outlier rejection
• (a) Points on local planar surfaces that are roughly parallel to the laser beams
• their neighborhood forms a surface patch whose normal is within 10 degrees of the laser beam
• (b) Points on the boundary of occluded regions
• their neighborhood is disconnected from the point by a gap in the direction of the laser beam
LiDAR Odometry
45
• Finding feature point correspondence
• For edge features in the current point clouds,
• Find edge lines in the previous point clouds
• Point-to-edge distance
• For planar features in the current point clouds,
• Find planar patches in the previous point clouds
• Point-to-plane distance
LiDAR Odometry
46
• Motion estimation
• Compute residuals according to
• point-to-edge distance
• point-to-plane distance
• Establish Jacobian matrix J
• Levenberg-Marquardt method
• Bisquare weight
LiDAR Odometry
47
(Equation: stacked point-to-edge and point-to-plane residuals form a nonlinear least-squares problem in the transform)
• Pseudo code
LiDAR Odometry
48
(Figure: pseudo code annotated with feature classification, edge-feature and planar-feature correspondence search, and the optimization step)
• Mapping current local point cloud to the map
• Store the map point cloud in a 10 m cubic area around the current position
• Extract feature points in the same way (edge and planar features)
• Find correspondences
• S’: neighboring points in a certain region around the feature points
• Covariance matrix M
• Eigendecomposition
• Edge line: one eigenvalue is significantly larger than the other two
• Planar patch: two large eigenvalues with the third one significantly smaller
LiDAR Mapping
49
(Figure: reconstruction without vs. with Lidar mapping)
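The eigendecomposition test above can be sketched as follows (the ratio thresholds are assumptions, not the values used in LOAM):

import numpy as np

def classify_cluster(points, edge_ratio=3.0, plane_ratio=3.0):
    """Classify a local point cluster as an edge line, a planar patch, or neither."""
    w = np.sort(np.linalg.eigvalsh(np.cov(points.T)))  # eigenvalues ascending: w0 <= w1 <= w2
    if w[2] > edge_ratio * w[1]:
        return "edge"    # one eigenvalue significantly larger than the other two
    if w[1] > plane_ratio * w[0]:
        return "plane"   # two large eigenvalues, the third significantly smaller
    return "none"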
• Visual-lidar Odometry and Mapping: Low-drift, Robust, and Fast
• Authors: Ji Zhang and Sanjiv Singh
• Main components
• Visual odometry
• At a high frame rate (60 Hz) to estimate motion
• Lidar odometry
• At a low frequency (1 Hz) to refine the motion estimate
V-LOAM
50
• Depth map registration
• World map points are projected to the color camera view.
• The rotation and translation of the previous frame are necessary.
• Only points in front of the camera are kept;
points received a certain time ago are forgotten.
• Projected depth map
• Visual feature associated with the distance
Visual Odometry
51
color	image point	cloud	&	visual	features
• Frame to Frame motion estimation
• Known distance case
• Unknown distance case
• Levenberg-Marquardt method
Visual Odometry
52
* In this case, only the depth map at frame (k-1) is always available.
• Sweep to Sweep refinement
• Extract geometric features like edge points and planar points
• Compute curvature in local scans
• Outlier rejection
• Find correspondences
• Motion estimation with the nonlinear optimization
Lidar Odometry
53
• Sweep to Map registration
• Add undistorted point cloud to the map
• Find correspondences
• Examining distributions of local point clusters
• Edge segment: one large and two small eigenvalues
• Planar patch: two large and one small eigenvalues
Lidar Odometry
54
• http://www.kaarta.com
KAARTA
55
• http://www.kaarta.com
KAARTA
56
Recent SLAM
Approaches
57
• Video: https://youtu.be/CrfG4v25B8k
MapLab
58
• maplab: An Open Framework for Research in Visual-inertial Mapping and Localization
• Main contribution
• General purpose visual-inertial mapping framework with multi-session support
• Robust visual-inertial estimator tightly coupled with a localization system
• Extensibility of the system that makes it well suited for research
• Main components
• ROVIOLI: VIO and localization frontend
• Maplab-console
MapLab
59
• Robust visual-inertial odometry with localization integration
• Mapping and localization frontend
• Build maps from raw visual and inertial data
• Localize with respect to existing maps online
• Feature tracking module
• Track BRISK or FREAK keypoints
• Correspondences by matching descriptors from frame to frame
• Frame localization
• 2d-3d correspondences against the provided localization map
• P3P algorithm within a RANSAC scheme
• ROVIO
• Map builder
ROVIOLI
60
• Console user interface to manipulate maps
• Multiple maps can be loaded into the console simultaneously.
• All algorithms are available through console commands.
• Parameters specific to each algorithm are set by console flags
• Real-time visualization of the map in Rviz
• It is possible to combine multiple algorithms and experiment with entire pipelines.
• Changes can be easily reverted by saving and reloading intermediate states
MapLab Console
61
https://github.com/ethz-asl/maplab/wiki
• VI-map: Visual-inertial mapping data
• Raw measurements of all sensors
• Sparse reconstruction of the covered environment
• Mission (submap)
• Vertex: state captured at a certain point in time
• Edge: connects two neighboring vertices
• IMU edge
• Loop-closure edge
Map Structure
62
• For many robotic applications
• Very important to have access to drift-free global pose estimates
• Example
• Robotic manipulation
• precise navigation
• Teach & repeat scenarios
• ROVIOLI to create an initial VI-map
• Sensor data from
• Offline in a Rosbag
• Online using ROS topics
• VI-map is automatically
• loop-closed
• Optimized
• Keyframed
• Summarized
Online Mapping
63
• In many mapping applications,
• It is not possible to cover the entire environment within a single mapping session
• Maplab offers tools to co-register maps from multiple sessions together
Multi-session Mapping
64
• Large-scale, multi-session map of the old town of Zurich
• Raw visual-inertial data recorded in 45 sessions on two different days
• Initial open-loop maps built on Google Tango tablets with ROVIOLI
• Globally consistent map built on a desktop with 32 GB RAM
• Total duration: 231 min
• Total length: 16.48 km
• Map size: 480 MB
Large-Scale Mapping
65
• A dense 3D representation of the environment is required for
• path planning
• inspection and object detection
• Collision, obstacle avoidance
• Stereo dense reconstruction
• semi-global stereo matching
• TSDF-based depth fusion
• Voxblox (volumetric mapping library)
Dense Reconstruction
66
• Most of the basic tools needed in visual-inertial mapping and localization
• A rich set of helper functions, queries, and manipulation tools
• Easy integration of new algorithms into the system
• Extensive examples demonstrating how to extend the framework
• Related research
Using MapLab for Research
67
Topomap: Topological Mapping and
Navigation Based on Visual SLAM Map
Collaborative	Navigation	
for	Flying	and	Walking	Robots
Real-time	visual-inertial	mapping,	
re-localization	and	planning	onboard	
mavs in	unknown	environments
Appearance-based	landmark	selection
for	efficient	long-term	visual	localization
Keep	it	brief:	Scalable	creation	of	
compressed	localization	maps
The	gist	of	maps-summarizing	
experience	for	lifelong	localization
Real-time	visual-inertial	localization	
for	aerial	and	ground	robots
Map	API-scalable	decentralized	
map	building	for	robots
https://github.com/ethz-asl/maplab/wiki/Related-Research
• Video: https://www.youtube.com/watch?v=gmfKzTIKyww
InfiniTam
68
• Real-time, large-scale depth fusion and tracking framework
• URL: https://github.com/victorprad/InfiniTAM
• Main components
• Tracking
• Allocation
• Integration
• Raycasting
• Relocalisation & Loop Closure Detection
• Data structure
• Volumetric representation
• Voxel block hashing
InfiniTam
69
V.	A.	Prisacariu et	al.,	“InfiniTAM v3:	A	Framework	for	Large-Scale	3D	Reconstruction	with	Loop	Closure,”	2017.
• Volumetric representation using a hash lookup
• Data structures and operations
• Voxel
• Voxel block hash
• Hash table and hashing function
• Hash table operations
• Voxel
• A value on a regular grid in 3D space, the extended concept of 2D pixel
• Widely used for realistic rendering 3D object in computer graphics
• Data
• TSDF value
• TSDF Weight
• Color value
• Color weight
Volumetric Representation
70
• Voxel block array
• Majority of data stored in the regular voxel grid is marked either as
• Free space
• Unobserved space
• Only store the surface data by efficient hashing scheme
• Grouped voxels in blocks of predefined size (ex. 8x8x8)
• Data
• Positions of the corner of the 8x8x8 voxel block
• Offset in the excess list
• Pointer to the voxel block array
Volumetric Representation
71
M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, "Real-time 3D reconstruction at scale using voxel hashing," ACM Trans. Graph., vol. 32, no. 6, pp. 1–11, 2013.
• Hash table
• To quickly and efficiently find the position of a certain voxel block in the voxel block array
• Hashing function
• Locates entries of the hash table; it takes the corner coordinates of a 3D voxel block as input
• Hash collision case
• Use the additional unordered excess list
• Store an offset in the voxel block array
Volumetric Representation
72
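A sketch of the spatial hashing function (the three large primes follow the voxel hashing literature cited above; the bucket count is an assumption):

P1, P2, P3 = 73856093, 19349669, 83492791

def block_hash(bx, by, bz, n_buckets=2**20):
    """Hash the integer corner coordinates of a voxel block to a hash-table bucket index."""
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % n_buckets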
• To determine the pose of a new camera frame given the 3D world model
• Diverse methods
• Using only the depth
• Inspired by Point-to-Plane ICP
• Using only the color
• Inspired by the direct method
• Using both data
• Utilize both approaches
• Main differences of the extended tracker
• Huber-norm instead of the standard L2 norm
• Error term weighted by its depth measurement
Tracking
73
(Figure: least-squares vs. Huber cost)
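For intuition, a tiny sketch of the Huber reweighting used in place of the plain L2 norm (delta is an assumed tuning parameter):

def huber_weight(residual, delta=0.1):
    """Quadratic cost near zero, linear for large residuals (expressed as an IRLS weight)."""
    a = abs(residual)
    return 1.0 if a <= delta else delta / a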
• Three main stages in allocation
• 1) backproject a line connecting 𝑑 − 𝜇 to 𝑑 + 𝜇
• 𝑑	: depth in image coordinates
• 𝜇: a fixed, tunable parameter
• This leads to a line in world coordinates, which intersects a number of voxel blocks.
• Search the hash table for each of these blocks and look for a free hash entry
• 2) allocate voxel blocks for each non zero entry in the allocation and visibility arrays
• 3) build a list of live hash entries
Allocation
74
(Figure: the back-projected segment from d−μ to d+μ intersecting the virtual voxel block grid)
• TSDF integration
• If a voxel is behind the surface observed in the new depth image,
the image does not contain any new information about it, and the function returns.
• If the voxel is close to or in front of the observed surface,
a corresponding observation is added to the accumulated sum.
Integration
75
S. Izadi, D. Kim, O. Hilliges, and … D. M., "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera," in Proceedings of the 24th …, 2011.
• Motivation
• Depth image is computed from the updated 3D world model given a camera pose
• Input to the tracking step in the next frame and also for visualization.
• Raycasting
• A ray is cast from the camera until an intersection with the surface is found
• Checking the value of the TSDF at each voxel along the ray until a zero-crossing is found
• State machine to efficiently handle the sparse volumetric space
• SEARCH_BLOCK_COARSE
• SEARCH_BLOCK_FINE
• SEARCH_SURFACE
• BEHIND_SURFACE
Raycasting
76
• Keyframe-based random ferns relocaliser
• To relocalise the camera when tracking fails
• To detect loop closures when aiming to construct a globally-consistent scene
• Procedure
• Downsample and preprocess image
• Each of m code blocks is obtained by applying a random fern to I
• Fern is a set of n binary feature tests on the image, each yielding either 0 or 1
• b^I_{F_k} ∈ B^n denotes the n-bit binary code resulting from applying fern F_k to I
• b^I ∈ B^{mn} denotes the result of concatenating all m such binary codes for I
• Dissimilarity measure between two different images I and J
as the block-wise Hamming distance between b^I and b^J:
• BlockHD(b^I, b^J) = (1/m) Σ_{k=1..m} [ b^I_{F_k} ≠ b^J_{F_k} ],
where each term is 0 if the two code blocks are identical, and 1 otherwise
Relocalisation & Loop Closure Detection
77
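A minimal sketch of the random-fern code and the block-wise Hamming dissimilarity above (function names and parameters are illustrative, not the InfiniTAM implementation):

import numpy as np

def make_ferns(m, n, height, width, rng):
    """m ferns, each a list of n binary pixel tests (pixel location, threshold)."""
    return [[(rng.integers(height), rng.integers(width), rng.random()) for _ in range(n)]
            for _ in range(m)]

def encode(image, ferns):
    """Apply every fern to a normalized image; returns an (m, n) binary code."""
    return np.array([[image[y, x] > th for (y, x, th) in fern] for fern in ferns])

def block_dissimilarity(code_i, code_j):
    """Fraction of ferns whose n-bit blocks differ: 0 if identical, 1 if every block differs."""
    return np.mean(np.any(code_i != code_j, axis=1))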
• Submap based approach
• Division of the scene into multiple rigid submaps
• Active submaps: tracked against at each frame
• Passive submaps: maintained, but not tracked against unless they become active again
Globally-Consistent Reconstruction
78
N. Fioraio, J. Taylor, A. Fitzgibbon, L. Di Stefano, and S. Izadi, "Large-scale and drift-free surface reconstruction using online subvolume registration," 2015 IEEE Conf. Comput. Vis. Pattern Recognit., pp. 4475–4483, Jan. 2015.
• Graph representation
• A numerical solution of the equation can be obtained with the popular Gauss-Newton method
Pose Graph Optimization
79
(Figure: pose graph with edges e_ij between nodes; each edge compares a measurement with the current estimate)
G.	Grisetti,	R.	Kummerle,	C.	Stachniss,	and	W.	Burgard,	“A	tutorial	on	graph-based	SLAM,”	IEEE	Intell.	Transp.	Syst.	Mag.,	vol.	2,	no.	4,	pp.	31–43,	2010.
• AR
Applications
80
https://www.youtube.com/watch?v=gmfKzTIKyww
SegMap
81
https://www.youtube.com/watch?v=cHfs3HLzc2Y
• Incremental method for localization in 3D point clouds based on segment matching
• URL: https://github.com/ethz-asl/segmap
• Main components
• Front-end
• Sequential factors
• Place recognition factors
• Back-end
• Pose graph optimization
SegMap
82
• The front-end is responsible for
• Converting sensor measurements into sequential factors
• Segmentation and description for place recognition factors
• Sequential factors
• Odometry factors
• Displacement between consecutive robot poses from IMU data
• Scan-matching factors
• Registering the current scan against a local map by point-to-plane ICP
• Place recognition factors
• SegMatch: segment based loop-closure for 3D point clouds
Front-End
83
• Place recognition factors
• SegMatch: segment based loop-closure for 3D point clouds
• Four different modules
• Point cloud segmentation
• Descriptor extraction
• Segment matching
• Geometric verification
Front-End
84
• Point cloud segmentation
• Incremental region growing policy
• CanGrowTo
• If growing from a seed to a neighbor is allowed
• Check the angle between seed and candidate normals
• LinkClusters
• Link clusters if they have the same cluster id
• CanBeSeed
• If a point can be used as seed
• Check the curvature at a point
SegMatch
85
Region	growing Cluster	merging
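A hypothetical sketch of the two growing checks above (the angle and curvature thresholds are assumptions):

import numpy as np

def can_grow_to(seed_normal, candidate_normal, max_angle_deg=15.0):
    """Allow growth only if the seed and candidate normals are nearly parallel."""
    return abs(np.dot(seed_normal, candidate_normal)) >= np.cos(np.radians(max_angle_deg))

def can_be_seed(curvature, max_curvature=0.05):
    """A point may seed a new region only if its local curvature is low."""
    return curvature <= max_curvature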
• Descriptor extraction
• For compressing the raw segment data and build object signatures
• Diverse segment descriptors
• Eigenvalue based
• Eigenvalues for the segment are computed and combined in a 7 dim feature vector
• Linearity, planarity, scattering, omnivariance, anisotropy, eigenentropy, change of curvature
SegMatch
86
M.	Weinmann,	B.	Jutzi,	and	C.	Mallet,	“Semantic	3D	scene	interpretation:	A	framework	combining	optimal	neighborhood	size	selection	with	relevant	fea
tures,”	ISPRS	Ann.	Photogramm.	Remote	Sens.	Spat.	Inf.	Sci.,	vol.	II-3,	no.	September,	pp.	181–188,	2014.
Eigenvalue-based	3D	features
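A sketch of the 7-dimensional eigenvalue-based descriptor following the Weinmann et al. definitions (computed from the normalized eigenvalues e1 >= e2 >= e3 of the segment covariance):

import numpy as np

def eigenvalue_descriptor(points):
    """points: (N, 3) segment point cloud; returns the 7-dim feature vector."""
    e = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]  # e1 >= e2 >= e3
    e = np.clip(e, 1e-12, None)
    e = e / e.sum()
    e1, e2, e3 = e
    return np.array([
        (e1 - e2) / e1,           # linearity
        (e2 - e3) / e1,           # planarity
        e3 / e1,                  # scattering
        (e1 * e2 * e3) ** (1/3),  # omnivariance
        (e1 - e3) / e1,           # anisotropy
        -np.sum(e * np.log(e)),   # eigenentropy
        e3,                       # change of curvature (e3 over the eigenvalue sum)
    ])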
• Diverse segment descriptors
• Auto-encoder based
• Input: 3D binary voxel grid of fixed dimension 32x32x16
• Descriptor extractor part
• 3D convolutional layers with max pool layers and two Fully connected layers
• Rectified linear activation function for all layers
• Output: 64x1 descriptor
• Reconstruction part
• One fully connected layer and three deconvolutional layers with a final sigmoid output
• Output: reconstructed 3D binary voxel grid
• Loss
• Classification loss: softmax cross entropy loss
• Reconstruction loss: binary cross entropy loss
SegMatch
87
R.	Dubé,	A.	Cramariuc,	D.	Dugas,	J.	Nieto,	R.	Siegwart,	and	C.	Cadena,	“SegMap:	3D	Segment	Mapping	using	Data-Driven	Descriptors,”	2018.
• Geometric verification
• Consistency graph G=(V,E)
• Vertex V = {c_i} : the set of correspondences c_i
• Edge E = {e_ij} : the set of undirected edges e_ij connecting all consistent pairs of correspondences (c_i, c_j)
• Geometrically consistent
• If the difference of the Euclidean distances between the segment centroids in the local map and in the target map is less than a threshold: |d_l(c_i, c_j) − d_t(c_i, c_j)| ≤ ε
• Identifying a maximum geometrically consistent set == finding a maximum clique of G
SegMatch
88
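The pairwise consistency test can be sketched directly from the inequality above (epsilon in meters is an assumption):

import numpy as np

def consistent(c_i, c_j, eps=0.4):
    """c = (centroid_in_local_map, centroid_in_target_map); both are 3D numpy arrays."""
    d_local = np.linalg.norm(c_i[0] - c_j[0])
    d_target = np.linalg.norm(c_i[1] - c_j[1])
    return abs(d_local - d_target) <= eps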
• Illustration
Consistency Graph
89
(Figure: correspondences c1–c12 between segments in the local map and the target map; inconsistent correspondences are pruned until only the maximum geometrically consistent set remains)
Back-End
• Pose graph optimization by factor graphs (iSAM library)
• Factor graphs
• Graphical models that are well suited to modeling complex estimation problems
• Variables
• Unknown random variables in the estimation problem
• Robot poses
• Factors
• Probabilistic information on those variables, derived from measurements or prior knowledge
• Odometry, loop closure constraints
90
Exercise
91
• Ubuntu Linux 16.04
• ROS Kinetic
• wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && chmod 755 ./install_ros_kinetic.sh && bash ./install_ros_kinetic.sh
• Docker
• https://docs.docker.com/install/linux/docker-ce/ubuntu/
• Download files for exercise
• https://www.dropbox.com/sh/tpjh6syh1fx3wxv/AAD2FmTaTufVADv8uGVCU4gAa?dl=0
• all-in-one2.tar (approx. 15 GB)
• Docker load
• sudo docker load < all-in-one.tar
• Docker run
• sudo docker run -it --rm -p 6080:80 all-in-one
• This takes some time
• Open 127.0.0.1:6080 in a web browser
Requirements
92
• cd ~/Desktop/ORB_SLAM2/
• ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM3.yaml ./Dataset/rgbd_dataset_freiburg3_teddy/
• ./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml ./Dataset/rgbd_dataset_freiburg3_teddy/ ./Dataset/rgbd_dataset_freiburg3_teddy/associations.txt
ORB SLAM
93
LOAM
94
• cd ~/Desktop/LOAM
• source ~/catkin_ws/devel/setup.bash
• roslaunch loam_velodyne loam_velodyne.launch
• rosbag play --clock nsh_indoor_outdoor.bag
• cd ~/Desktop/InfiniTAM/InfiniTAM/build/Apps/InfiniTAM
• ./InfiniTAM Teddy/calib.txt Teddy/Frames/%04i.ppm Teddy/Frames/%04i.pgm
InfiniTam
95
SegMap
96
• Video: https://youtu.be/NOWdWDGeQ2o
• Install
• sudo pip3 install open3d-python
• Download examples
• git clone https://github.com/IntelVCL/Open3D.git
• Depth to Point cloud
• cd ./examples/Python/Basic && python3 rgbd_tum.py
• Vertex normal estimation
• cd ./examples/Python/Basic && python3 pointcloud.py
• Point to plane ICP
• cd ./examples/Python/Basic && python3 icp_registration.py
• cd ./examples/Python/Advanced && python3 non_blocking_visualization.py
• TSDF volume integration
• cd ./examples/Python/Advanced && python3 rgbd_integration.py
KinectFusion
97
• On the host
• Install
• sudo apt-get install ros-kinetic-rtabmap-ros
• Single-session mapping
• roslaunch rtabmap_ros demo_robot_mapping.launch
• rosbag play --clock demo_mapping.bag
• Demo on Google Tango device
RTAB-MAP
98
Thank you
99

More Related Content

What's hot

입문 Visual SLAM 14강 - 2장 Introduction to slam
입문 Visual SLAM 14강  - 2장 Introduction to slam입문 Visual SLAM 14강  - 2장 Introduction to slam
입문 Visual SLAM 14강 - 2장 Introduction to slamjdo
 
FastCampus 2018 SLAM Workshop
FastCampus 2018 SLAM WorkshopFastCampus 2018 SLAM Workshop
FastCampus 2018 SLAM WorkshopDong-Won Shin
 
LiDAR-based Autonomous Driving III (by Deep Learning)
LiDAR-based Autonomous Driving III (by Deep Learning)LiDAR-based Autonomous Driving III (by Deep Learning)
LiDAR-based Autonomous Driving III (by Deep Learning)Yu Huang
 
[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection
[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection
[PR12] You Only Look Once (YOLO): Unified Real-Time Object DetectionTaegyun Jeon
 
Lecture 10 mobile robot design
Lecture 10 mobile robot designLecture 10 mobile robot design
Lecture 10 mobile robot designVajira Thambawita
 
Lidar for Autonomous Driving II (via Deep Learning)
Lidar for Autonomous Driving II (via Deep Learning)Lidar for Autonomous Driving II (via Deep Learning)
Lidar for Autonomous Driving II (via Deep Learning)Yu Huang
 
Multi Robot Swarm Systems
Multi Robot Swarm SystemsMulti Robot Swarm Systems
Multi Robot Swarm Systemsrm93
 
Faster R-CNN: Towards real-time object detection with region proposal network...
Faster R-CNN: Towards real-time object detection with region proposal network...Faster R-CNN: Towards real-time object detection with region proposal network...
Faster R-CNN: Towards real-time object detection with region proposal network...Universitat Politècnica de Catalunya
 
Lecture 12 localization and navigation
Lecture 12 localization and navigationLecture 12 localization and navigation
Lecture 12 localization and navigationVajira Thambawita
 
Multi sensor calibration by deep learning
Multi sensor calibration by deep learningMulti sensor calibration by deep learning
Multi sensor calibration by deep learningYu Huang
 
Introduction to Mobile Robotics
Introduction to Mobile RoboticsIntroduction to Mobile Robotics
Introduction to Mobile RoboticsRobots Alive India
 
確率ロボティクス第二回
確率ロボティクス第二回確率ロボティクス第二回
確率ロボティクス第二回Ryuichi Ueda
 
fusion of Camera and lidar for autonomous driving II
fusion of Camera and lidar for autonomous driving IIfusion of Camera and lidar for autonomous driving II
fusion of Camera and lidar for autonomous driving IIYu Huang
 
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向SSII
 
Deep learning based object detection basics
Deep learning based object detection basicsDeep learning based object detection basics
Deep learning based object detection basicsBrodmann17
 

What's hot (20)

입문 Visual SLAM 14강 - 2장 Introduction to slam
입문 Visual SLAM 14강  - 2장 Introduction to slam입문 Visual SLAM 14강  - 2장 Introduction to slam
입문 Visual SLAM 14강 - 2장 Introduction to slam
 
FastCampus 2018 SLAM Workshop
FastCampus 2018 SLAM WorkshopFastCampus 2018 SLAM Workshop
FastCampus 2018 SLAM Workshop
 
=SLAM ppt.pdf
=SLAM ppt.pdf=SLAM ppt.pdf
=SLAM ppt.pdf
 
LiDAR-based Autonomous Driving III (by Deep Learning)
LiDAR-based Autonomous Driving III (by Deep Learning)LiDAR-based Autonomous Driving III (by Deep Learning)
LiDAR-based Autonomous Driving III (by Deep Learning)
 
[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection
[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection
[PR12] You Only Look Once (YOLO): Unified Real-Time Object Detection
 
Lecture 10 mobile robot design
Lecture 10 mobile robot designLecture 10 mobile robot design
Lecture 10 mobile robot design
 
Moving object detection
Moving object detectionMoving object detection
Moving object detection
 
Lidar for Autonomous Driving II (via Deep Learning)
Lidar for Autonomous Driving II (via Deep Learning)Lidar for Autonomous Driving II (via Deep Learning)
Lidar for Autonomous Driving II (via Deep Learning)
 
Multi Robot Swarm Systems
Multi Robot Swarm SystemsMulti Robot Swarm Systems
Multi Robot Swarm Systems
 
Faster R-CNN: Towards real-time object detection with region proposal network...
Faster R-CNN: Towards real-time object detection with region proposal network...Faster R-CNN: Towards real-time object detection with region proposal network...
Faster R-CNN: Towards real-time object detection with region proposal network...
 
Lecture 12 localization and navigation
Lecture 12 localization and navigationLecture 12 localization and navigation
Lecture 12 localization and navigation
 
Multi sensor calibration by deep learning
Multi sensor calibration by deep learningMulti sensor calibration by deep learning
Multi sensor calibration by deep learning
 
Swarm robotics ppt
Swarm robotics pptSwarm robotics ppt
Swarm robotics ppt
 
Introduction to Mobile Robotics
Introduction to Mobile RoboticsIntroduction to Mobile Robotics
Introduction to Mobile Robotics
 
確率ロボティクス第二回
確率ロボティクス第二回確率ロボティクス第二回
確率ロボティクス第二回
 
fusion of Camera and lidar for autonomous driving II
fusion of Camera and lidar for autonomous driving IIfusion of Camera and lidar for autonomous driving II
fusion of Camera and lidar for autonomous driving II
 
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向
SSII2019企画: 画像および LiDAR を用いた自動走行に関する動向
 
Aerial Robotics
Aerial RoboticsAerial Robotics
Aerial Robotics
 
swarm robotics
swarm roboticsswarm robotics
swarm robotics
 
Deep learning based object detection basics
Deep learning based object detection basicsDeep learning based object detection basics
Deep learning based object detection basics
 

Similar to Introductory Level of SLAM Seminar

Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...
Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...
Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...Aritra Sarkar
 
Using Deep Learning to Derive 3D Cities from Satellite Imagery
Using Deep Learning to Derive 3D Cities from Satellite ImageryUsing Deep Learning to Derive 3D Cities from Satellite Imagery
Using Deep Learning to Derive 3D Cities from Satellite ImageryAstraea, Inc.
 
2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR InteractionMark Billinghurst
 
Multiple UGV SLAM Map Sharing
Multiple UGV SLAM Map SharingMultiple UGV SLAM Map Sharing
Multiple UGV SLAM Map SharingAkash Borate
 
Efficient architecture to condensate visual information driven by attention ...
Efficient architecture to condensate visual information driven by attention ...Efficient architecture to condensate visual information driven by attention ...
Efficient architecture to condensate visual information driven by attention ...Sara Granados Cabeza
 
SEU presentatie Greet deruyter
SEU presentatie Greet deruyterSEU presentatie Greet deruyter
SEU presentatie Greet deruyterUGent-amrp
 
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...Aritra Sarkar
 
Geoscience For Gis A
Geoscience For Gis AGeoscience For Gis A
Geoscience For Gis AAndrew Zolnai
 
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...Dongmin Lee
 
Zupt, LLC's SLAM and Optimal Sensor fusion
Zupt, LLC's SLAM and Optimal Sensor fusionZupt, LLC's SLAM and Optimal Sensor fusion
Zupt, LLC's SLAM and Optimal Sensor fusionRobert Flaming, PCM®
 
Analysis of KinectFusion
Analysis of KinectFusionAnalysis of KinectFusion
Analysis of KinectFusionDong-Won Shin
 
06 e-flightplanning
06 e-flightplanning06 e-flightplanning
06 e-flightplanningRehmat Kaur
 
Processing Large Graphs
Processing Large GraphsProcessing Large Graphs
Processing Large GraphsNishant Gandhi
 
Lecture 01 frank dellaert - 3 d reconstruction and mapping: a factor graph ...
Lecture 01   frank dellaert - 3 d reconstruction and mapping: a factor graph ...Lecture 01   frank dellaert - 3 d reconstruction and mapping: a factor graph ...
Lecture 01 frank dellaert - 3 d reconstruction and mapping: a factor graph ...mustafa sarac
 
Gis capabilities on Big Data Systems
Gis capabilities on Big Data SystemsGis capabilities on Big Data Systems
Gis capabilities on Big Data SystemsAhmad Jawwad
 
NetVLAD: CNN architecture for weakly supervised place recognition
NetVLAD:  CNN architecture for weakly supervised place recognitionNetVLAD:  CNN architecture for weakly supervised place recognition
NetVLAD: CNN architecture for weakly supervised place recognitionGeunhee Cho
 

Similar to Introductory Level of SLAM Seminar (20)

Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...
Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...
Elevation mapping using stereo vision enabled heterogeneous multi-agent robot...
 
Using Deep Learning to Derive 3D Cities from Satellite Imagery
Using Deep Learning to Derive 3D Cities from Satellite ImageryUsing Deep Learning to Derive 3D Cities from Satellite Imagery
Using Deep Learning to Derive 3D Cities from Satellite Imagery
 
2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction
 
Multiple UGV SLAM Map Sharing
Multiple UGV SLAM Map SharingMultiple UGV SLAM Map Sharing
Multiple UGV SLAM Map Sharing
 
Efficient architecture to condensate visual information driven by attention ...
Efficient architecture to condensate visual information driven by attention ...Efficient architecture to condensate visual information driven by attention ...
Efficient architecture to condensate visual information driven by attention ...
 
SEU presentatie Greet deruyter
SEU presentatie Greet deruyterSEU presentatie Greet deruyter
SEU presentatie Greet deruyter
 
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
 
Geoscience For Gis A
Geoscience For Gis AGeoscience For Gis A
Geoscience For Gis A
 
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...
PRM-RL: Long-range Robotics Navigation Tasks by Combining Reinforcement Learn...
 
TARDEC Presentation 2
TARDEC Presentation 2TARDEC Presentation 2
TARDEC Presentation 2
 
Zupt, LLC's SLAM and Optimal Sensor fusion
Zupt, LLC's SLAM and Optimal Sensor fusionZupt, LLC's SLAM and Optimal Sensor fusion
Zupt, LLC's SLAM and Optimal Sensor fusion
 
Analysis of KinectFusion
Analysis of KinectFusionAnalysis of KinectFusion
Analysis of KinectFusion
 
06 e-flightplanning
06 e-flightplanning06 e-flightplanning
06 e-flightplanning
 
Spark Technology Center IBM
Spark Technology Center IBMSpark Technology Center IBM
Spark Technology Center IBM
 
20210226 esa-science-coffee-v2.0
20210226 esa-science-coffee-v2.020210226 esa-science-coffee-v2.0
20210226 esa-science-coffee-v2.0
 
Processing Large Graphs
Processing Large GraphsProcessing Large Graphs
Processing Large Graphs
 
Lecture 01 frank dellaert - 3 d reconstruction and mapping: a factor graph ...
Lecture 01   frank dellaert - 3 d reconstruction and mapping: a factor graph ...Lecture 01   frank dellaert - 3 d reconstruction and mapping: a factor graph ...
Lecture 01 frank dellaert - 3 d reconstruction and mapping: a factor graph ...
 
AR/SLAM for end-users
AR/SLAM for end-usersAR/SLAM for end-users
AR/SLAM for end-users
 
Gis capabilities on Big Data Systems
Gis capabilities on Big Data SystemsGis capabilities on Big Data Systems
Gis capabilities on Big Data Systems
 
NetVLAD: CNN architecture for weakly supervised place recognition
NetVLAD:  CNN architecture for weakly supervised place recognitionNetVLAD:  CNN architecture for weakly supervised place recognition
NetVLAD: CNN architecture for weakly supervised place recognition
 

Recently uploaded

Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bSérgio Sacani
 
Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.Silpa
 
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.Nitya salvi
 
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICESAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICEayushi9330
 
Presentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptxPresentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptxgindu3009
 
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Alandi Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptx
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptxSCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptx
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptxRizalinePalanog2
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticsPulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticssakshisoni2385
 
Botany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfBotany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfSumit Kumar yadav
 
Forensic Biology & Its biological significance.pdf
Forensic Biology & Its biological significance.pdfForensic Biology & Its biological significance.pdf
Forensic Biology & Its biological significance.pdfrohankumarsinghrore1
 
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...ssuser79fe74
 
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencyHire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencySheetal Arora
 
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)Joonhun Lee
 
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑Damini Dixit
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Lokesh Kothari
 
GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)Areesha Ahmad
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Sérgio Sacani
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPirithiRaju
 
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts ServiceJustdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Servicemonikaservice1
 

Recently uploaded (20)

Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.
 
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
 
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICESAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICE
 
Presentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptxPresentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptx
 
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Alandi Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Alandi Call Me 7737669865 Budget Friendly No Advance Booking
 
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptx
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptxSCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptx
SCIENCE-4-QUARTER4-WEEK-4-PPT-1 (1).pptx
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticsPulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
 
Clean In Place(CIP).pptx .
Clean In Place(CIP).pptx                 .Clean In Place(CIP).pptx                 .
Clean In Place(CIP).pptx .
 
Botany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfBotany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdf
 
Forensic Biology & Its biological significance.pdf
Forensic Biology & Its biological significance.pdfForensic Biology & Its biological significance.pdf
Forensic Biology & Its biological significance.pdf
 
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
 
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencyHire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
 
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
 
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑
High Profile 🔝 8250077686 📞 Call Girls Service in GTB Nagar🍑
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
 
GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts ServiceJustdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
 

Introductory Level of SLAM Seminar

  • 1. Introductory Level SLAM Seminar Jan. 18, 2018 Dong-Won Shin
  • 2. • Dong-Won Shin • Gwangju Institute of Science and Technology • PhD candidate in Computer Science Field • SLAM Research Group KR manager • (blog) dongwonshin.net • (github) github.com/JustWon • (E-mail) dongwonshin@gist.ac.kr Speaker 2
  • 3. • Introduction to SLAM • Traditional SLAM approaches • Recent SLAM approaches • Exercise Contents 3
  • 5. • What is SLAM? • Computational problem of constructing a map of an environment while simultaneously keeping track of a robot’s location • Application Simultaneous Localization and Mapping Augmented reality Virtual reality Robotics https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping 5
  • 6. • Visual localization • Under the inaccurate GPS • GPS-denied environment • Ex) • Mapping • Scenarios in which a prior map is not available and needs to be built. • Map can inform path planning or provide an intuitive visualization for a human or robot. • Ex) Simultaneous Localization and Mapping 6 C. Cadena et al., “Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age,” IEEE Trans. Robot., vol. 32, no. 6, pp. 1309–1332, 2016. Indoor environment Skyscraper Disaster areaPrivate room
  • 7. • Bayesian Filtering based SLAM • Theoretical fundamental and prototype of traditional Bayesian filtering based SLAM framework emerged in 1900s. • Ex) EKF SLAM, FastSLAM • Visual odometry • The process of estimating the ego-motion of a robot using only the input of a single or multiple cameras attached to it. • Ex) stereo VO, monocular VO • Structure from motion • Investigating the problem of recovering relative camera poses and 3D structure from a set of camera images • Off-line version of visual SLAM Earlier Inspirations 7
  • 8. • The solution to the large scale map management • 1) graph based slam and loop closure detection • 2) efficient map representation and refinement: sparse, dense, and semi-dense • Graph based SLAM • constructing a graph whose nodes represent robot poses or landmarks • edge between nodes encodes a sensor measurement that constrains connected poses • Loop closure detection • Detecting loop closures in a map to give additional constraint for the consistent mapping Effort towards Large Scale Mapping 8
  • 9. • Sensor input • Visual odometry • Backend optimization • Loop closure detection • Mapping 전통적인 Visual SLAM 프레임워크 9 Sensor input Visual odometry Backend optimization Mapping Loop closing
  • 10. • Frontend • 인접한 이미지 사이의 카메라 움직임 • 카메라의 동작에 대한 정량적인 측정이 필요 • 회전 및 이동 • 카메라와 3차원 공간 점 사이의 기하학적 관계를 이해 • Drift error • Visual odometry만을 이용하여 궤적을 추정하면 에러가 누적됨 • 해결책 • Backend optimization과 loop closure detection의 필요 Visual Odometry 10
  • 11. • Sensor noise • Even precise sensors have some noise • Cheap sensors have larger measurement errors • Some sensors are affected by magnetic fields and temperature • Backend optimization • Estimating the state of the whole system from noisy data (state estimation) • Removes the drift error produced by the frontend • The frontend provides the backend with the data to optimize and their initial values • Types of backend optimization • Filter based (Kalman filter, particle filter) • Nonlinear optimization based (bundle adjustment, pose graph optimization) Backend Optimization 11
  • 12. • Decide whether the current location has been visited before • Using QR codes • Environmental constraints exist • Using camera images • Judge the similarity between images • The backend receives the loop closure information and reflects it in localization and mapping • Removes drift error and produces a globally consistent map Loop Closure Detection 12 T. Sattler et al., “Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions,” 2017.
  • 13. • The process of building a map of the environment • Implemented differently depending on the application • Robot vacuum cleaner • A 2D map is sufficient • Autonomous drone • 6DOF motion, so a 3D map is needed • Map representation • Pointcloud • Voxel • Surfel • Occupancy grid map • … • Types by map density • Sparse, dense, semi-dense map Mapping 13
  • 14. • Sparse SLAM • Only use a small selected subset of the pixels (features) from a monocular color camera • Fast and real time on CPU but it produces a sparse map (point clouds) • Landmark-based or feature-based representations • ORB SLAM • Complete SLAM system for monocular camera • Real-time on standard CPUs in a wide variety of environments • small hand-held indoors • drones flying in industrial environments • cars driving around a city Modern State of the Art Systems 14
  • 15. • Dense SLAM • Use most or all of the pixels in each received frame • Or use depth images from a depth camera • It produces a dense map but GPU acceleration is necessary for the real-time operation. • Volumetric model or surfel-based representations • KinectFusion • Modelling of natural scenes with only commodity sensor and GPU • Real-time markerless tracking • Dense surface reconstruction Modern State of the Art Systems 15
  • 16. • Lidar SLAM • Make use of the Lidar sensor input for the localization and mapping • Autonomous driving purpose-oriented in outdoor environment • LOAM • Very low drift error using the edge and planar features • Low computation complexity Modern State of the Art Systems 16
  • 17. Investments in SLAM Field 17 SLAMcore Scape Technology https://www.mmcventures.com/mmc-ventures-backs-slamcore-in-5-million-seed-funding-2/ https://techcrunch.com/2019/01/08/scape/
  • 19. ORB SLAM 19 • Video: https://www.youtube.com/watch?v=ufvPS5wJAx0
  • 20. • Feature-based monocular SLAM system • operates in real time, in small and large, indoor and outdoor environments • URL: https://github.com/raulmur/ORB_SLAM2 • Main components • Tracking • Local mapping • Loop closing • Data structures • Map points • Keyframes • Covisibility graph • Essential graph ORB SLAM 20 Map points & keyframes Covisibility graph Essential graph R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: A Versatile and Accurate Monocular SLAM System,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, 2015.
  • 21. • Map points p_i • 3D position X_{w,i} • Viewing direction n_i • Representative ORB descriptor D_i • Keyframes K_i • Camera pose T_{iw} • Camera intrinsics (focal length and principal point) • All the ORB features extracted in the frame Data structures 21 Map points & keyframes
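A rough sketch of these two data structures in Python; the field names mirror the slide and are illustrative, not the actual ORB-SLAM code:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MapPoint:
    X_w: np.ndarray          # 3D position X_{w,i} in world coordinates, shape (3,)
    n: np.ndarray            # mean viewing direction n_i, shape (3,)
    descriptor: np.ndarray   # representative ORB descriptor D_i (32 bytes)

@dataclass
class KeyFrame:
    T_iw: np.ndarray                                   # 4x4 camera pose (world -> camera)
    K: np.ndarray                                      # 3x3 intrinsics (focal length, principal point)
    keypoints: list = field(default_factory=list)      # all ORB keypoints extracted in the frame
    descriptors: np.ndarray = None                     # ORB descriptors, one row per keypoint
    observed_points: set = field(default_factory=set)  # ids of map points seen by this keyframe
```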
  • 22. • Covisibility graph • Undirected weighted graph • Node: keyframe • Edge: if two keyframes share observations of the same map points (at least 15) • Weight θ: the number of common map points • Essential graph • Retains all the nodes but fewer edges • Subset of edges from the covisibility graph with high covisibility, plus loop closure edges Data structures 22 Covisibility graph Essential graph
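A small sketch of how a covisibility graph could be built from keyframe observations, using the 15-shared-point threshold from the slide; the helper names and the θ_min value for the essential subset are assumptions for illustration:

```python
from itertools import combinations

def build_covisibility_graph(observations, min_shared=15):
    """observations: dict keyframe_id -> set of observed map point ids.
    Returns {(kf_a, kf_b): weight} with weight = number of shared map points."""
    edges = {}
    for a, b in combinations(sorted(observations), 2):
        shared = len(observations[a] & observations[b])
        if shared >= min_shared:          # edge only if enough common observations
            edges[(a, b)] = shared        # weight theta
    return edges

# The essential graph keeps all nodes but only a subset of edges
# (high-covisibility edges plus loop closure edges):
def essential_subset(cov_edges, theta_min=100, loop_edges=frozenset()):
    return {e for e, w in cov_edges.items() if w >= theta_min} | set(loop_edges)
```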
  • 23. • ORB feature extraction • For tracking, mapping, and place recognition tasks • Robust to rotation and scale • Good invariance to camera auto-gain and auto-exposure, and illumination changes • Fast to extract and match allowing for real-time operation • Show good precision/recall performances in bag-of-word place recognition Tracking 23 E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski, “ORB: An efficient alternative to SIFT or SURF.,” ICCV, pp. 2564–2571, Jan. 2011. Example of orb features
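Extracting and matching ORB features is straightforward with OpenCV; a minimal example (file names and parameter values are illustrative):

```python
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)          # rotation- and scale-aware binary features
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the natural metric for binary descriptors
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches, best distance {matches[0].distance}")
```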
  • 24. • Initial pose estimation • Case 1: if tracking was successful for the last frame • Constant velocity motion model • Bundle adjustment in the two-view case Tracking 24 Minimizing the reprojection error https://cs.nyu.edu/~fergus/teaching/vision/11_12_multiview.pdf
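A sketch of the constant-velocity prediction and of the reprojection error that the two-view optimization minimizes, using 4x4 homogeneous poses; purely illustrative:

```python
import numpy as np

def predict_pose_const_velocity(T_prev, T_curr):
    """Predict the next camera pose assuming the last relative motion repeats."""
    velocity = T_curr @ np.linalg.inv(T_prev)   # last frame-to-frame motion
    return velocity @ T_curr

def reprojection_error(T_cw, K, X_w, uv_obs):
    """Pixel error of one 3D map point X_w observed at uv_obs under pose T_cw."""
    X_c = (T_cw @ np.append(X_w, 1.0))[:3]      # world -> camera
    uv = (K @ X_c)[:2] / X_c[2]                 # perspective projection
    return np.linalg.norm(uv - uv_obs)
```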
  • 25. • Initial pose estimation • Case 2: if the tracking is lost • global relocalization • Bag-of-visual-words • Converting an image into a representative descriptor • Perspective-N-Point problem • Finding 2D—3D correspondences Tracking 25
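Once 2D-3D correspondences have been found via the bag-of-words search, the pose can be recovered with PnP inside a RANSAC loop; with OpenCV this looks roughly as follows (inputs assumed given, thresholds illustrative):

```python
import numpy as np
import cv2

def relocalize(object_pts, image_pts, K):
    """object_pts: Nx3 map point positions, image_pts: Nx2 matched pixel coords."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts.astype(np.float64), image_pts.astype(np.float64), K, None,
        iterationsCount=100, reprojectionError=4.0)
    if not ok or inliers is None or len(inliers) < 10:
        return None                      # relocalization failed, keep searching
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> rotation matrix
    return R, tvec
```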
  • 26. • New keyframe decision • (condition 1) Good relocalization • More than 20 frames must have passed since the last global relocalization • (condition 2) Last keyframe insertion • More than 20 frames have passed since the last keyframe insertion • (condition 3) Good tracking • The current frame tracks at least 50 points Tracking 26
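The three conditions translate directly into a small predicate; this paraphrases the slide and is not the full ORB-SLAM insertion policy:

```python
def should_insert_keyframe(frames_since_reloc, frames_since_last_kf, tracked_points):
    good_relocalization = frames_since_reloc > 20    # condition 1
    last_kf_old_enough  = frames_since_last_kf > 20  # condition 2
    good_tracking       = tracked_points >= 50       # condition 3
    return good_relocalization and last_kf_old_enough and good_tracking
```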
  • 27. • Keyframe insertion • Compute the bag-of-words representation • Update the covisibility graph • New map point creation • Finding matches by the epipolar geometry • Initial map point creation via triangulation • Consistency check • Parallax • Positive depth in both cameras • Reprojection error Local Mapping 27
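New map points can be triangulated from two keyframes and then sanity-checked; a minimal OpenCV/numpy sketch, assuming P1 and P2 are the 3x4 projection matrices K[R|t] of the two keyframes (the parallax check from the slide is omitted for brevity):

```python
import numpy as np
import cv2

def create_map_points(P1, P2, pts1, pts2, max_reproj_err=2.0):
    """pts1, pts2: Nx2 matched pixel coordinates in keyframes 1 and 2."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    X = (X_h[:3] / X_h[3]).T                     # Nx3 Euclidean points

    kept = []
    for x, uv1, uv2 in zip(X, pts1, pts2):
        x_h = np.append(x, 1.0)
        z1, z2 = P1[2] @ x_h, P2[2] @ x_h        # depths in both cameras
        if z1 <= 0 or z2 <= 0:                   # positive-depth check
            continue
        p1 = (P1 @ x_h)[:2] / z1
        p2 = (P2 @ x_h)[:2] / z2
        if (np.linalg.norm(p1 - uv1) < max_reproj_err and
                np.linalg.norm(p2 - uv2) < max_reproj_err):  # reprojection check
            kept.append(x)
    return np.array(kept)
```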
  • 28. • Local keyframe culling • To maintain a compact reconstruction, detect redundant keyframes and delete them • Redundancy check • A keyframe is redundant if 90% of its map points have been seen in at least three other keyframes • Recent map point culling • To retain a compact map • Association check • (# frames in which the matching is found) / (# frames in which the map point is predicted to be visible) > 0.25 • Local bundle adjustment • Local map points • K_i : currently processed keyframe • K_c : all the keyframes connected to it in the covisibility graph • All the map points seen by those keyframes Local Mapping 28
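Both culling rules reduce to simple ratio tests; a sketch using the thresholds from the slide (function and argument names are illustrative):

```python
def keep_map_point(n_frames_found, n_frames_visible):
    """Recent map point culling: the point must actually be matched in more
    than 25% of the frames in which it is predicted to be visible."""
    return n_frames_visible > 0 and (n_frames_found / n_frames_visible) > 0.25

def is_redundant_keyframe(map_point_observers, min_other_kfs=3, ratio=0.9):
    """Local keyframe culling: redundant if >= 90% of its map points are seen
    by at least three other keyframes.
    map_point_observers: per map point of this keyframe, the number of *other*
    keyframes that also observe it."""
    if not map_point_observers:
        return False
    well_observed = sum(1 for n in map_point_observers if n >= min_other_kfs)
    return well_observed / len(map_point_observers) >= ratio
```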
  • 29. • Loop candidates detection • Compute the similarity between the bag-of-words vectors • Current keyframe and all its neighbors in the covisibility graph • Query the recognition DB • Compute the similarity transformation • 7 DoF (3 rotations, 3 translations, 1 scale) • Scale drift over time in monocular SLAM • Similarity transform between the current keyframe and the loop closure candidate Loop Closing 29 https://www.youtube.com/watch?v=v9lGUGRehkM
  • 30. • Loop fusion • When the loop closure detection is triggered, the loop fusion is performed. • Keyframe and map point correction • Insert loop edges in the covisibility graph • Current keyframe pose is corrected with the similarity transformation that we obtained • Fuse duplicated map points • Essential graph optimization • For all keyframes and all map points Loop Closing 30 Essential graph
  • 33. KinectFusion • Dense RGB-D SLAM system • Only the depth data from Kinect is used • to track the 3D pose of the sensor • to reconstruct 3D models of the physical scene in real-time • GPU-based pipeline • Application • Core system for low-cost handheld scanning • Geometry-aware augmented reality • Physics-based interactions • Real-time multi-touch interactions anywhere 33
  • 34. • Depth to vertex • Transform each pixel of a new depth image into a 3D point (vertex) • v_i(u) = D_i(u) · K^{-1} · [u, 1]^T • Vertex to normal • Compute the normal vectors for each vertex of a point cloud • Normals are used in the projective data association step of the ICP algorithm • n_i(u) = (v_i(x+1, y) − v_i(x, y)) × (v_i(x, y+1) − v_i(x, y)) Depth Map Conversion 34
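A numpy sketch of both conversions, assuming a depth image D in metres and intrinsics K; vectorized and illustrative only:

```python
import numpy as np

def depth_to_vertex_map(D, K):
    """v(u) = D(u) * K^{-1} [u, 1]^T for every pixel u."""
    H, W = D.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)  # HxWx3 homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                    # back-projected rays
    return rays * D[..., None]                                         # HxWx3 vertex map

def vertex_to_normal_map(V):
    """n(u) = (v(x+1,y) - v(x,y)) x (v(x,y+1) - v(x,y)), normalized."""
    dx = np.zeros_like(V); dy = np.zeros_like(V)
    dx[:, :-1] = V[:, 1:] - V[:, :-1]       # horizontal neighbour difference
    dy[:-1, :] = V[1:, :] - V[:-1, :]       # vertical neighbour difference
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)
```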
  • 35. Camera Tracking 35 • Iterative Closest Points (ICP) • Widely used for geometric alignment of three-dimensional models • Start with two meshes and an initial guess for their relative rigid-body transform • Refine the transform by repeatedly generating pairs of corresponding points • Methods • Point to Point • Point to Plane • …
  • 36. • Minimize the perpendicular distance from the source point to the tangent plane of the destination point • Nonlinear least squares using the Levenberg-Marquardt method Point-to-Plane ICP 36 K. Low, “Linear Least-squares Optimization for Point-to-plane ICP Surface Registration,” Chapel Hill, Univ. North Carolina, 2004. s_i = (s_ix, s_iy, s_iz, 1)^T : source point • d_i = (d_ix, d_iy, d_iz, 1)^T : destination point • n_i = (n_ix, n_iy, n_iz, 0)^T : unit normal vector at d_i
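For small rotations the point-to-plane objective linearizes into a 6x6 least-squares problem, in the spirit of the Low (2004) report cited on the slide; a minimal numpy sketch, assuming correspondences are already established:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update.
    src, dst: Nx3 corresponding source/destination points,
    normals: Nx3 unit normals at the destination points.
    Returns a 4x4 incremental rigid transform."""
    A = np.hstack([np.cross(src, normals), normals])   # N x 6, unknowns (a, b, g, tx, ty, tz)
    r = np.sum((dst - src) * normals, axis=1)          # signed point-to-plane distances
    x, *_ = np.linalg.lstsq(A, r, rcond=None)
    a, b, g, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1, -g,  b],                 # small-angle rotation approximation
                          [g,  1, -a],
                          [-b, a,  1]])
    T[:3, 3] = [tx, ty, tz]
    return T
```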
  • 37. • Voxel • A value on a regular grid in 3D space, the extended concept of 2D pixel • Widely used for realistic rendering 3D object in computer graphics • Data • Truncated signed distance function (TSDF) value • Truncated signed distance function (TSDF) weight Volumetric Representation 37
  • 38. • Truncated signed distance function (TSDF) • Predefined 3D volume is subdivided uniformly into a 3D grid of voxels • These values are positive in-front of the surface and negative behind • Zero-crossing point means the surface of the object Volumetric Representation 38
  • 39. • TSDF integration • If a voxel is behind the surface observed in the new depth image, the image does not contain any new information about it, and the function returns. • If the voxel is close to or in front of the observed surface, a corresponding observation is added to the accumulated sum. Volumetric Integration 39 S. Izadi, D. Kim, O. Hilliges, and … D. M., “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera,” in Proceedings of the 24th …, 2011.
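The per-voxel update is a truncated signed distance fused by a weighted running average; a sketch for a single voxel (the truncation band and weight cap are illustrative values):

```python
def update_voxel(tsdf, weight, sdf_measured, truncation=0.05, max_weight=100.0):
    """Fuse one new observation into a voxel's (tsdf, weight) pair."""
    if sdf_measured < -truncation:
        return tsdf, weight                   # far behind the surface: no new information
    d = min(1.0, sdf_measured / truncation)   # truncate and normalize to [-1, 1]
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)   # weighted running average
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight
```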
  • 40. • Motivation • Depth image is computed from the updated 3D world model given a camera pose • Input to the tracking step in the next frame and also for visualization. • Raycasting • A ray is being cast from the camera up until an intersection with the surface is found • Checking the value of the TSDF at each voxel along the ray until a zero-crossing is found Raycasting 40
  • 43. • Low-drift and real-time lidar odometry and mapping • Github: https://github.com/laboshinl/loam_velodyne • Main components • LiDAR odometry • Compute the motion of the Lidar between two consecutive sweeps at a frequency of 10Hz • LiDAR mapping • Matches and registers the local cloud onto a map at a frequency of 1Hz LOAM 43 J. Zhang and S. Singh, “Low-drift and real-time lidar odometry and mapping,” Auton. Robots, vol. 41, no. 2, pp. 401–416, 2017.
  • 44. • Feature point extraction • Edge features • Planar features • Curvature • Smoothness of the local surface • Sort the points based on the c values • Classification • Planar features: from the minimum c • Edge features: from the maximum c LiDAR Odometry 44
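A sketch of the smoothness value c used to sort points, following the form of the LOAM curvature term; the neighbourhood size is illustrative and the index i is assumed to be away from the scan boundaries:

```python
import numpy as np

def smoothness(points, i, half_window=5):
    """c is small on flat (planar) surfaces and large at edges/corners.
    points: Nx3 array of one scan line, i: index of the query point."""
    S = points[i - half_window: i + half_window + 1]
    diff = np.sum(points[i] - S, axis=0)          # sum of (X_i - X_j) over the window
    return np.linalg.norm(diff) / (len(S) * np.linalg.norm(points[i]))

# planar features are taken from the minimum-c end, edge features from the maximum-c end
```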
  • 45. • Outlier rejection • (a) Points on local planar surfaces that are roughly parallel to the laser beams • Its neighborhood forms a surface patch whose normal is within 10 degrees of the laser beam • (b) Points on the boundary of occluded regions • Its neighborhood is disconnected from the point by a gap in the direction of the laser beam LiDAR Odometry 45
  • 46. • Finding feature point correspondence • For edge features in the current point clouds, • Find edge lines in the previous point clouds • Point-to-edge distance • For planar features in the current point clouds, • Find planar patches in the previous point clouds • Point-to-plane distance LiDAR Odometry 46
  • 47. • Motion estimation • Compute residuals according to • Point-to-edge distance • Point-to-plane distance • Establish the Jacobian matrix J • Levenberg-Marquardt method • Bisquare weighting LiDAR Odometry 47
  • 48. • Pseudo code (figure): feature classification, correspondence search for edge and planar features, then the optimization step LiDAR Odometry 48
  • 49. • Mapping the current local point cloud onto the map • Store the point cloud of the map in a 10 m cubic area around the current position • Extract feature points in the same way (edge and planar features) • Find correspondences • S’: neighboring points in a certain region around the feature points • Covariance matrix M • Eigendecomposition • Edge line: one eigenvalue is significantly larger than the other two • Planar patch: two large eigenvalues with the third one significantly smaller LiDAR Mapping 49 (figure: trajectory with and without Lidar mapping)
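The edge-line / planar-patch test is an eigenvalue test on the covariance of the local neighbourhood; a numpy sketch with illustrative ratio thresholds:

```python
import numpy as np

def classify_neighbourhood(S, edge_ratio=3.0, plane_ratio=3.0):
    """S: Kx3 neighbouring points around a feature point."""
    centered = S - S.mean(axis=0)
    M = centered.T @ centered / len(S)           # 3x3 covariance matrix
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]   # eigenvalues, descending
    if lam[0] > edge_ratio * lam[1]:             # one dominant direction -> edge line
        return "edge"
    if lam[1] > plane_ratio * lam[2]:            # two dominant directions -> planar patch
        return "plane"
    return "other"
```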
  • 50. • Visual-lidar Odometry and Mapping: Low-drift, Robust, and Fast • Authors: Ji Zhang and Sanjiv Singh • Main components • Visual odometry • At high frame rate (60 Hz) to estimate motion • Lidar odometry • At low frequency (1 Hz) to refine the motion estimate V-LOAM 50
  • 51. • Depth map registration • World map points are projected to the color camera view. • The rotation and translation of the previous frame are necessary. • Only points in front of the camera are kept; points received a certain time ago are forgotten. • Projected depth map • Visual features associated with distance Visual Odometry 51 (figure: color image, point cloud & visual features)
  • 52. • Frame to Frame motion estimation • Known distance case • Unknown distance case • Levenberg-Marquardt method Visual Odometry 52 * In this case, only the depth map at (k-1) frame is always available.
  • 53. • Sweep to Sweep refinement • Extract geometric features like edge points and planar points • Compute curvature in local scans • Outlier rejection • Find correspondences • Motion estimation with the nonlinear optimization Lidar Odometry 53
  • 54. • Sweep to Map registration • Add undistorted point cloud to the map • Find correspondences • Examining distributions of local point clusters • Edge segment: one large and two small eigenvalues • Planar patch: two large and one small eigenvalues Lidar Odometry 54
  • 59. • maplab: An Open Framework for Research in Visual-inertial Mapping and Localization • Main contribution • General purpose visual-inertial mapping framework with multi-session support • Robust visual-inertial estimator tightly coupled with a localization system • Extensibility of the system that makes it well suited for research • Main components • ROVIOLI: VIO and localization frontend • Maplab-console MapLab 59
  • 60. • Robust visual-inertial odometry with localization integration • Mapping and localization frontend • Build maps from raw visual and inertial data • Localize with respect to existing maps online • Feature tracking module • Track BRISK or FREAK keypoints • Correspondences by matching descriptors from frame to frame • Frame localization • 2d-3d correspondences against the provided localization map • P3P algorithm within a RANSAC scheme • ROVIO • Map builder ROVIOLI 60
  • 61. • Console user interface to manipulate maps • Multiple maps can be loaded into the console simultaneously. • All algorithms are available through console commands. • Parameters specific to each algorithm are set by console flags • Real-time visualization of the map in Rviz • It is possible to combine multiple algorithms and experiment with entire pipelines. • Changes can be easily reverted by saving and reloading intermediate states MapLab Console 61 https://github.com/ethz-asl/maplab/wiki
  • 62. • VI-map: Visual-inertial mapping data • Raw measurements of all sensors • Sparse reconstruction of the covered environment • Mission (submap) • Vertex: state captured at a certain point in time • Edge: connects two neighboring vertices • IMU edge • Loop-closure edge Map Structure 62
  • 63. • For many robotic applications • Very important to have access to drift-free global pose estimates • Example • Robotic manipulation • precise navigation • Teach & repeat scenarios • ROVIOLI to create an initial VI-map • Sensor data from • Offline in a Rosbag • Online using ROS topics • VI-map is automatically • loop-closed • Optimized • Keyframed • Summarized Online Mapping 63
  • 64. • In many mapping applications, • It is not possible to cover the entire environment within a single mapping session • Maplab offers tools to co-register maps from multiple sessions together Multi-session Mapping 64
  • 65. • Large-scale, multi-session map of the old town of Zurich • Raw visual-inertial data recorded in 45 sessions on two different days • Initial open loop maps on Google Tango tablets with ROVIOLI • Global consistent map on a desktop with 32 GB RAM • Total duration: 231 min • Total length: 16.48 km • Map size: 480 MB Large-Scale Mapping 65
  • 66. • Dense 3d representation of the environment is required. • path planning • inspection and object detection • Collision, obstacle avoidance • Stereo dense reconstruction • semi-global stereo matching • TSDF-based depth fusion • Voxblox (volumetric mapping library) Dense Reconstruction 66
  • 67. • Most of the basic tools needed in visual-inertial mapping and localization • A rich set of helper functions, queries, and manipulation tools • Easy integration of new algorithms into the system • Extensive examples demonstrating how to extend the framework • Related researches Using MapLab for Research 67 Topomap: Topological Mapping and Navigation Based on Visual SLAM Map Collaborative Navigation for Flying and Walking Robots Real-time visual-inertial mapping, re-localization and planning onboard mavs in unknown environments Appearance-based landmark selection for efficient long-term visual localization Keep it brief: Scalable creation of compressed localization maps The gist of maps-summarizing experience for lifelong localization Real-time visual-inertial localization for aerial and ground robots Map API-scalable decentralized map building for robots https://github.com/ethz-asl/maplab/wiki/Related-Research
  • 69. • Real-time, large-scale depth fusion and tracking framework • URL: https://github.com/victorprad/InfiniTAM • Main components • Tracking • Allocation • Integration • Raycasting • Relocalisation & Loop Closure Detection • Data structure • Volumetric representation • Voxel block hashing InfiniTam 69 V. A. Prisacariu et al., “InfiniTAM v3: A Framework for Large-Scale 3D Reconstruction with Loop Closure,” 2017.
  • 70. • Volumetric representation using a hash lookup • Data structures and operations • Voxel • Voxel block hash • Hash table and hashing function • Hash table operations • Voxel • A value on a regular grid in 3D space, the extended concept of 2D pixel • Widely used for realistic rendering 3D object in computer graphics • Data • TSDF value • TSDF Weight • Color value • Color weight Volumetric Representation 70
  • 71. • Voxel block array • Majority of data stored in the regular voxel grid is marked either as • Free space • Unobserved space • Only store the surface data by efficient hashing scheme • Grouped voxels in blocks of predefined size (ex. 8x8x8) • Data • Positions of the corner of the 8x8x8 voxel block • Offset in the excess list • Pointer to the voxel block array Volumetric Representation 71 M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3D reconstruction at scale using voxel hashing,” ACM Trans. Graph., vol. 32, no. 6, pp. 1–11, 2013.
  • 72. • Hash table • To quickly and efficiently find the position of a certain voxel block in the voxel block array • Hashing function • For locating entries of the hash table takes the corner coordinates of a 3D voxel block • Hash collision case • Use the additional unordered excess list • Store an offset in the voxel block array Volumetric Representation 72
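The hash function commonly used for voxel block hashing (following Nießner et al.) XORs the integer block coordinates multiplied by three large primes; a sketch, with the exact constants treated as an assumption from the spatial-hashing literature:

```python
P1, P2, P3 = 73856093, 19349669, 83492791   # large primes from the spatial hashing literature

def hash_voxel_block(bx, by, bz, n_buckets):
    """Map integer voxel-block coordinates to a bucket of the hash table."""
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % n_buckets

# On a collision, the bucket stores an offset into the unordered excess list,
# which is then searched linearly for the requested block.
```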
  • 73. • To determine the pose of a new camera frame given the 3D world model • Diverse methods • Using only the depth • Inspired by point-to-plane ICP • Using only the color • Inspired by the direct method • Using both data • Utilize both approaches • Main differences of the extended tracker • Huber norm instead of the standard L2 norm • Error term weighted by its depth measurement Tracking 73
  • 74. • Three main stages in allocation • 1) backproject a line connecting d − μ to d + μ • d : depth in image coordinates • μ : a fixed, tunable parameter • This leads to a line in world coordinates, which intersects a number of voxel blocks. • Search the hash table for each of these blocks and look for a free hash entry • 2) allocate voxel blocks for each non-zero entry in the allocation and visibility arrays • 3) build a list of live hash entries Allocation 74
  • 75. • TSDF integration • If a voxel is behind the surface observed in the new depth image, the image does not contain any new information about it, and the function returns. • If the voxel is close to or in front of the observed surface, a corresponding observation is added to the accumulated sum. Integration 75 S. Izadi, D. Kim, O. Hilliges, and … D. M., “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera,” in Proceedings of the 24th …, 2011.
  • 76. • Motivation • Depth image is computed from the updated 3D world model given a camera pose • Input to the tracking step in the next frame and also for visualization. • Raycasting • A ray is being cast from the camera up until an intersection with the surface is found • Checking the value of the TSDF at each voxel along the ray until a zero-crossing is found • State machine to efficiently handle the sparse volumetric space • SEARCH_BLOCK_COARSE • SEARCH_BLOCK_FINE • SEARCH_SURFACE • BEHIND_SURFACE Raycasting 76
  • 77. • Keyframe-based random ferns relocaliser • To relocalise the camera when tracking fails • To detect loop closures when aiming to construct a globally-consistent scene • Procedure • Downsample and preprocess the image I • Each of the m code blocks is obtained by applying a random fern to I • A fern is a set of n binary feature tests on the image, each yielding either 0 or 1 • b_{F_k}^I ∈ B^n denotes the n-bit binary code resulting from applying fern F_k to I • b_C^I ∈ B^{mn} denotes the result of concatenating all m such binary codes for I • The dissimilarity measure between two different images I and J is the block-wise Hamming distance between b_C^I and b_C^J, where each block contributes 0 if the two code blocks are identical, and 1 otherwise Relocalisation & Loop Closure Detection 77
  • 78. • Submap based approach • Division of the scene into multiple rigid submaps • Active submaps: tracked against at each frame • Passive submaps: maintained, but not tracked against unless they become active again Globally-Consistent Reconstruction 78 N. Fioraio, J. Taylor, A. Fitzgibbon, L. Di Stefano, and S. Izadi, “Large-scale and drift-free surface reconstruction using online subvolume registration,” 2015 IEEE Conf. Comput. Vis. Pattern Recognit., pp. 4475–4483, 2015.
  • 79. • Graph representation • Nodes are poses; each edge e_ij encodes the mismatch between a relative-pose measurement and the current estimate • A numerical solution can be obtained using the popular Gauss-Newton method Pose Graph Optimization 79 G. Grisetti, R. Kummerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intell. Transp. Syst. Mag., vol. 2, no. 4, pp. 31–43, 2010.
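A toy Gauss-Newton pose-graph example in one dimension (poses on a line, relative displacement measurements plus one loop-closure-style constraint); this only illustrates the machinery of building and solving H Δx = −b, not a SLAM implementation:

```python
import numpy as np

# 4 poses on a line; edges: (i, j, measured displacement x_j - x_i)
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 2.8)]  # last edge acts as a loop constraint
x = np.array([0.0, 1.0, 2.1, 3.0])                            # initial estimates

for _ in range(10):                       # Gauss-Newton iterations
    H = np.zeros((4, 4)); b = np.zeros(4)
    for i, j, z in edges:
        e = (x[j] - x[i]) - z             # residual of this edge
        J = np.zeros(4); J[i], J[j] = -1.0, 1.0
        H += np.outer(J, J); b += J * e
    H[0, 0] += 1e6                        # gauge fix: anchor the first pose
    x += np.linalg.solve(H, -b)

print(x)   # poses adjusted so all constraints are satisfied as well as possible
```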
  • 82. • Incremental method for localization in 3D point clouds based on segment matching • URL: https://github.com/ethz-asl/segmap • Main components • Front-end • Sequential factors • Place recognition factors • Back-end • Pose graph optimization SegMap 82
  • 83. • The front-end is responsible for • Sensor measurements into sequential factors • Segmentation and description for place recognition factors • Sequential factors • Odometry factors • Displacement between consecutive robot poses from IMU data • Scan-matching factors • Registering the current scan against a local map by point-to-plane ICP • Place recognition factors • SegMatch: segment based loop-closure for 3D point clouds Front-End 83
  • 84. • Place recognition factors • SegMatch: segment based loop-closure for 3D point clouds • Four different modules • Point cloud segmentation • Descriptor extraction • Segment matching • Geometric verification Front-End 84
  • 85. • Point cloud segmentation • Incremental region growing policy • CanGrowTo • If growing from a seed to a neighbor is allowed • Check the angle between seed and candidate normals • LinkClusters • Link clusters if they have the same cluster id • CanBeSeed • If a point can be used as seed • Check the curvature at a point SegMatch 85 Region growing Cluster merging
  • 86. • Descriptor extraction • For compressing the raw segment data and building object signatures • Diverse segment descriptors • Eigenvalue based • Eigenvalues for the segment are computed and combined into a 7-dim feature vector • Linearity, planarity, scattering, omnivariance, anisotropy, eigenentropy, change of curvature SegMatch 86 M. Weinmann, B. Jutzi, and C. Mallet, “Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. II-3, pp. 181–188, 2014. Eigenvalue-based 3D features
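The seven eigenvalue-based features can be computed directly from the normalized eigenvalues of the segment covariance; a sketch following the usual definitions from Weinmann et al. (treat the exact formulas as an assumption, not a quote from the slides):

```python
import numpy as np

def eigenvalue_features(segment):
    """segment: Nx3 points. Returns the 7-dim eigenvalue-based descriptor."""
    C = np.cov(segment.T)
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]   # l1 >= l2 >= l3 > 0
    lam = lam / lam.sum()                        # normalize so eigenvalues sum to 1
    l1, l2, l3 = lam
    return np.array([
        (l1 - l2) / l1,                 # linearity
        (l2 - l3) / l1,                 # planarity
        l3 / l1,                        # scattering
        (l1 * l2 * l3) ** (1.0 / 3.0),  # omnivariance
        (l1 - l3) / l1,                 # anisotropy
        -np.sum(lam * np.log(lam)),     # eigenentropy
        l3 / (l1 + l2 + l3),            # change of curvature
    ])
```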
  • 87. • Diverse segment descriptors • Auto-encoder based • Input: 3D binary voxel grid of fixed dimension 32x32x16 • Descriptor extractor part • 3D convolutional layers with max pool layers and two Fully connected layers • Rectified linear activation function for all layers • Output: 64x1 descriptor • Reconstruction part • One fully connected layer and three deconvolutional layers with a final sigmoid output • Output: reconstructed 3D binary voxel grid • Loss • Classification loss: softmax cross entropy loss • Reconstruction loss: binary cross entropy loss SegMatch 87 R. Dubé, A. Cramariuc, D. Dugas, J. Nieto, R. Siegwart, and C. Cadena, “SegMap: 3D Segment Mapping using Data-Driven Descriptors,” 2018.
  • 88. • Geometric verification • Consistency graph G = (V, E) • Vertices V = {c_i}: the set of correspondences c_i • Edges E = {e_ij}: the set of undirected edges e_ij connecting all consistent pairs of correspondences (c_i, c_j) • Geometrically consistent • If the difference of the Euclidean distances between the segment centroids in the local map and in the target map is less than a threshold: |d_L(c_i, c_j) − d_T(c_i, c_j)| ≤ ε • Identifying a maximum geometrically consistent set == finding a maximum clique of G SegMatch 88
  • 89. • Illustration (figure): correspondences c1–c12 between segments of the local map and the target map; the geometrically consistent subset forms a clique in the consistency graph Consistency Graph 89
  • 90. Back-End • Pose graph optimization by factor graphs (iSAM library) • Factor graphs • Graphical models that are well suited to modeling complex estimation problems • Variables • Unknown random variables in the estimation problem • Robot poses • Factors • Probabilistic information on those variables, derived from measurements or prior knowledge • Odometry, loop closure constraints 90
  • 92. • Ubuntu Linux 16.04 • ROS Kinetic • wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && chmod 755 ./install_ros_kinetic.sh && bash ./install_ros_kinetic.sh • Docker • https://docs.docker.com/install/linux/docker-ce/ubuntu/ • Download files for exercise • https://www.dropbox.com/sh/tpjh6syh1fx3wxv/AAD2FmTaTufVADv8uGVCU4gAa?dl=0 • all-in-one2.tar (approx. 15 GB) • Docker load • sudo docker load < all-in-one.tar • Docker run • sudo docker run -it --rm -p 6080:80 all-in-one • This takes some time • Open 127.0.0.1:6080 in a web browser Requirements 92
  • 93. • cd ~/Desktop/ORB_SLAM2/ • ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM3.yaml ./Dataset/rgbd_dataset_freiburg3_teddy/ • ./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml ./Dataset/rgbd_dataset_freiburg3_teddy/ ./Dataset/rgbd_dataset_freiburg3_teddy/associations.txt ORB SLAM 93
  • 94. LOAM 94 • cd ~/Desktop/LOAM • source ~/catkin_ws/devel/setup.bash • roslaunch loam_velodyne loam_velodyne.launch • rosbag play --clock nsh_indoor_outdoor.bag
  • 95. • cd ~/Desktop/InfiniTAM/InfiniTAM/build/Apps/InfiniTAM • ./InfiniTAM Teddy/calib.txt Teddy/Frames/%04i.ppm Teddy/Frames/%04i.pgm InfiniTam 95
  • 97. • Install • sudo pip3 install open3d-python • Download examples • git clone https://github.com/IntelVCL/Open3D.git • Depth to Point cloud • cd ./examples/Python/Basic && python3 rgbd_tum.py • Vertex normal estimation • cd ./examples/Python/Basic && python3 pointcloud.py • Point to plane ICP • cd ./examples/Python/Basic && python3 icp_registration.py • cd ./examples/Python/Advanced && python3 non_blocking_visualization.py • TSDF volume integration • cd ./examples/Python/Advanced && python3 rgbd_integration.py KinectFusion 97
  • 98. • On the host • Install • sudo apt-get install ros-kinetic-rtabmap-ros • Single-session mapping • roslaunch rtabmap_ros demo_robot_mapping.launch • rosbag play --clock demo_mapping.bag • Demo on Google Tango device RTAB-MAP 98