DS-620 Data Visualization
Chapter 5. Summary
Valerii Klymchuk
June 26, 2015
0. EXERCISE 0
5 Scalar Visualization
Scalar datasets, or scalar fields, represent functions f : D → R, where D is usually a subset of R^2 or R^3.
This chapter presents the most popular scalar visualization techniques: color mapping, contouring, and height
plots.
5.1 Color Mapping
Color mapping associates a color with every scalar value. For every point of the domain of interest D, color
mapping applies a function c : R → Colors that assigns to that point a color c(s) ∈ Colors, which depends
on the scalar value s at that point. The two most common ways to define such a scalar-to-color function c
are color look-up tables and transfer functions. A color look-up table C, also called a colormap,
is a uniform sampling of the color-mapping function c:
C = {ci}i=1..N, where ci = c( ((N − i)fmin + i fmax) / N ).
In practice, the equation above is implemented as a table of N colors c1, ..., cN , which are associated with
the scalar dataset values f, assumed to be in the range [fmin, fmax]. The colors ci with low indices i in the
colormap represent low scalar values close to fmin, whereas colors with indices close to N in the colormap
represent high scalar values close to fmax.
This works well when we can determine this range in advance of the visualization. Imagine an application
where we want to visualize a time-dependent scalar field f(t) with t ∈ [tmin, tmax]. If we do not know f(t)
for all values t before we start the visualization, we cannot compute the absolute scalar range [fmin, fmax].
In such situations, a better solution might be to normalize the scalar range separately for every time frame
f(t). This implies drawing different color legends for every time frame as well.
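As an illustrative sketch (plain Python; the helper names build_lut and map_scalar are ours, not from the text), a look-up table sampled from a continuous colormap function, combined with the per-frame normalization described above:

```python
def build_lut(c, n):
    """Sample a continuous colormap function c: [0, 1] -> (R, G, B) into n entries."""
    return [c(i / (n - 1)) for i in range(n)]

def map_scalar(lut, f, f_min, f_max):
    """Map a scalar value f in [f_min, f_max] to its nearest LUT entry."""
    t = (f - f_min) / (f_max - f_min)                 # normalize to [0, 1]
    i = min(int(t * (len(lut) - 1) + 0.5), len(lut) - 1)
    return lut[max(i, 0)]

# A simple blue-to-red function used as the continuous colormap c.
blue_to_red = lambda t: (t, 0.0, 1.0 - t)             # (R, G, B)

lut = build_lut(blue_to_red, 256)

# Per-frame normalization: recompute f_min, f_max for each time frame.
frame = [0.3, 1.7, 5.2, 4.1]
f_min, f_max = min(frame), max(frame)
colors = [map_scalar(lut, f, f_min, f_max) for f in frame]
```

Recomputing f_min and f_max per frame implements the per-frame normalization; as noted above, the color legend must then be redrawn for every frame as well.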
Besides sampling the scalar-to-color function into a discrete look-up table, one can also define the
function c analytically, if desired. This is done by defining three scalar functions cR : R → R, cG : R →
R, and cB : R → R, whereby c = (cR, cG, cB). The functions cR, cG and cB are also called transfer
functions. In practice, one uses predefined look-up tables when there is no need to change the individual
colors at runtime and transfer functions when the investigation goals require dynamically changing the
color-mapping functions.
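A minimal sketch of analytic transfer functions (the piecewise formulas are hypothetical, chosen only to illustrate the idea of c = (cR, cG, cB)); here the three channel functions together approximate a blue-to-red rainbow:

```python
def rainbow(t):
    """Analytic transfer functions cR, cG, cB for a normalized scalar t in [0, 1].
    The hues pass from blue through cyan, green, and yellow to red."""
    r = max(0.0, min(1.0, 4.0 * t - 2.0))             # cR: ramps up in the upper half
    g = max(0.0, min(1.0, 2.0 - abs(4.0 * t - 2.0)))  # cG: peaks at t = 0.5
    b = max(0.0, min(1.0, 2.0 - 4.0 * t))             # cB: ramps down in the lower half
    return (r, g, b)
```

Evaluating c analytically like this allows changing the color mapping at runtime, which is exactly the situation where transfer functions are preferred over fixed look-up tables.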
5.2 Designing Effective Colormaps
A color-mapping visualization is effective if, by looking at the generated colors, we can easily and accurately
make statements about the original scalar dataset that was color-mapped. Different types of statements and
analysis goals require different types of colormaps. Such goals include the following:
1. Absolute values: Tell the absolute data values at all points in the displayed dataset.
2. Value ordering: Given two points in the displayed dataset, tell which of the corresponding two data
values is greater.
3. Value difference: Given two points in the displayed dataset, tell the difference between the data values
at these points.
4. Selected values: Given a particular data value finterest (or a compact interval of data values), tell
which points in the displayed data take the respective value finterest.
5. Value change: Tell the speed of change, or first derivative, of the data values at given points in the
displayed dataset.
Designing a colormap that achieves all of the above goals in equal measure is a challenging task.
Color legends are required for any application of color mapping where we need to map a color back to a
data-related quantity. We must be able to mentally invert the color-mapping function c; that is, look at the
color of some point in the visual domain DV and tell its scalar value f. For this, we must know the color-mapping
function. In practice this is achieved by drawing a so-called color legend. This mechanism succeeds
only under certain conditions. First, the color-mapping function c must be invertible; this requires the function to be
injective, meaning that every scalar value in the range [fmin, fmax] is associated with a unique color. It is
not sufficient that the colors have different numerical values; we must also be able to easily perceive them
as visually different. Second, the spatial resolution of the visualized dataset must be high enough
compared to the speed of variation of the scalar data f.
The rainbow colormap is a blue-to-red colormap, where blue suggests low values and red suggests
high values. It is one of the most widely used colormap types; however, it has several
important limitations:
• Focus: Perceptually, warm colors attract attention more than cold colors, so attention is drawn
towards the high data values. This can be desirable, but these may not be the values where we
want to focus on.
• Luminance: The luminances of the rainbow colormap entries vary non-monotonically. This leads to users
potentially being attracted more to certain colors than to others, and/or perceiving certain color
contrasts as more important than others.
• Context: Hues can have application-dependent semantics. Interpreting the rainbow colormap
as heat may be a good encoding for a temperature dataset, but this association is much harder or
unnatural in the case of a medical dataset.
• Ordering: The rainbow colormap assumes that users can easily order hues from blue to green to yellow
to red, however we cannot assume that every user will order hues in this particular manner. When
this assumption does not hold, goal 2 is challenged.
• Linearity: Besides the colormap invertibility requirement, visualization applications often also require
a linearity constraint to be satisfied. Some users perceive colors to change “faster” per spatial unit
in the higher yellow-to-red range than in the lower blue-to-cyan range. A linear signal would thus be
perceived by the user as nonlinear, with possibly undesirable consequences for goals 3 and 5.
Other colormap designs.
Grayscale: here we map data values f linearly to luminance, or gray value, with fmin corresponding to
black and fmax corresponding to white. The grayscale colormap has several advantages. First, it directly
encodes data into luminance, and thus it has no issues regarding the discrimination of different hues. Second,
color ordering is natural (from dark to bright), which helps goal 2. Rendering grayscale images is less sensitive
to color reproduction variability. However, telling differences between two gray values, or addressing goal 3,
is harder than when using hue-based colormaps.
Two-hue: can be seen as a generalization of the grayscale colormap, where we interpolate between two
colors, rather than between black and white. If the two colors are perceptually quite different (differ both
in hue and in perceived luminance), the resulting colormap allows an easy color ordering, and also produces
a perceptually more linear result than the rainbow colormap. The two-hue colormap uses both hue and
luminance differences of its two defining colors to create a higher dynamic range. However, used on a 3D
shaded surface, the luminance perceived in the final visualization is ambiguous (is it due to the colors or due to
the shading of the scene?). A simple way to correct this issue is to use a corrected isoluminant two-hue
colormap, where all entries have the same luminance, similar to the luminance-corrected rainbow colormap.
The disadvantage of this design is that fewer colors can be individually perceived, leading to challenges for
goals 1, 4, and 5.
Heat map: The intuition behind the colors of the heated-body colormap is that they represent the color of an
object heated to increasing temperature values, with black corresponding to low data values, red-orange hues
to intermediate data ranges, and yellow-white hues to high data values. The heat map uses
a smaller set of hues than a rainbow colormap, but adds luminance as a way to order colors in an intuitive
manner. However, its strong dependence on luminance makes the heat map less suitable for color coding
data on 3D shaded surfaces.
Diverging: (double-ended scale) colormaps are constructed starting from two typically isoluminant hues.
However, we now add a third color cmid for the data value fmid = (fmin + fmax)/2 located in the middle of
the considered data-range, and use two piecewise-linear interpolations between cmin and cmid, and between
cmid and cmax. Additionally, cmid is chosen so that it has a considerably higher luminance than cmin and
cmax. Diverging colormaps are good for situations where we want to emphasize the deviation of data values
from the average value fmid, and effectively support the task of assessing value differences.
Zebra colormap: In some applications, we want to emphasize the variations of the data rather than
absolute data values (goal 5). This is useful when we are interested in detecting the dataset regions where
the data changes most quickly or stays constant. A solution is to use a colormap on the scalar dataset f
containing two or more alternating colors that are perceptually very different. If we use a colormap that
maps the value range to an alternating pattern of black and white colors, we obtain the zebra-like result
shown in Figure 5.3(b). Thin, dense stripes indicate regions of high variation speed of the function, whereas
the flat areas in the center and at the periphery of the image indicate regions of slower variation. We can
now also see more easily which regions have similar rates of variation across our domain.
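A zebra colormap can be sketched as follows (plain Python; the band count and the two colors are illustrative choices):

```python
def zebra(t, bands=16):
    """Zebra colormap: alternate black and white bands over the
    normalized scalar range [0, 1]."""
    k = min(int(t * bands), bands - 1)   # index of the band containing t
    return (0.0, 0.0, 0.0) if k % 2 == 0 else (1.0, 1.0, 1.0)
```

Applied to a smoothly varying field, regions of fast variation cross many bands over a short distance and therefore show thin, dense stripes, which is exactly the effect described above.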
Interpolation issues.
Results of texture-based color mapping are better, since the texture-based method interpolates the scalar
value s, and then applies the color mapping c separately at every rendered pixel.
c(p) = c( Σi si φi(p) ).
In contrast, the vertex-based color mapping c applies color mapping at vertices only, and then interpolates
the mapped colors themselves.
c(p) = Σi c(si) φi(p).
A better solution is to store for each vertex pi the value (si − smin)/(smax − smin), i.e., our scalar value normalized
to the range [0, 1], as a texture coordinate. Next, we render our surface cells textured with a 1D texture
containing the colormap. 1D textures are described by a single texture coordinate and are stored as a 1D
image, i.e., a vector of colored pixels. Texture-based color mapping produces reasonable results even for
a sparsely sampled dataset. However, this technique requires rendering texture-based polygons, an option
that may not be available on low-end graphics hardware or in situations when we want to use the available
texture for other purposes.
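The difference between the two orders of operations can be seen on a single edge (a sketch; the two-color step function cmap is a deliberately extreme example, and all names are ours):

```python
def lerp(a, b, t):
    """Linear interpolation between two color tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def cmap(s):
    """A deliberately non-linear scalar-to-color map: red below 0.5, blue above."""
    return (1.0, 0.0, 0.0) if s < 0.5 else (0.0, 0.0, 1.0)

s0, s1 = 0.0, 1.0   # scalar values at the two edge vertices
t = 0.3             # interpolation parameter for a pixel along the edge

# Texture-based: interpolate the scalar first, then apply the colormap.
texture_color = cmap(s0 + (s1 - s0) * t)     # a color that exists in the map

# Vertex-based: apply the colormap at the vertices, then interpolate colors.
vertex_color = lerp(cmap(s0), cmap(s1), t)   # a red-blue blend not in the map
```

The vertex-based result is a mixture of the two vertex colors, which may never occur in the colormap itself; the texture-based result is always a genuine colormap entry.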
Color banding.
Choosing a small number of colors N inevitably leads to the color banding effect. This is usually
not desirable, as it creates discrete visual artifacts in the rendered image that do not reflect any of the input
data. Color banding can be avoided by increasing the number of colors in the colormap. The more quickly
the data varies spatially, the smoother the colormap has to be to avoid color banding.
Additional issues. Selecting an optimal colormap is further influenced by a number of additional
considerations, as follows:
• Geometry: Not all colors are equally strongly perceived when displayed on surfaces that have the same
area. Perceiving a color accurately is strongly influenced by the colors displayed in neighboring areas.
Densely packed colors are not perceived separately below a certain spatial scale, but are blended together
by the human eye. This becomes an issue when the areas in question are very small, such as when
color mapping a point cloud.
• User group: About 6 to 10 percent of all men are unable to correctly separate red from green (or a larger
variety of hues). This and other forms of colorblindness should be taken into account when designing
colormaps in critical applications intended to be used by a large public.
• Medium: It is hard to design rich colormaps, containing more than a dozen colors, that look the same
on all devices. A common mistake in practice is to use hue-based colormaps, such as the rainbow
colormap, and display the resulting visualization on luminance-based devices, such as black-and-white
printed material.
• Contrast: Some techniques work on the colors of an existing image: modifying the gray values, or
colors in order to emphasize details of interest. Such techniques are discussed in the context of image
processing.
Designing an effective colormap involves knowledge of the application domain conventions, the typical data
distribution, the visualization goals, general perception theory, the intended output devices, and user preferences.
There is no universally effective colormap.
5.3 Contouring
Color banding is related to a fundamental and widely used visualization technique called contouring.
Points located on such a color border, drawn in black in Figure 5.7(a), are called a contour line, or
isoline. Formally, a contour line C is defined as all points p in a dataset D that have the same scalar value,
or isovalue s(p) = x, or
C(x) = {p ∈ D|s(p) = x}.
We can actually combine the advantages of contours and color mapping by simply drawing contours for
all values of interest on top of a color-mapped image that uses a rich colormap. Besides indicating points
where the data has specific values, contours can also tell us something about the data variation itself. Areas
where contours are closer to each other indicate higher variations of the scalar data. The scalar distance
between consecutive contours (which is constant) divided by the spatial distance between the same contours
(which is not constant) is exactly the derivative of the scalar signal.
Contour properties. First, isolines never stop inside the dataset
itself: they either close upon themselves, or stop when reaching the dataset border. Second, an isoline never
intersects (crosses) itself, nor does it intersect an isoline for another scalar value.
Contours are perpendicular to the gradient of the contoured function. The gradient is the direction of
the function’s maximal variation, whereas contours are points of equal function value, so the tangent to a
contour is the direction of the function’s minimal (zero) variation.
Computing contours. A dataset D is defined as a set of cells carrying node or cell scalar
data, plus additional basis functions. Being the intersection of the function graph with a horizontal plane, the
isoline of such a dataset is piecewise linear and has topological dimension 1; that is, it is a polyline.
For every edge e = (pi, pj) of the cell c in dataset we test whether the isoline value v is between the scalar
vertex attributes vi and vj corresponding to the edge end points pi and pj. If the test succeeds, the isoline
intersects e at a point
q = ( pi(vj − v) + pj(v − vi) ) / (vj − vi).
We repeat the previous procedure for all edges of our current cell and finally obtain a set of intersection
points S = {qi} of the isoline with the cell. Next, we must connect these points together to obtain the actual
isoline. Since we know that the isoline is piecewise linear over a cell, we can use line segments to connect
these points. If S contains more than 2 points, as illustrated in Figure 5.11 for the quad cell which has
4 intersection points, there are two possibilities for connecting the four intersection points: creating two
separate loops, or creating a single contour loop. When we use a triangular mesh, such ambiguities do not
exist at contouring time, because the ambiguity is shifted to the quad-splitting process: splitting diagonally
from the lower-left to the upper-right vertex, or from the upper-left to the lower-right vertex.
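The edge-intersection test and the formula for q above can be sketched as (plain Python; the function name is ours):

```python
def edge_intersection(p_i, p_j, v_i, v_j, v):
    """Intersection of the isoline of value v with the edge (p_i, p_j),
    where v_i, v_j are the scalar values at the edge endpoints.
    Returns None if the isoline does not cross this edge."""
    if v_i == v_j or not (min(v_i, v_j) <= v <= max(v_i, v_j)):
        return None
    # q = (p_i (v_j - v) + p_j (v - v_i)) / (v_j - v_i), rewritten as a lerp:
    t = (v - v_i) / (v_j - v_i)
    return tuple(a + (b - a) * t for a, b in zip(p_i, p_j))

# Example: isovalue 1.0 crosses an edge whose endpoint values are 0.0 and 2.0
# at its midpoint.
q = edge_intersection((0.0, 0.0), (1.0, 0.0), 0.0, 2.0, 1.0)
```

The rewritten form q = pi(1 − t) + pj t with t = (v − vi)/(vj − vi) is algebraically identical to the fraction above.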
Contouring needs to have at least a piecewise linear, C^0 dataset as input. This means that we cannot
directly contour image data, for example, which is actually a uniform grid with piecewise constant interpolation,
and thus represents a discontinuous function. Resampling image datasets to piecewise linear datasets
can be done; however, this has the hidden effect of changing the continuity assumptions on the dataset
from piecewise constant to piecewise linear. While this should not pose problems in many cases, there are
situations when such changes can lead to highly incorrect visualizations.
The most popular methods that reduce the number of operations done per cell are the marching squares
method, which works on 2D datasets with quad cells, and its marching cubes variant, which works on 3D
datasets with hexahedral cells.
5.3.1 Marching Squares
The method begins by determining the topological state of the current cell with respect to the isovalue. A
quad cell thus has 2^4 = 16 different topological states. The state of a quad cell can be represented by a
4-bit integer index, where each bit stores the inside/outside state of a vertex. This integer can be used to
index a case table (see Figure 5.12).
The marching squares algorithm constructs independent line segments for each cell, which are stored in
an unstructured dataset type, given that isolines have no regular structure with respect to the grid they are
computed on. A useful post-processing step is to merge the coincident endpoints of line segments originating
from neighboring grid cells that share an edge. Besides decreasing the isoline dataset size, this also creates a
dataset on which operations such as computing vertex data from cell data via averaging are possible.
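A single marching-squares step might look as follows (a sketch; the case table encodes the 16 topological states discussed above, with one arbitrary choice made for the two ambiguous cases):

```python
# Case table for marching squares (a sketch). Cell vertices are ordered
# counterclockwise (0: bottom-left, 1: bottom-right, 2: top-right,
# 3: top-left); edges are 0: bottom, 1: right, 2: top, 3: left.
# Each entry lists the pairs of cell edges joined by an isoline segment.
CASES = {
    0: [], 15: [],
    1: [(3, 0)], 14: [(3, 0)],
    2: [(0, 1)], 13: [(0, 1)],
    3: [(3, 1)], 12: [(3, 1)],
    4: [(1, 2)], 11: [(1, 2)],
    6: [(0, 2)], 9: [(0, 2)],
    7: [(2, 3)], 8: [(2, 3)],
    5: [(3, 0), (1, 2)],   # ambiguous case: one of two possible pairings
    10: [(0, 1), (2, 3)],  # ambiguous case: one of two possible pairings
}

def cell_index(values, iso):
    """4-bit topological state of a quad cell: bit i is set if vertex i
    is inside the contour (value >= iso)."""
    idx = 0
    for i, v in enumerate(values):
        if v >= iso:
            idx |= 1 << i
    return idx

def cell_segments(values, iso):
    """One marching-squares step: edge pairs crossed by the isoline."""
    return CASES[cell_index(values, iso)]
```

In a full implementation, each edge pair is then turned into an actual line segment by computing the two edge-isovalue intersection points.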
5.3.2 Marching Cubes
The algorithm accepts 3D instead of 2D scalar datasets and generates 2D isosurfaces instead of 1D isolines.
Since a hex cell has 2^8 = 256 different topological cases, in practice this number is reduced to only 15 by
using symmetry considerations. Marching cubes generates a set of polygons for each contoured cell, which
includes triangles, quads, pentagons, and hexagons. The resulting 3D isosurface is saved as an unstructured
dataset. An additional step needed for marching cubes is the computation of isosurface normals. These are
needed for smooth shading. Normals can be computed by normal averaging. Alternatively, since isosurfaces
are orthogonal to the scalar data gradient at each point, normals can be directly computed as being the
normalized gradients of the scalar data.
As a general rule, most isosurface details that are under or around the size of the resolution of the
isosurfaced dataset can be either actual data or artifacts, and should be interpreted with great care.
We can say that the slicing and contouring operations are commutative.
Marching algorithm variations. There exist similar algorithms to marching cubes for all cell types,
such as lines, triangles, and tetrahedra. These algorithms can treat all grid types, as long as all encountered
cell types are supported. All variants produce unstructured grids.
Isosurfaces can be also generated and rendered using point-based techniques.
Dividing cubes algorithm. A classical algorithm for generating and rendering isosurfaces using point clouds
is dividing cubes. It works for 3D uniform and rectilinear grids (box-shaped cells). Dividing cubes
approximates the isosurface using constant basis functions, whereas marching cubes uses linear basis functions.
For dividing cubes, the ideal minimal cell size is that of a screen pixel, in which case every point primitive
is of pixel size and there are no visual gaps or other rendering artifacts in the rendered isosurface.
5.4 Height Plots
Height plots, also called elevation or carpet plots, can be described by the following mapping operation, given a
two-dimensional surface Ds ⊂ D, part of a scalar dataset D:
m : Ds → D, m(x) = x + s(x)n(x), ∀x ∈ Ds,
where s(x) is the scalar value of D at the point x and n(x) is the normal to the surface Ds at x. The height
plot mapping operation “warps” a given surface Ds included in the dataset along the surface normal, with
a factor proportional to the scalar value.
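For a planar base surface lying in the z = 0 plane (so n(x) is the constant vector (0, 0, 1)), the warping m(x) = x + s(x)n(x) can be sketched as (the names and the example scalar function are ours):

```python
def height_plot(points, s, n=(0.0, 0.0, 1.0)):
    """Warp each 3D point x on the base surface to x + s(x) * n(x).
    Here s is a function giving the scalar value at a point, and the
    normal n is constant because the base surface is a plane."""
    warped = []
    for x in points:
        sx = s(x)
        warped.append(tuple(xi + sx * ni for xi, ni in zip(x, n)))
    return warped

# Example: a small patch of the plane, with scalar s(x, y) = x + y.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
lifted = height_plot(pts, lambda p: p[0] + p[1])
```

For a curved base surface such as the torus mentioned below, n(x) would vary per point instead of being a constant.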
The most common variant of height plots warps a planar surface Ds. However, we can produce height plots
starting from different base surfaces Ds, such as the torus shown in Figure 5.18. In this image, both the height
and the color encode the scalar value: the two visual cues strengthen each other to convey the scalar value
information. If desired, one can encode two different scalar values with a height plot, one into the plot’s
height and the other into the plot’s color.
5.4.1 Enridged Plots
The height plots, color mapping, and contouring techniques have their own strengths and limitations:
• Height plots are easy to learn, intuitive, generate continuous images, and show the local gradient of the
data. However, quantitative information can be hard to extract from such plots; for example, it is not easy
to tell which peak is the highest. Also, 3D occlusion effects can occur.
• Color mapping shares the advantages of height plots and does not suffer from 3D occlusion problems.
However, making quantitative judgments based on color data can be hard, and requires carefully designed
colormaps.
• Contour plots are effective in communicating precise quantitative values. However, such plots are less
intuitive to use. Also, they do not create a dense, continuous image.
Combining height plots, contour plots, and color mapping alleviates some, but not all, of the problems of
each plot type.
An alternative solution is proposed by enridged contour maps. The idea is to use the non-linear
mapping
z(x, y) = s f(x, y) + s h g( (f(x, y) mod h) / h ),
instead of the linear mapping z(x, y) = s f(x, y), where s > 0 is the plot’s scaling factor and g(u) = au(1 − u) is
a parabolic function. The effect is to add parabolic bumps of height a to consecutive intervals of size h in the
range of f.
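The non-linear enridged mapping can be sketched as (plain Python; the parameter defaults are illustrative):

```python
def enridged_height(f_xy, s=1.0, h=0.25, a=1.0):
    """Enridged height mapping z = s*f + s*h*g((f mod h)/h), with the
    parabolic bump profile g(u) = a*u*(1 - u)."""
    u = (f_xy % h) / h          # position within the current interval of size h
    g = a * u * (1.0 - u)       # parabolic bump, zero at both interval ends
    return s * f_xy + s * h * g
```

At data values that are exact multiples of h the bump vanishes (u = 0), so the plot returns to the plain height s*f there, which is what produces the contour-like ridges.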
Enridged plots combine the appearance of contour plots and height plots. The nested cushion-like shapes
that emerge in this type of plot convey a sensation of height which is much stronger than in classical height
plots.
Enridged plots can be also extended to use a hierarchy of contour levels, by using two instances of
parameter-pairs (h1, a1) (larger intervals) and (h2, a2) (denser contours) such that a2 < a1 and h1 = kh2.
Enridged plots can be further tuned to emphasize the nesting relationship between different regions. For
this we simply replace the symmetric parabolic profile g(u) = u(1 − u) by an asymmetric one, g(u) = u(1 − u)^n,
which increases sharply close to u = 0 and is relatively flat close to u = 1. The image now resembles the
structure of a Venn-Euler diagram. These structures can be useful for visualizing data hierarchies in different
contexts, beyond scalar fields.
5.5 Conclusion
We presented a number of fundamental methods for visualizing scalar data: color mapping, contouring,
slicing, and height plots. Color mapping assigns a color as a function of the scalar value at each point of
a given domain. Contouring displays all points within a given two- or three-dimensional domain that have
a given scalar value. Height plots deform the scalar dataset domain in a given direction as a function of
the scalar data. The main advantages of these techniques are that they produce intuitive results, easily
understood by users, and they are simple to implement. However, such techniques also have a number of
restrictions.
What have you learned in this chapter?
This chapter presents the most popular scalar visualization techniques: color mapping, contouring, and height
plots. It talks about the design of effective colormaps (important design decisions pertaining to the construction
of colormaps) and about contouring in two and three dimensions. The marching squares and marching cubes
algorithms are well explained; strengths and limitations of the standard techniques are outlined.
What surprised you the most?
Specific colormaps for categorical datasets have been designed. An excellent online resource that allows
choosing a categorical colormap based on data properties and task types is ColorBrewer. It is also surprising
to know how we can draw more than a single isosurface of the same dataset in one visualization. The process
of rendering several nested semitransparent isosurfaces that correspond to a discrete sequence of isovalues
can be generalized to the continuous case, as we shall see with the volume rendering technique in Chapter
10.
What applications not mentioned in the book could you imagine for the techniques explained in this chapter?
Perhaps we can visualize probabilities as height plots, or as scalar fields with isolines and colors mapped to
the scalar value. Besides being a visual cue, those images (datasets) can also suggest shortest paths between,
e.g., arbitrary points A and B situated on the surface of such a plot.
1. EXERCISE 1
Consider the simple scalar visualization of a 2D slice from a human head scan shown in the figure below.
Here, six different colormaps are used. Explain, for each of the sub-figures, what are the advantages (if any)
and disadvantages (if any) of the respective colormap.
Figure 1: Scalar visualization of a 2D slice dataset using six different colormaps (see Chapter 5).
Hints: Consider what types of structures you can easily see in one visualization as compared to another
visualization.
Table 1: Advantages and disadvantages of different colormaps.

Rainbow (a)
Advantages: We can relatively easily distinguish the harder tissues (color-mapped to yellow, orange, and red) and the air (color-mapped to dark blue).
Disadvantages: Focus: warm colors attract more attention than cold colors; depending on the application, these may not be the values we want to focus on. Linearity: colors change faster per spatial unit in the higher yellow-to-red range than in the lower blue-to-cyan range, so a linear signal would be perceived as nonlinear.

Grayscale (b)
Advantages: Grayscale directly encodes data into luminance, and thus has no issues regarding the discrimination of different hues. Color ordering is natural (from dark to bright), which helps goal 2. Rendering grayscale images is less sensitive to color reproduction variability across devices.
Disadvantages: Telling the difference between two gray values (goal 3) is harder than when using hue-based colormaps.

Two-color (c)
Advantages: If the two colors are perceptually quite different, both in hue and in luminance, the resulting colormap allows an easy color ordering and produces a more linear result than the rainbow colormap. If neither of the two colors is too bright, it is suitable for non-compact data displayed on a white background.
Disadvantages: It arguably offers less dynamic range. When this colormap is used on a 3D shaded surface, the luminance perceived in the final visualization is ambiguous.

Heat map (d)
Advantages: Compared to the rainbow colormap, it uses a smaller set of hues but adds luminance as a way to order colors in an intuitive manner. It allows one to discriminate between more data values than the two-hue colormap, since it uses more hues.
Disadvantages: Its strong dependence on luminance makes it less suitable for color-coding data on 3D shaded surfaces, just like two-hue non-isoluminant colormaps.

Diverging (e)
Advantages: Can be interpreted as a temperature colormap, where the average value (white) is considered neutral, and we want to emphasize points that are colder, respectively warmer, than this value.
Disadvantages: *

Diverging (f)
Advantages: Good for situations where we want to emphasize the deviation of data values from the average value fmid; it also effectively supports the task of assessing value differences. Since the maximum luminance of this colormap is lower than that of pure white, we can use it also on non-compact data domains. More hues than in two-hue colormaps allow one to discriminate between more data values.
Disadvantages: *

End of Table
2. EXERCISE 2
Consider an application where we use color-mapping to visualize a scalar field defined on a simple 2D
uniform grid consisting of quad cells. The scalar field values, recorded at the cell vertices, are in the range
[smin, smax]. Next to the color-mapped grid, we show, as usual, a color legend, where the two extreme
colors (at the endpoints of the color legend) correspond to the values smin and smax, respectively. Given
this set-up, can we say for sure that any color shown in the color legend will appear in at least one point
(pixel) of the color-mapped grid? If so, argue why. If not, detail at least one situation when this will not
occur.
Yes, in the case of isoluminant colormaps, or of texture-based colormaps, which produce
pixel-accurate renderings, i.e., reconstructions of our given sampled signal. Scalar fields produce such legends
too. It also depends on the color interpolation done by the graphics hardware: the most widespread vertex-based
color mapping produces suboptimal results for dataset regions where the data varies too quickly.
No, in case the resolution of the output image is low, which affects our visualization in such a way that non-linear
hues on the color legend start being perceived as misleading. Images that exhibit color banding can have fewer
hues than the color legend, which is displayed as a continuous strip.
3. EXERCISE 3
A simple, though not perfect, way to create 2D contours or isolines, is to use a so-called delta colormap.
Given a scalar contour-value (or isovalue) a, and a dataset with the scalars in the range [m, M], a delta
colormap maps all scalar values in the ranges [m, a − d] and [a + d, M] to one color, and the values in the
range [a − d, a + d] to another color. Here, d is typically very small as compared to the scalar range M − m.
Consider now the application of this delta colormap to a scalar field stored on a simple uniform 2D grid
having quad cells, as compared to drawing the equivalent isoline on the same grid. Detail at least two
drawbacks of the delta colormap as compared to the isoline visualization. Next, explain which parameters
we could fine-tune (e.g., value of d, sampling rate of the grid, color-interpolation techniques used on the grid
cells, or other parameters) in order to improve the quality of the delta-colormap-based visualization.
Drawbacks of applying the delta colormap to a scalar field on a uniform 2D grid with quad cells:
• The delta colormap on a scalar field does not draw the isoline itself; it highlights a sub-cloud of points
that are close in terms of their scalar values. This does not tell us much about topology.
• Some bright hues can appear invisible on white backgrounds, causing a loss of the visual information
depicted.
It is better to use isoluminant colors for the delta colormap. It is also good to keep the parameter d adjusted
to the sampling resolution of the grid, to help marching algorithms.
Contouring needs to have at least piecewise linear datasets as input. We cannot directly contour the image
data when the scalar field represents a piecewise constant interpolation of a discontinuous function. Resampling
to a piecewise-linear dataset changes the continuity assumptions on the dataset from piecewise-constant
to piecewise-linear, which can lead to incorrect visualizations.
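The delta colormap from the exercise statement can be sketched as (the two color choices are hypothetical):

```python
def delta_colormap(f, a, d, band=(0.0, 0.0, 0.0), rest=(1.0, 1.0, 1.0)):
    """Map scalars in [a - d, a + d] to one color (the contour 'band')
    and all other values in [m, M] to another, approximating the
    isoline f = a."""
    return band if a - d <= f <= a + d else rest
```

Because the band has a finite width 2d, it highlights a strip of points around the isoline rather than the polyline itself, which is the first drawback listed above.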
4. EXERCISE 4
Contouring can be implemented by various algorithms. In this context, the marching cubes algorithm
(and its variations such as marching squares or marching tetrahedra) proposes several optimizations. Compared
to a ‘naive’ implementation of contouring that does not use the marching technique, what are the
main computational advantages of the marching technique?
In a naive implementation, for each cell we must test whether the isovalue intersects every cell edge and,
if so, compute the exact intersection location. In contrast, marching algorithms:
• reduce the number of operations done per cell;
• compute edge-isovalue intersections only on those edges that are known to be intersected for a specific
topological state;
• produce an unstructured dataset that, besides being smaller, supports operations such as computing
vertex data from cell data via averaging.
5. EXERCISE 5
Consider a 2D constant scalar field, f(x, y) = k, for all values of (x, y). What will be the result of
applying the marching squares technique on this field for an isovalue equal to k? Does the result depend
on the cell type or dimension – for instance, do we obtain a different result if we used marching triangles
on a 2D unstructured triangle-grid rather than marching squares on a 2D quad grid? Hints: Consider the
vertex-coding performed by the marching algorithms.
Because the field is constant, every vertex has the same scalar value k, so the vertex coding assigns the
same inside/outside code to all vertices of every cell (all 0 or all 1, depending on whether the test is f > k or
f ≥ k). Every cell therefore falls into one of the two trivial topological cases, and no line segments are generated:
the resulting isoline is empty. The result does not depend on the cell type or dimension; marching triangles,
which has no ambiguous cases, produces the same empty result on a 2D unstructured triangle grid as marching
squares does on a quad grid. In general, isolines have no fixed structure with respect to the grid they are computed on.
6. EXERCISE 6
An isoline, as computed by marching squares, is represented by a 2D polyline, or set of line-segments.
Can two such line segments from the same isoline intersect (we do not consider segments that share endpoints
to intersect)? Can two such line segments from different isolines of the same scalar field intersect? If your
answer is yes, sketch the situation by showing the quad grid, vertex values, isoline segments, and their
intersection. If your answer is no, argue it based on the implementation details of marching squares.
No, in both cases. Within each cell, the scalar field is a (bi)linear interpolant and hence single-valued, so
every point in the domain carries exactly one scalar value; two isolines for different isovalues would have to
share a point with two different values, which is impossible. For the same isoline, marching squares generates
at most two non-touching segments per cell (chosen from a fixed topological case table), and segments in
different cells can only meet at shared cell edges, where they share endpoints. Hence an isoline never intersects
(crosses) itself, nor does it intersect an isoline for another scalar value.
7. EXERCISE 7
Consider a two-variable scalar function z = f(x, y) which is differentiable everywhere. Figure 5.9 (also
displayed below) shows that the gradient of the function is a vector which is locally orthogonal to the contours
(or isolines) of the function. Give a mathematical proof of this.
Hints: Consider the tangent vector to the contour at a point (x, y), and express the variation (increase
or decrease) of the function’s value in a small neighborhood around the respective point.
A contour is a set of points of equal function value. Let r(s) = (x(s), y(s)) parametrize the contour through
a point (x, y), so that f(r(s)) = const. Differentiating with respect to s using the chain rule gives
∇f · r′(s) = 0; since r′(s) is the tangent vector to the contour, the gradient ∇f is orthogonal to the contour at
every point. Equivalently, the tangent is the direction of the function’s minimal (zero) variation, ∆f = 0, while
the gradient is the direction of its maximal variation.
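The orthogonality of gradient and tangent can be checked numerically; the following sketch uses f(x, y) = x² + y² (whose contours are circles) and a central-difference gradient, with the helper name `grad` being an illustrative choice:

```python
import math

def grad(f, x, y, h=1e-6):
    """Central-difference estimate of the gradient of f at (x, y)."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

f = lambda x, y: x * x + y * y          # contours of f are circles
x, y = math.cos(0.7), math.sin(0.7)     # a point on the contour f = 1
gx, gy = grad(f, x, y)
tx, ty = -y, x                          # tangent to the circle at (x, y)
dot = gx * tx + gy * ty                 # grad . tangent: should vanish
```

The analytic gradient is (2x, 2y), and its dot product with the tangent (−y, x) is 2x(−y) + 2y(x) = 0, matching the proof above.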
8. EXERCISE 8
Contouring and slicing are commutative operations, in the sense that slicing a 3D isosurface with a plane
yields the same result as contouring the 2D data-slice obtained by slicing the input 3D scalar volume with
the same plane. Consider now the set of 2D contours obtained from a 3D scalar volume by applying the
2D marching squares algorithm on a set of closely and uniformly spaced 2D data-slices extracted from that
volume. Describe (in as much detail as possible) how you would use this set of 2D contours to reconstruct
the corresponding 3D isosurface.
If the set is dense, i.e., the slice planes are close to each other, and the dataset does not exhibit overly sharp
variations, we can reconstruct the 3D isosurface by connecting corresponding points on the isolines of each
pair of adjacent slices, stitching the resulting point pairs into triangles that form the surface band between
the two slices.
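Assuming the per-slice contours are closed, equally sampled, and index-aligned (a strong assumption in practice), the stitching step can be sketched by splitting each quad of the band between two adjacent slices into two triangles:

```python
def stitch_slices(lower, upper, z0, z1):
    """Triangulate the surface band between two matching 2D contour
    polylines lying on adjacent slices at heights z0 and z1."""
    n = len(lower)
    triangles = []
    for i in range(n):
        j = (i + 1) % n                    # next vertex on the closed contour
        a = (lower[i][0], lower[i][1], z0)
        b = (lower[j][0], lower[j][1], z0)
        c = (upper[i][0], upper[i][1], z1)
        d = (upper[j][0], upper[j][1], z1)
        triangles.append((a, b, c))        # each band quad is split
        triangles.append((b, d, c))        # into two triangles
    return triangles
```

In real data, adjacent contours can differ in vertex count or branch where the isosurface changes topology between slices; that is exactly where this naive stitching fails and correspondence-finding heuristics are needed.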
9. EXERCISE 9
Consider an application in which the user extracts an isosurface, and would like to visualize how thick the
resulting shape is, using e.g. color mapping. That is, each vertex of the isosurface’s unstructured mesh should
be assigned a (positive) scalar value indicating how thick the shape is at that location. For this problem:
• Propose a definition of shape thickness that would match our intuitive understanding (you can support
your explanation by drawings, if needed)
• Propose an algorithm to compute the above definition, given an isosurface stored as an unstructured
mesh
• Discuss the computational complexity of your proposed algorithm
• Discuss the possible challenges of your proposal (e.g., configurations or shapes for which your definition
would not deliver the expected result, and/or the algorithm would not work correctly with respect to
your definition).
Hints:
• Consider first the related problem of computing the thickness of a 2D contour, before moving to the
3D case.
• Consider first that the contour is closed, before treating the case when it is open because it reaches
the boundary of the dataset.
The thickness of a contour generally differs depending on the direction of measurement.
I propose to define the thickness in the direction of the inverted normal. At each sample point pi we compute
the distance d between the sample point itself and the point e where the inverted normal n exits the closed contour.
The point e satisfies e = pi − d n; once we have detected e, we can compute d = ||pi − e|| and store this value at
the sample point pi.
Vertex normal values can be computed as the area-weighted average of the polygon normals of all the
cells that use a given vertex, using the formula:
nj = Σi A(Ci) n(Ci) / Σi A(Ci),
where the sums run over all cells Ci containing vertex j, A(Ci) is the area of cell Ci, and n(Ci) is the cell’s normal.
Implementing an unstructured grid implies storing the coordinates of the collection of all sample points {pj}
and then storing the vertex indices of all cells {ci}. These indices are usually stored as integer offsets into the
grid’s sample-point list {pj}.
In order to calculate the coordinate vector e we use the following steps:
• for all cells ci in which pj is a vertex, we compute the cell normals and then the vertex normal nj by averaging;
• for each vertex and its normal, we cast a ray in the direction of the inverted normal, starting a loop with a
small value d ≥ 0 and increasing d with each iteration, until the point e = pj − d nj lies on
at least one of the cells of our unstructured contour mesh;
• once we have obtained such a scalar d, we store the point e at the vertex pj and use d = ||pj − e|| as the
measure of the contour’s thickness in the direction of the normal.
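A 2D version of these steps (per the first hint) can be sketched as follows: for each vertex of a closed, counter-clockwise contour polyline we estimate the inward (inverted) normal and cast a ray against all non-incident segments; the nearest hit gives the thickness. This is an O(n²) sketch; function names and tolerances are illustrative:

```python
import math

def ray_segment(px, py, dx, dy, ax, ay, bx, by):
    """Smallest t > 0 with (px,py) + t*(dx,dy) on segment a-b, else None."""
    cross = lambda ux, uy, vx, vy: ux * vy - uy * vx
    vx, vy = bx - ax, by - ay
    denom = cross(dx, dy, vx, vy)
    if abs(denom) < 1e-12:                 # ray parallel to the segment
        return None
    wx, wy = ax - px, ay - py
    t = cross(wx, wy, vx, vy) / denom      # distance along the ray
    u = cross(wx, wy, dx, dy) / denom      # position along the segment
    return t if t > 1e-9 and 0.0 <= u <= 1.0 else None

def contour_thickness(points):
    """Per-vertex thickness of a closed CCW contour: distance travelled by
    a ray cast along the inward (inverted) normal until it exits."""
    n = len(points)
    result = []
    for i in range(n):
        px, py = points[i]
        (ax, ay), (bx, by) = points[i - 1], points[(i + 1) % n]
        tx, ty = bx - ax, by - ay          # tangent estimate at vertex i
        norm = math.hypot(tx, ty)
        dx, dy = -ty / norm, tx / norm     # inward normal for CCW orientation
        best = None
        for j in range(n):                 # O(n) hit test -> O(n^2) overall
            if j in (i, (i - 1) % n):      # skip segments incident to p_i
                continue
            t = ray_segment(px, py, dx, dy, *points[j], *points[(j + 1) % n])
            if t is not None and (best is None or t < best):
                best = t
        result.append(best)
    return result
```

The 3D version replaces segments with triangles (area-weighted vertex normals, ray-triangle tests), and a spatial index such as a grid or BVH would reduce the quadratic cost; open contours and grazing normals are the problematic configurations the exercise asks about.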
10. EXERCISE 10
One of the challenges of practical use of isosurfaces in 3D is selecting an ‘interesting’ scalar value, for which
one obtains an ‘interesting’ isosurface. We cannot solve this challenge in general, since different applications
may have different definition of what an interesting isosurface is. However, consider an application where you
want to automatically select a small number of isovalues (say, 5) and display these to give a quick overview
of the data changes. How would you automatically select these isovalues? Hints: Define, first, what you
consider to be the interesting structures in the data (e.g., areas of rapid scalar variation, or areas where an
isosurface changes topology).
I would pick 5 isovalues automatically by dividing the scalar range [min, max] as follows:
i1 = min + 1/3 · range. Then I would pick the second isovalue inside the largest remaining interval [i1, max] as
i2 = i1 + 1/3 · (max − i1). The third isovalue I would pick as i3 = i2 + 1/3 · (max − i2), the fourth and fifth as
i4 = max − 2/3 · (max − i3) and i5 = min + 2/3 · (i1 − min), accordingly.
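A literal sketch of the selection scheme described above (each new isovalue placed one third into the remaining upper interval, plus one extra value below i1):

```python
def pick_isovalues(fmin, fmax):
    """Compute the five isovalues of the scheme above for range [fmin, fmax]."""
    i1 = fmin + (fmax - fmin) / 3.0
    i2 = i1 + (fmax - i1) / 3.0
    i3 = i2 + (fmax - i2) / 3.0
    i4 = fmax - 2.0 * (fmax - i3) / 3.0
    i5 = fmin + 2.0 * (i1 - fmin) / 3.0
    return sorted([i1, i2, i3, i4, i5])
```

For the range [0, 1] this yields five distinct, strictly increasing isovalues, denser toward the upper end of the range.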
11

More Related Content

What's hot

DIGITAL IMAGE PROCESSING - LECTURE NOTES
DIGITAL IMAGE PROCESSING - LECTURE NOTESDIGITAL IMAGE PROCESSING - LECTURE NOTES
DIGITAL IMAGE PROCESSING - LECTURE NOTESEzhilya venkat
 
Lecture 06 geometric transformations and image registration
Lecture 06 geometric transformations and image registrationLecture 06 geometric transformations and image registration
Lecture 06 geometric transformations and image registrationobertksg
 
Perception in artificial intelligence
Perception in artificial intelligencePerception in artificial intelligence
Perception in artificial intelligenceMinakshi Atre
 
Image feature extraction
Image feature extractionImage feature extraction
Image feature extractionRushin Shah
 
COMPUTER GRAPHICS-"Projection"
COMPUTER GRAPHICS-"Projection"COMPUTER GRAPHICS-"Projection"
COMPUTER GRAPHICS-"Projection"Ankit Surti
 
Raster scan systems with video controller and display processor
Raster scan systems with video controller and display processorRaster scan systems with video controller and display processor
Raster scan systems with video controller and display processorhemanth kumar
 
Icon based visualization techniques
Icon based visualization techniquesIcon based visualization techniques
Icon based visualization techniquesWafaQKhan
 
Raster animation
Raster animationRaster animation
Raster animationabhijit754
 
Projection In Computer Graphics
Projection In Computer GraphicsProjection In Computer Graphics
Projection In Computer GraphicsSanu Philip
 
Raster scan and random scan
Raster scan and random scanRaster scan and random scan
Raster scan and random scanKABILESH RAMAR
 

What's hot (20)

DIGITAL IMAGE PROCESSING - LECTURE NOTES
DIGITAL IMAGE PROCESSING - LECTURE NOTESDIGITAL IMAGE PROCESSING - LECTURE NOTES
DIGITAL IMAGE PROCESSING - LECTURE NOTES
 
Rgb and cmy color model
Rgb and cmy color modelRgb and cmy color model
Rgb and cmy color model
 
Spline representations
Spline representationsSpline representations
Spline representations
 
Computer graphics realism
Computer graphics realismComputer graphics realism
Computer graphics realism
 
Color Models
Color ModelsColor Models
Color Models
 
Lecture 06 geometric transformations and image registration
Lecture 06 geometric transformations and image registrationLecture 06 geometric transformations and image registration
Lecture 06 geometric transformations and image registration
 
Perception in artificial intelligence
Perception in artificial intelligencePerception in artificial intelligence
Perception in artificial intelligence
 
Image feature extraction
Image feature extractionImage feature extraction
Image feature extraction
 
Color image processing
Color image processingColor image processing
Color image processing
 
Foundation of A.I
Foundation of A.IFoundation of A.I
Foundation of A.I
 
Depth Buffer Method
Depth Buffer MethodDepth Buffer Method
Depth Buffer Method
 
COMPUTER GRAPHICS-"Projection"
COMPUTER GRAPHICS-"Projection"COMPUTER GRAPHICS-"Projection"
COMPUTER GRAPHICS-"Projection"
 
fractals
fractalsfractals
fractals
 
Raster scan systems with video controller and display processor
Raster scan systems with video controller and display processorRaster scan systems with video controller and display processor
Raster scan systems with video controller and display processor
 
Icon based visualization techniques
Icon based visualization techniquesIcon based visualization techniques
Icon based visualization techniques
 
Raster animation
Raster animationRaster animation
Raster animation
 
lecture4 raster details in computer graphics(Computer graphics tutorials)
lecture4 raster details in computer graphics(Computer graphics tutorials)lecture4 raster details in computer graphics(Computer graphics tutorials)
lecture4 raster details in computer graphics(Computer graphics tutorials)
 
Projection In Computer Graphics
Projection In Computer GraphicsProjection In Computer Graphics
Projection In Computer Graphics
 
Raster scan and random scan
Raster scan and random scanRaster scan and random scan
Raster scan and random scan
 
Task programming
Task programmingTask programming
Task programming
 

Similar to 05 Scalar Visualization

User Interactive Color Transformation between Images
User Interactive Color Transformation between ImagesUser Interactive Color Transformation between Images
User Interactive Color Transformation between ImagesIJMER
 
Colorization of Gray Scale Images in YCbCr Color Space Using Texture Extract...
Colorization of Gray Scale Images in YCbCr Color Space Using  Texture Extract...Colorization of Gray Scale Images in YCbCr Color Space Using  Texture Extract...
Colorization of Gray Scale Images in YCbCr Color Space Using Texture Extract...IOSR Journals
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Graph Coloring and Its Implementation
Graph Coloring and Its ImplementationGraph Coloring and Its Implementation
Graph Coloring and Its ImplementationIJARIIT
 
Data Display and Cartography-I.pdf
Data Display and Cartography-I.pdfData Display and Cartography-I.pdf
Data Display and Cartography-I.pdfAliAhmad587156
 
Application of interpolation in CSE
Application of interpolation in CSEApplication of interpolation in CSE
Application of interpolation in CSEMd. Tanvir Hossain
 
Color Restoration of Scanned Archaeological Artifacts with Repetitive Patterns
Color Restoration of Scanned Archaeological Artifacts with Repetitive PatternsColor Restoration of Scanned Archaeological Artifacts with Repetitive Patterns
Color Restoration of Scanned Archaeological Artifacts with Repetitive PatternsGravitate Project
 
Image parts and segmentation
Image parts and segmentation Image parts and segmentation
Image parts and segmentation Rappy Saha
 
Image colorization
Image colorizationImage colorization
Image colorizationPankti Fadia
 
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
 
Image Processing
Image ProcessingImage Processing
Image ProcessingTuyen Pham
 
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUESA STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUEScscpconf
 

Similar to 05 Scalar Visualization (20)

User Interactive Color Transformation between Images
User Interactive Color Transformation between ImagesUser Interactive Color Transformation between Images
User Interactive Color Transformation between Images
 
Colorization of Gray Scale Images in YCbCr Color Space Using Texture Extract...
Colorization of Gray Scale Images in YCbCr Color Space Using  Texture Extract...Colorization of Gray Scale Images in YCbCr Color Space Using  Texture Extract...
Colorization of Gray Scale Images in YCbCr Color Space Using Texture Extract...
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Graph Coloring and Its Implementation
Graph Coloring and Its ImplementationGraph Coloring and Its Implementation
Graph Coloring and Its Implementation
 
regions
regionsregions
regions
 
Data Display and Cartography-I.pdf
Data Display and Cartography-I.pdfData Display and Cartography-I.pdf
Data Display and Cartography-I.pdf
 
MATLAB
MATLABMATLAB
MATLAB
 
Color_Spaces.pptx
Color_Spaces.pptxColor_Spaces.pptx
Color_Spaces.pptx
 
Application of interpolation in CSE
Application of interpolation in CSEApplication of interpolation in CSE
Application of interpolation in CSE
 
Color Restoration of Scanned Archaeological Artifacts with Repetitive Patterns
Color Restoration of Scanned Archaeological Artifacts with Repetitive PatternsColor Restoration of Scanned Archaeological Artifacts with Repetitive Patterns
Color Restoration of Scanned Archaeological Artifacts with Repetitive Patterns
 
Image parts and segmentation
Image parts and segmentation Image parts and segmentation
Image parts and segmentation
 
Psuedo color
Psuedo colorPsuedo color
Psuedo color
 
Image colorization
Image colorizationImage colorization
Image colorization
 
Ijcnc050213
Ijcnc050213Ijcnc050213
Ijcnc050213
 
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
 
Ggplot2 ch2
Ggplot2 ch2Ggplot2 ch2
Ggplot2 ch2
 
Image Processing
Image ProcessingImage Processing
Image Processing
 
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUESA STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
 
Graph coloring Algorithm
Graph coloring AlgorithmGraph coloring Algorithm
Graph coloring Algorithm
 
Id3115321536
Id3115321536Id3115321536
Id3115321536
 

More from Valerii Klymchuk

Sample presentation slides template
Sample presentation slides templateSample presentation slides template
Sample presentation slides templateValerii Klymchuk
 
05 Clustering in Data Mining
05 Clustering in Data Mining05 Clustering in Data Mining
05 Clustering in Data MiningValerii Klymchuk
 
01 Introduction to Data Mining
01 Introduction to Data Mining01 Introduction to Data Mining
01 Introduction to Data MiningValerii Klymchuk
 
04 Classification in Data Mining
04 Classification in Data Mining04 Classification in Data Mining
04 Classification in Data MiningValerii Klymchuk
 
Crime Analysis based on Historical and Transportation Data
Crime Analysis based on Historical and Transportation DataCrime Analysis based on Historical and Transportation Data
Crime Analysis based on Historical and Transportation DataValerii Klymchuk
 
Artificial Intelligence for Automated Decision Support Project
Artificial Intelligence for Automated Decision Support ProjectArtificial Intelligence for Automated Decision Support Project
Artificial Intelligence for Automated Decision Support ProjectValerii Klymchuk
 

More from Valerii Klymchuk (11)

Sample presentation slides template
Sample presentation slides templateSample presentation slides template
Sample presentation slides template
 
Toronto Capstone
Toronto CapstoneToronto Capstone
Toronto Capstone
 
05 Clustering in Data Mining
05 Clustering in Data Mining05 Clustering in Data Mining
05 Clustering in Data Mining
 
01 Introduction to Data Mining
01 Introduction to Data Mining01 Introduction to Data Mining
01 Introduction to Data Mining
 
02 Related Concepts
02 Related Concepts02 Related Concepts
02 Related Concepts
 
03 Data Mining Techniques
03 Data Mining Techniques03 Data Mining Techniques
03 Data Mining Techniques
 
04 Classification in Data Mining
04 Classification in Data Mining04 Classification in Data Mining
04 Classification in Data Mining
 
Crime Analysis based on Historical and Transportation Data
Crime Analysis based on Historical and Transportation DataCrime Analysis based on Historical and Transportation Data
Crime Analysis based on Historical and Transportation Data
 
Artificial Intelligence for Automated Decision Support Project
Artificial Intelligence for Automated Decision Support ProjectArtificial Intelligence for Automated Decision Support Project
Artificial Intelligence for Automated Decision Support Project
 
Data Warehouse Project
Data Warehouse ProjectData Warehouse Project
Data Warehouse Project
 
Database Project
Database ProjectDatabase Project
Database Project
 

Recently uploaded

EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptxthyngster
 
Identifying Appropriate Test Statistics Involving Population Mean
Identifying Appropriate Test Statistics Involving Population MeanIdentifying Appropriate Test Statistics Involving Population Mean
Identifying Appropriate Test Statistics Involving Population MeanMYRABACSAFRA2
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceSapana Sha
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Sapana Sha
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Cantervoginip
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档208367051
 
Defining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryDefining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryJeremy Anderson
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)jennyeacort
 
Top 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In QueensTop 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In Queensdataanalyticsqueen03
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...dajasot375
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Jack DiGiovanna
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...limedy534
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxUnduhUnggah1
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSINGmarianagonzalez07
 
Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 217djon017
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 

Recently uploaded (20)

EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
 
Identifying Appropriate Test Statistics Involving Population Mean
Identifying Appropriate Test Statistics Involving Population MeanIdentifying Appropriate Test Statistics Involving Population Mean
Identifying Appropriate Test Statistics Involving Population Mean
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts Service
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Canter
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
 
Defining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryDefining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data Story
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
Top 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In QueensTop 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In Queens
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docx
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
 
Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 

05 Scalar Visualization

  • 1. DS-620 Data Visualization Chapter 5. Summary Valerii Klymchuk June 26, 2015 0. EXERCISE 0 5 Scalar Visualization Scalar datasets, or scalar fields, represent functions f : D → R, where D is usually a subset of R2 or R3 . This chapter presents most popular scalar visualization techniques: color mapping, contouring, and height plots. 5.1 Color Mapping Color mapping associates a color with every scalar value. For every point of the domain if interest D, color mapping applies a function c : R → Colors that assigns to that point a color c(s) ∈ Colors which depends on the scalar value s at that point. There are two of the most common forms to define such a scalar-to-color function c: color look-up tables and transfer functions. Color look-up table C, also called a colormap, is a uniform sampling of the color-mapping function c: C = {ci}i=1..N , where ci = c (N − i)fmin + ifmax N . In practice equation above is implemented as a table of N colors c1, ..., cN , which are associated with the scalar dataset values f, assumed to be in the range [fmin, fmax]. The colors ci with low indices i in the colormap represent low scalar values close to fmin, whereas colors with indices close to N in the colormap represent high scalar values close to fmax. This works well, when we can determine this range in advance of the visualization. Imagine an application where we want to visualize a time-dependent scalar field f(t) with t ∈ [tmin, tmax]. If we do not know f(t) for all values t before we start the visualization, we cannot compute the absolute scalar range [fmin, fmax]. In such situations, a better solution might be to normalize the scalar range separately for every time frame f(t). This implies drawing different color legends for every time frame as well. Besides using a sampled scalar-to-color function to a discrete look-up table, one can also define the function c analytically, if desired. 
This is done by defining three scalar functions cR : R → R, cG : R → R, and cB : R → R, whereby c = (cR, cG, cB). The functions cR, cG and cB are also called transfer functions. In practice, one uses predefined look-up tables when there is no need to change the individual colors at runtime and transfer functions when the investigation goals require dynamically changing the color-mapping functions. 5.2 Designing Effective Colormaps A color-mapping visualization is effective if, by looking at the generated colors we can easily and accurately make statements about the original scalar dataset that was color-mapped. Different types of statements and analysis goals require different types of colormaps. Such goals include the following: 1. Absolute values: Tell the absolute data values at all points in the displayed dataset. 1
  • 2. 2. Value ordering: Given two points in the displayed dataset, tell which of the corresponding two data values is greater. 3. Value difference: Given two points in the displayed dataset, tell what is the difference of data values at these points. 4. Selected values: Given a particular data value finterest (or a compact interval of data values), tell which points in the displayed data take the respective value finterest. 5. Value change: Tell the speed of change, or first derivative of the data values at given points in the displayed dataset. Designing a colormap that achieves all above goals in equal measure is a challenging task. Color legends are required for any application of color mapping where we require to map a color to a data-related quantity. We must be able to mentally invert the color-mapping function c; that is, look at a color of some point in the visual domain DV and tell its scalar value f - we must know the color mapping function. In practice this is achieved by drawing a so-called color legend. This mechanism has some conditions succeed. First, the color-mapping function c must be invertible, this requires the function to be injective, meaning that every scalar value in the range [fmin, fmax] is associated with a unique color. It is not sufficient that the colors have different numerical values. We must also be able to easily perceive them visually as being different. Second, the spatial resolution of the visualized dataset must be high enough as compared to the speed of variation of the scalar data f. Rainbow colormap is a blue-to-red colormap, where blue suggests low values, whereas red suggests high values. These are one of the widest used colormap types, however the rainbow colormap has also several important limitations: • Focus: Perceptually, warm colors attract attention more than cold colors (attention attracted more towards the high data values.) 
This can be desirable; however, these may not be the values on which we want to focus.

• Luminance: The luminances of the rainbow colormap entries vary non-monotonically. This leads to users being potentially attracted more to certain colors than to others, and/or perceiving certain color contrasts as more important than others.

• Context: Hues can have application-dependent semantics. Interpreting the rainbow colormap as heat might be a good encoding for a temperature dataset, but this association is much harder or unnatural for a medical dataset.

• Ordering: The rainbow colormap assumes that users can easily order hues from blue to green to yellow to red; however, we cannot assume that every user will order hues in this particular manner. When this assumption does not hold, goal 2 is challenged.

• Linearity: Besides the colormap invertibility requirement, visualization applications often also require a linearity constraint to be satisfied. Some users perceive colors to change "faster" per spatial unit in the higher yellow-to-red range than in the lower blue-to-cyan range. In that case, a linear signal is perceived by the user as nonlinear, with possibly undesirable consequences for goals 3 and 5.

Other colormap designs. Grayscale: here we map data values f linearly to luminance, or gray value, with fmin corresponding to black and fmax corresponding to white. The grayscale colormap has several advantages. First, it directly encodes data into luminance, and thus has no issues regarding the discrimination of different hues. Second, color ordering is natural (from dark to bright), which helps goal 2. Also, rendering grayscale images is less sensitive to color reproduction variability. However, telling the difference between two gray values, i.e., addressing goal 3, is harder than with hue-based colormaps. Two-hue: can be seen as a generalization of the grayscale colormap, where we interpolate between two colors rather than between black and white.
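Such a two-hue interpolation can be sketched as a componentwise linear blend between two endpoint colors. The sketch below is illustrative, not from the text; the endpoint colors (blue for fmin, yellow for fmax) and the clamping of out-of-range values are assumptions of the sketch:

```python
def two_hue_color(s, s_min, s_max, c0=(0.0, 0.0, 1.0), c1=(1.0, 1.0, 0.0)):
    """Map a scalar s in [s_min, s_max] to a color interpolated
    between c0 (low values) and c1 (high values)."""
    # Normalize s to [0, 1], clamping values outside the range.
    t = (s - s_min) / (s_max - s_min)
    t = max(0.0, min(1.0, t))
    # Componentwise linear interpolation between the two hues.
    return tuple((1.0 - t) * a + t * b for a, b in zip(c0, c1))
```

The midpoint of the range maps to the perceptual "middle" of the two hues, which is what gives this design its easy ordering.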
If the two colors are perceptually quite different (differing both in hue and in perceived luminance), the resulting colormap allows an easy color ordering, and also produces a perceptually more linear result than the rainbow colormap. The two-hue colormap uses both the hue and the luminance differences of its two defining colors to create a higher dynamic range. However, when used on a 3D shaded surface, the luminance perceived in the final visualization is ambiguous (is it due to the colors or due to the shading of the scene?). A simple way to correct this issue is to use a corrected isoluminant two-hue
colormap, where all entries have the same luminance, similar to the luminance-corrected rainbow colormap. The disadvantage of this design is that fewer colors can be individually perceived, leading to challenges for goals 1, 4, and 5. Heat map: The intuition behind the colors of the heated-body colormap is that they represent the color of an object heated to increasing temperatures, with black corresponding to low data values, red-orange hues to intermediate data ranges, and yellow-white hues to high data values. The heat map uses a smaller set of hues than the rainbow colormap, but adds luminance as a way to order colors in an intuitive manner. However, its strong dependence on luminance makes the heat map less suitable for color-coding data on 3D shaded surfaces. Diverging: (double-ended scale) colormaps are constructed starting from two, typically isoluminant, hues. However, we now add a third color cmid for the data value fmid = (fmin + fmax)/2 located in the middle of the considered data range, and use two piecewise-linear interpolations, between cmin and cmid and between cmid and cmax. Additionally, cmid is chosen so that it has a considerably higher luminance than cmin and cmax. Diverging colormaps are good for situations where we want to emphasize the deviation of data values from the average value fmid, and they effectively support the task of assessing value differences. Zebra colormap: In some applications, we want to emphasize the variations of the data rather than absolute data values (goal 5). This is useful when we are interested in detecting the dataset regions where data changes most quickly or stays constant. A solution is to use a colormap on the scalar dataset f containing two or more alternating colors that are perceptually very different. If we use a colormap that maps the value range to an alternating pattern of black and white colors, we obtain the zebra-like result shown in Figure 5.3(b).
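A black-and-white zebra mapping of this kind can be sketched as follows; the number of bands is a free parameter of the sketch, not something prescribed by the text:

```python
def zebra_color(s, s_min, s_max, bands=10):
    """Map a scalar s to alternating black/white bands over [s_min, s_max]."""
    # Normalize s to [0, 1], clamping out-of-range values.
    t = (s - s_min) / (s_max - s_min)
    t = max(0.0, min(1.0, t))
    # Even bands map to black, odd bands to white.
    band = int(t * bands) % 2
    return (0.0, 0.0, 0.0) if band == 0 else (1.0, 1.0, 1.0)
```

Where the data varies quickly, neighboring pixels fall into different bands, producing the thin, dense stripes described next.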
Thin, dense stripes indicate regions of high variation speed of the function, whereas the flat areas in the center and at the periphery of the image indicate regions of slower variation. We can now also see better which regions of our domain have similar rates of variation. Interpolation issues. The results of texture-based color mapping are better, since the texture-based method interpolates the scalar value s, and then applies the color mapping c separately at every rendered pixel:

c(p) = c(Σi si φi(p)).

In contrast, vertex-based color mapping applies the color mapping at the vertices only, and then interpolates the mapped colors themselves:

c(p) = Σi c(si) φi(p).

A better solution is to store for each vertex pi the value (si − smin)/(smax − smin), i.e., our scalar values normalized to the range [0, 1], as a texture coordinate. Next, we render our surface cells textured with a 1D texture containing the colormap. 1D textures are described by a single texture coordinate and are stored as a 1D image, i.e., a vector of colored pixels. Texture-based color mapping produces reasonable results even for a sparsely sampled dataset. However, this technique requires rendering textured polygons, an option that may not be available on low-end graphics hardware, or in situations when we want to use the available texture for other purposes. Color banding. Choosing a small number of colors N inevitably leads to the color banding effect. This is usually not desirable, as it creates discrete visual artifacts in the rendered image that do not reflect the input data. Color banding can be avoided by increasing the number of colors in the colormap. The more quickly the data varies spatially, the smoother the colormap has to be to avoid color banding. Additional issues.
Selecting an optimal colormap is further influenced by a number of additional considerations, as follows:

• Geometry: Not all colors are equally strongly perceived when displayed on surfaces that have the same area. Perceiving a color accurately is strongly influenced by the colors displayed in neighboring areas. Densely packed colors are not perceived separately below a certain spatial scale, but are blended together by the human eye. This becomes an issue when the areas in question are very small, such as when color-mapping a point cloud.
• User group: 6 to 10 percent of all men are not able to correctly separate red from green (or a larger variety of hues). This and other forms of colorblindness should be taken into account when designing colormaps for critical applications intended to be used by a large public.

• Medium: It is hard to design rich colormaps, containing more than a dozen colors, that look the same on all devices. A common mistake in practice is to use hue-based colormaps, such as the rainbow colormap, and display the resulting visualization on luminance-based devices, such as black-and-white printed material.

• Contrast: Some techniques work on the colors of an existing image, modifying the gray values or colors in order to emphasize details of interest. Such techniques are discussed in the context of image processing.

Designing an effective colormap involves knowledge of the application domain conventions, the typical data distribution, the visualization goals, general perception theory, the intended output devices, and user preferences. There is no universally effective colormap.

5.3 Contouring

Color banding is related to a fundamental and widely used visualization technique called contouring. Points located on such a color border, drawn in black in Figure 5.7(a), form a contour line, or isoline. Formally, a contour line C is defined as all points p in a dataset D that have the same scalar value, or isovalue, s(p) = x:

C(x) = {p ∈ D | s(p) = x}.

We can combine the advantages of contours and color mapping by simply drawing contours for all values of interest on top of a color-mapped image that uses a rich colormap. Besides indicating points where the data has specific values, contours can also tell us something about the data variation itself. Areas where contours are closer to each other indicate higher variations of the scalar data.
The scalar distance between consecutive contours (which is constant) divided by the spatial distance between the same contours (which is not constant) is exactly the derivative of the scalar signal. Contour properties. First, isolines never stop inside the dataset itself: they either close upon themselves or stop when reaching the dataset border. Second, an isoline never intersects (crosses) itself, nor does it intersect an isoline for another scalar value. Contours are perpendicular to the gradient of the contoured function. The gradient is the direction of the function's maximal variation, whereas contours are points of equal function value, so the tangent to a contour is the direction of the function's minimal (zero) variation. Computing contours. Since a dataset D is defined as a set of cells carrying node or cell scalar data, plus additional basis functions, the isoline of such a dataset, being the intersection of the function graph with a horizontal plane, is piecewise linear and has topological dimension 1, i.e., it is a polyline. For every edge e = (pi, pj) of the cell c in the dataset, we test whether the isoline value v is between the scalar vertex attributes vi and vj corresponding to the edge endpoints pi and pj. If the test succeeds, the isoline intersects e at the point

q = (pi(vj − v) + pj(v − vi)) / (vj − vi).

We repeat this procedure for all edges of our current cell and finally obtain a set of intersection points S = {qi} of the isoline with the cell. Next, we must connect these points to obtain the actual isoline. Since we know that the isoline is piecewise linear over a cell, we can use line segments to connect these points. If S contains more than 2 points, as illustrated in Figure 5.11 for the quad cell which has 4 intersection points, there are two possibilities for connecting the four intersection points: creating two separate loops, or creating a single contour loop.
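The per-edge intersection test and the interpolated point q given above can be sketched like this; the function name and the tuple-based point representation are choices of this sketch, not of the text:

```python
def edge_isoline_intersection(p_i, p_j, v_i, v_j, v):
    """Return the point where the isoline of value v crosses the edge
    (p_i, p_j) carrying scalar values v_i and v_j, or None if it does not."""
    # The isoline crosses the edge only if v lies between v_i and v_j
    # (a degenerate edge with v_i == v_j is skipped).
    if not (min(v_i, v_j) <= v <= max(v_i, v_j)) or v_i == v_j:
        return None
    # Linear interpolation: q = (p_i (v_j - v) + p_j (v - v_i)) / (v_j - v_i).
    t_i = (v_j - v) / (v_j - v_i)
    t_j = (v - v_i) / (v_j - v_i)
    return tuple(t_i * a + t_j * b for a, b in zip(p_i, p_j))
```

Running this test over all edges of a cell yields the point set S described above.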
When we use a triangular mesh, such ambiguities do not exist; the ambiguity is now shifted to the quad-splitting process: splitting diagonally from the lower-left to the upper-right vertex, or from the upper-left to the lower-right vertex. Contouring needs at least a piecewise linear, C0 dataset as input. This means that we cannot directly contour image data, for example, which is actually a uniform grid with piecewise constant interpolation, and thus represents a discontinuous function. Resampling image datasets to piecewise linear datasets
can be done; however, this has the hidden effect of changing the continuity assumptions on the dataset from piecewise constant to piecewise linear. While this should not pose problems in many cases, there are situations when such changes can lead to highly incorrect visualizations. The most popular methods that reduce the number of operations done per cell are the marching squares method, which works on 2D datasets with quad cells, and its marching cubes variant, which works on 3D datasets with hexahedral cells.

5.3.1 Marching Squares

The method begins by determining the topological state of the current cell with respect to the isovalue. A quad cell thus has 2^4 = 16 different topological states. The state of a quad cell can be represented by a 4-bit integer index, where each bit stores the inside/outside state of a vertex. This integer can be used to index a case table (see Figure 5.12). The marching squares algorithm constructs independent line segments for each cell, which are stored in an unstructured dataset type, given that isolines have no regular structure with respect to the grid they are computed on. A useful postprocessing step is to merge the coincident endpoints of line segments originating from neighboring grid cells that share an edge. Besides decreasing the isoline dataset size, this also creates a dataset on which operations such as computing vertex data from cell data via averaging are possible.

5.3.2 Marching Cubes

The algorithm accepts 3D instead of 2D scalar datasets and generates 2D isosurfaces instead of 1D isolines. A hex cell has 2^8 = 256 different topological cases; in practice, this number is reduced to only 15 by using symmetry considerations. Marching cubes generates a set of polygons for each contoured cell, which may include triangles, quads, pentagons, and hexagons. The resulting 3D isosurface is saved as an unstructured dataset. An additional step needed for marching cubes is the computation of isosurface normals.
These are needed for smooth shading. Normals can be computed by normal averaging. Alternatively, since isosurfaces are orthogonal to the scalar data gradient at each point, normals can be computed directly as the normalized gradients of the scalar data. As a general rule, most isosurface details that are under or around the size of the resolution of the isosurfaced dataset can be either actual data or artifacts, and should be interpreted with great care. We can also say that the slicing and contouring operations are commutative. Marching algorithm variations. There exist algorithms similar to marching cubes for all cell types, such as lines, triangles, and tetrahedra. These algorithms can treat all grid types, as long as all encountered cell types are supported. All variants produce unstructured grids. Isosurfaces can also be generated and rendered using point-based techniques. Dividing cubes algorithm. A classical algorithm for generating and rendering isosurfaces using point clouds is dividing cubes. It works for 3D uniform and rectilinear grids (box-shaped cells). Dividing cubes approximates the isosurface using constant basis functions, whereas marching cubes uses linear basis functions. For dividing cubes, the ideal minimal cell size is that of a screen pixel, in which case every point primitive is of pixel size and there are no visual gaps or other rendering artifacts in the rendered isosurface.

5.4 Height Plots

Height plots, also called elevation or carpet plots, can be described by the following mapping operation. Given a two-dimensional surface Ds ⊂ D, part of a scalar dataset D,

m : Ds → D, m(x) = x + s(x)n(x), ∀x ∈ Ds,

where s(x) is the scalar value of D at the point x and n(x) is the normal to the surface Ds at x. The height-plot mapping operation "warps" a given surface Ds included in the dataset along the surface normal, with a factor proportional to the scalar value. The most common variant of height plots warps a planar surface Ds.
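For a planar base surface, the normal n(x) is constant, e.g., (0, 0, 1), so the warping m(x) = x + s(x)n(x) reduces to lifting each sample point by its (scaled) scalar value. A minimal sketch, with the scale factor as an assumed parameter:

```python
def height_plot(points_2d, scalars, scale=1.0):
    """Warp planar sample points (x, y) into 3D points (x, y, scale * s),
    i.e. m(x) = x + s(x) * n with the constant normal n = (0, 0, 1)."""
    return [(x, y, scale * s) for (x, y), s in zip(points_2d, scalars)]
```

For a curved base surface, each point would instead be displaced along its own normal n(x).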
However, we can produce height plots starting from different basis surfaces Ds, such as the torus shown in Figure 5.18. In this image, both the height and the color encode the scalar value: the two visual cues strengthen each other to convey the scalar value
information. If desired, one can encode two different scalar values with a height plot, one into the plot's height and the other into the plot's color.

5.4.1 Enridged Plots

The height plot, color mapping, and contouring techniques each have their own strengths and limitations:

• Height plots are easy to learn, intuitive, generate continuous images, and show the local gradient of the data. However, quantitative information can be hard to extract from such plots; it is not easy to tell which peak is the highest. Also, 3D occlusion effects can occur.

• Color mapping shares the advantages of height plots and does not suffer from 3D occlusion problems. However, making quantitative judgments based on color data can be hard, and requires carefully designed colormaps.

• Contour plots are effective in communicating precise quantitative values. However, such plots are less intuitive to use. Also, they do not create a dense, continuous image.

Combining height plots, contour plots, and color mapping alleviates some, but not all, of the problems of each plot type. An alternative solution is proposed by enridged contour maps. The idea is to use the nonlinear mapping

z(x, y) = s f(x, y) + s h g((f(x, y) mod h) / h),

instead of the linear mapping z(x, y) = s f(x, y), where s > 0 is the plot's scaling factor and g(u) = a u(1 − u) is a parabolic function. The effect is to add parabolic bumps of height a to consecutive intervals of size h in the range of f. Enridged plots combine the appearance of contour plots and height plots. The nested, cushion-like shapes that emerge in this type of plot convey a sensation of height which is much stronger than in classical height plots. Enridged plots can also be extended to use a hierarchy of contour levels, by using two instances of parameter pairs (h1, a1) (larger intervals) and (h2, a2) (denser contours) such that a2 < a1 and h1 = k h2. Enridged plots can be further tuned to emphasize the nesting relationship between different regions.
For this, we simply replace the symmetric parabolic profile g(u) = u(1 − u) by an asymmetric one, g(u) = u(1 − u)^n, which increases sharply close to u = 0 and is relatively flat close to u = 1. The image now resembles the structure of a Venn-Euler diagram. These structures can be useful for visualizing data hierarchies in different contexts, beyond scalar fields.

5.5 Conclusion

We presented a number of fundamental methods for visualizing scalar data: color mapping, contouring, slicing, and height plots. Color mapping assigns a color as a function of the scalar value at each point of a given domain. Contouring displays all points within a given two- or three-dimensional domain that have a given scalar value. Height plots deform the scalar dataset domain in a given direction as a function of the scalar data. The main advantages of these techniques are that they produce intuitive results, easily understood by users, and that they are simple to implement. However, such techniques also have a number of restrictions.

What have you learned in this chapter?

This chapter presents the most popular scalar visualization techniques: color mapping, contouring, and height plots. It discusses the design of effective colormaps, i.e., the important design decisions pertaining to the construction of colormaps, and contouring in two and three dimensions. The marching squares and marching cubes algorithms are well explained; the strengths and limitations of the standard techniques are outlined.

What surprised you the most?

Specific colormaps for categorical datasets have been designed. An excellent online resource that allows choosing a categorical colormap based on data properties and task types is ColorBrewer. It is also surprising to learn how we can draw more than a single isosurface of the same dataset in one visualization. The process
of rendering several nested semitransparent isosurfaces that correspond to a discrete sequence of isovalues can be generalized to the continuous case, as we shall see with the volume rendering technique in Chapter 10.

What applications not mentioned in the book could you imagine for the techniques explained in this chapter?

Perhaps we can visualize probabilities as height plots, or as scalar fields with isolines and colors mapped to the scalar value. Besides a visual cue, those images (datasets) can also suggest shortest paths between, e.g., arbitrary points A and B situated on the surface of the same plot.

1. EXERCISE 1

Consider the simple scalar visualization of a 2D slice from a human head scan shown in the figure below. Here, six different colormaps are used. Explain, for each of the sub-figures, what are the advantages (if any) and disadvantages (if any) of the respective colormap.

Figure 1: Scalar visualization of a 2D slice dataset using six different colormaps (see Chapter 5).

Hints: Consider what types of structures you can easily see in one visualization as compared to another visualization.

Table 1: Advantages and disadvantages of different colormaps.

rainbow (a)
Advantages: We can relatively easily distinguish the harder tissues (mapped to yellow, orange, and red) and the air (mapped to dark blue).
Disadvantages: Focus: warm colors attract more attention than cold colors; depending on the application, these may not be the values where we want to focus. Linearity: colors change faster per spatial unit in the higher yellow-to-red range than in the lower blue-to-cyan range, so a linear signal would be perceived as nonlinear.
Continuation of Table 1.

grayscale (b)
Advantages: Grayscale directly encodes data into luminance, and thus has no issues regarding the discrimination of different hues. Color ordering is natural (from dark to bright), which helps goal 2. Rendering grayscale images is less sensitive to color reproduction variability issues across a range of devices.
Disadvantages: Telling the difference between two gray values, i.e., addressing goal 3, is harder than with hue-based colormaps.

two-color (c)
Advantages: If the two colors are perceptually quite different, in hue but also in luminance, the resulting colormap allows an easy color ordering, and produces a more linear result than the rainbow colormap. If neither of the two colors is too bright, it is suitable for non-compact data displayed on a white background.
Disadvantages: It arguably offers less dynamic range. When this colormap is used on a 3D shaded surface, the luminance perceived in the final visualization is ambiguous.

heat map (d)
Advantages: Compared to the rainbow colormap, it uses a smaller set of hues, but adds luminance as a way to order colors in an intuitive manner. It allows one to discriminate between more data values than the two-hue colormap, since it uses more hues.
Disadvantages: Its strong dependence on luminance makes it less suitable for color-coding data on 3D shaded surfaces, just like the two-hue non-isoluminant colormaps.

diverging (e)
Advantages: Can be interpreted as a temperature colormap, where the average value (white) is considered neutral, and we want to emphasize points which are colder, respectively warmer, than this value.

diverging (f)
Advantages: Good for situations where we want to emphasize the deviation of data values from the average value fmid; it also effectively supports the task of assessing value differences.
Since the maximum luminance of this colormap is lower than that of pure white, we can use this colormap also on non-compact data domains. More hues than in two-hue colormaps allow one to discriminate between more data values.

End of Table 1.

2. EXERCISE 2

Consider an application where we use color mapping to visualize a scalar field defined on a simple 2D uniform grid consisting of quad cells. The scalar field values, recorded at the cell vertices, are in the range [smin, smax]. Next to the color-mapped grid, we show, as usual, a color legend, where the two extreme colors (at the endpoints of the color legend) correspond to the values smin and smax, respectively. Given this set-up, can we say for sure that any color shown in the color legend will appear in at least one point (pixel) of the color-mapped grid? If so, argue why. If not, detail at least one situation when this will not occur.

Yes, this may happen in the case of isoluminant colormaps, or in the case of texture-based color mapping, which produces pixel-accurate renderings, i.e., reconstructions of our given sampled signal. It also depends on the color interpolation done by the graphics hardware: the most widespread, vertex-based
color mappings produce suboptimal results for dataset regions where the data varies too quickly. No, in case the resolution of the output image is low, which affects our visualization in such a way that nonlinear hues in the color legend start being misleading. Images that exhibit color banding can have fewer hues than the color legend, which is displayed as a continuous strip.

3. EXERCISE 3

A simple, though not perfect, way to create 2D contours or isolines is to use a so-called delta colormap. Given a scalar contour value (or isovalue) a, and a dataset with scalars in the range [m, M], a delta colormap maps all scalar values in the ranges [m, a − d] and [a + d, M] to one color, and the values in the range [a − d, a + d] to another color. Here, d is typically very small compared to the scalar range M − m. Consider now the application of this delta colormap to a scalar field stored on a simple uniform 2D grid having quad cells, as compared to drawing the equivalent isoline on the same grid. Detail at least two drawbacks of the delta colormap as compared to the isoline visualization. Next, explain which parameters we could fine-tune (e.g., the value of d, the sampling rate of the grid, the color-interpolation techniques used on the grid cells, or other parameters) in order to improve the quality of the delta-colormap-based visualization.

Drawbacks of applying a delta colormap to a scalar field on a uniform 2D grid with quad cells:

• A delta colormap does not draw the isoline itself; it highlights a sub-cloud of points that are close in terms of their scalar values. This does not tell us much about topology.

• Some bright hues can appear invisible on white backgrounds, causing a loss of the visual information depicted. It is better to use isoluminant colors for the delta colormap.

It is also good to adjust the parameter d to the sampling resolution of the grid. Contouring needs at least piecewise linear datasets as input.
We cannot directly contour the image data when the scalar field represents a piecewise constant interpolation of a discontinuous function. Resampling to a piecewise-linear dataset, and thereby changing the continuity assumptions on the dataset from piecewise constant to piecewise linear, can lead to incorrect visualizations.

4. EXERCISE 4

Contouring can be implemented by various algorithms. In this context, the marching cubes algorithm (and its variations, such as marching squares or marching tetrahedra) proposes several optimizations. Compared to a 'naive' implementation of contouring that does not use the marching technique, what are the main computational advantages of the marching technique?

In a naive implementation, for each cell we must test whether the isovalue intersects every cell edge and, if so, compute the exact intersection location. In contrast, the marching algorithms:

• reduce the number of operations done per cell;

• compute edge-isovalue intersections only on those edges that are known to be intersected for a specific topological state;

• besides decreasing the isoline dataset size, the unstructured dataset they produce supports operations such as computing vertex data from cell data via averaging.

5. EXERCISE 5

Consider a 2D constant scalar field, f(x, y) = k, for all values of (x, y). What will be the result of applying the marching squares technique on this field for an isovalue equal to k? Does the result depend on the cell type or dimension? For instance, do we obtain a different result if we use marching triangles on a 2D unstructured triangle grid rather than marching squares on a 2D quad grid?

Hints: Consider the vertex coding performed by the marching algorithms.

Marching squares will produce a contour line which crosses every edge of every cell in our dataset. Marching triangles has no ambiguous cases, but it will produce the same result as marching squares. Isolines have no structure with respect to the grid they are computed on.

6. EXERCISE 6
An isoline, as computed by marching squares, is represented by a 2D polyline, or set of line segments. Can two such line segments from the same isoline intersect (we do not consider segments that share endpoints to intersect)? Can two such line segments from different isolines of the same scalar field intersect? If your answer is yes, sketch the situation by showing the quad grid, vertex values, isoline segments, and their intersection. If your answer is no, argue it based on the implementation details of marching squares.

No. An isoline never intersects (crosses) itself, nor does it intersect an isoline for another scalar value.

7. EXERCISE 7

Consider a two-variable scalar function z = f(x, y) which is differentiable everywhere. Figure 5.9 (also displayed below) shows that the gradient of the function is a vector which is locally orthogonal to the contours (or isolines) of the function. Give a mathematical proof of this.

Hints: Consider the tangent vector to the contour at a point (x, y), and express the variation (increase or decrease) of the function's value in a small neighborhood around the respective point.

Contours are points of equal function value, so the tangent t to a contour is the direction of the function's minimal (zero) variation: along t we have Δf = ∇f · t = 0. Since the gradient ∇f is the direction of the function's maximal variation and its dot product with t vanishes, ∇f is orthogonal to the tangent, and hence to the contour.

8. EXERCISE 8

Contouring and slicing are commutative operations, in the sense that slicing a 3D isosurface with a plane yields the same result as contouring the 2D data slice obtained by slicing the input 3D scalar volume with the same plane. Consider now the set of 2D contours obtained from a 3D scalar volume by applying the 2D marching squares algorithm on a set of closely and uniformly spaced 2D data slices extracted from that volume. Describe (in as much detail as possible) how you would use this set of 2D contours to reconstruct the corresponding 3D isosurface.
If the set is dense, i.e., the slice planes are close to each other, and the dataset does not exhibit too-sharp variations, we can connect corresponding points on the isolines of consecutive slices and thus construct the 3D isosurface.

9. EXERCISE 9

Consider an application in which the user extracts an isosurface and would like to visualize how thick the resulting shape is, using e.g. color mapping. That is, each vertex of the isosurface's unstructured mesh should be assigned a (positive) scalar value indicating how thick the shape is at that location. For this problem:

• Propose a definition of shape thickness that would match our intuitive understanding (you can support your explanation by drawings, if needed).

• Propose an algorithm to compute the above definition, given an isosurface stored as an unstructured mesh.

• Discuss the computational complexity of your proposed algorithm.

• Discuss the possible challenges of your proposal (e.g., configurations or shapes for which your definition would not deliver the expected result, and/or the algorithm would not work correctly with respect to your definition).

Hints:

• Consider first the related problem of computing the thickness of a 2D contour, before moving to the 3D case.

• Consider first that the contour is closed, before treating the case when it is open because it reaches the boundary of the dataset.

The thickness of a contour generally differs depending on the direction of measurement. I propose to define the thickness in the direction of the inverted normal. At each sample point pi we compute the distance d between the sample point itself and the point e where the inverted normal n exits the closed contour.
The length of the vector between pi and e can be derived from pi − d·n = e. Once we have detected the point e, we can compute d = ||pi − e|| and store the value at the sample point pi. Vertex normals can be computed as the area-weighted average of the polygon normals of all the cells that use a given vertex, using the formula

nj = (Σi A(ci) n(ci)) / (Σi A(ci)),

where A(ci) is the area of cell ci and n(ci) is that cell's normal. Implementing an unstructured grid implies storing the coordinates of the collection of all sample points {pj} and then storing all vertex indices for all cells {ci}. These indices are usually stored as integer offsets into the grid sample point list {pj}. In order to calculate the point e we use the following steps:

• For all cells ci in which pj is a vertex, we compute the cell normals and then the vertex normal nj by averaging.

• For each vertex and its normal, we cast a ray in the direction of the inverse normal, starting a loop with a small value d ≥ 0 and increasing d at each iteration, until the point e = pj − d·nj belongs to at least one of the cells of our unstructured contour mesh.

• Once we have obtained such a scalar d, we store the point e at the vertex pj. We can then easily compute and use d = ||pj − e|| as a measure of the contour's thickness in the direction of the normal.

10. EXERCISE 10

One of the challenges of the practical use of isosurfaces in 3D is selecting an 'interesting' scalar value, for which one obtains an 'interesting' isosurface. We cannot solve this challenge in general, since different applications may have different definitions of what an interesting isosurface is. However, consider an application where you want to automatically select a small number of isovalues (say, 5) and display these to give a quick overview of the data changes. How would you automatically select these isovalues?
Hints: Define, first, what you consider to be the interesting structures in the data (e.g., areas of rapid scalar variation, or areas where an isosurface changes topology).

I would pick the 5 isovalues automatically by subdividing the scalar range [min, max] as follows: i1 = min + (1/3)(max − min). Then I would pick the second isovalue inside the larger interval [i1, max] as i2 = i1 + (1/3)(max − i1), and the third as i3 = i2 + (1/3)(max − i2). The fourth and fifth I would pick as i4 = max − (2/3)(max − i3) and i5 = min + (2/3)(i1 − min), accordingly.
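The repeated one-third step used above for i1 through i4 can be sketched as follows; this simplified variant generates all requested values by the same stepping rule and omits the special case for i5, which the answer places in the lower part of the range:

```python
def pick_isovalues(f_min, f_max, n=5):
    """Pick n isovalues by repeatedly stepping one third of the way
    into the remaining upper part of the scalar range."""
    values, lo = [], f_min
    for _ in range(n):
        lo = lo + (f_max - lo) / 3.0  # move 1/3 into the remaining interval
        values.append(lo)
    return values
```

The resulting values cluster towards the top of the range; a data-driven alternative would be to pick isovalues at histogram peaks or where the isosurface changes topology, as the hint suggests.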