This document provides an overview of data mining techniques discussed in Chapter 3, including parametric and nonparametric models, statistical perspectives on point estimation and error measurement, Bayes' theorem, decision trees, neural networks, genetic algorithms, and similarity measures. Nonparametric techniques like neural networks, decision trees, and genetic algorithms are particularly suitable for data mining applications involving large, dynamically changing datasets.
3.1 Introduction
• Parametric models describe the relationship between input and output
through algebraic equations in which some parameters are left unspecified.
These unspecified parameters are determined by providing input examples.
• Nonparametric techniques are more appropriate for data mining
applications. A nonparametric model is one that is data-driven. Recent
techniques are able to learn dynamically as data are added to the input.
This dynamic learning process allows the model to be updated
continuously. The more data, the better the model.
• Nonparametric techniques are particularly suitable for database
applications with large amounts of dynamically changing data.
Nonparametric techniques include neural networks, decision trees, and
genetic algorithms.
3.2 Statistical Perspective: Point Estimation
• The bias of an estimator is the difference between the expected value of the estimator and the actual value. Let E[Θ̂] denote the expected value of the estimator Θ̂ of a parameter Θ:
Bias = E[Θ̂] − Θ
• One measure of the effectiveness of an estimate is the mean squared error (MSE), which is the expected value of the squared difference between the estimate and the actual value:
MSE = E[(Θ̂ − Θ)²]
• The root mean square error (RMSE) is found by taking the square root of the MSE.
• The root mean square (RMS) may also be used to estimate error or as another statistic to describe a distribution. Unlike the mean, it indicates the magnitude of the values:
RMS = √( Σ_{j=1}^{n} x_j² / n )
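As a small illustration (the sample values here are made up), these error measures can be computed directly:

```python
import math

def point_estimate_errors(estimates, actual):
    """Compute bias, MSE, and RMSE of estimates against a known actual value,
    plus the RMS of the estimate values themselves."""
    n = len(estimates)
    bias = sum(estimates) / n - actual                   # Bias = E[est] - actual
    mse = sum((e - actual) ** 2 for e in estimates) / n  # mean squared error
    rmse = math.sqrt(mse)                                # root mean square error
    rms = math.sqrt(sum(e ** 2 for e in estimates) / n)  # RMS of the raw values
    return bias, mse, rmse, rms

bias, mse, rmse, rms = point_estimate_errors([9.0, 11.0], 10.0)
print(bias, mse, rmse, rms)
```

Note that the RMS, unlike the bias, is nonzero here even though the estimates average out to the actual value.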
• A popular estimation technique is the jackknife estimate. With this approach, an estimate of a parameter Θ is obtained by omitting one value from the set of observed values. Given the set of jackknife estimates Θ̂(i), we can obtain an overall estimate:
Θ̂(·) = Σ_{i=1}^{n} Θ̂(i) / n
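A minimal sketch of the jackknife, assuming the estimator is supplied as a function; with the mean as the estimator, the overall jackknife estimate equals the sample mean:

```python
def jackknife_estimate(values, estimator):
    """Leave-one-out (jackknife) estimates: apply the estimator to the sample
    with each value omitted in turn, then average the results."""
    n = len(values)
    theta_i = [estimator(values[:i] + values[i + 1:]) for i in range(n)]
    return sum(theta_i) / n  # overall jackknife estimate

mean = lambda xs: sum(xs) / len(xs)
print(jackknife_estimate([1.0, 2.0, 3.0, 4.0], mean))
```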
• We may also determine a range of values within which the true parameter value should fall. This range is called a confidence interval.
3.2.2 Estimation and Summarization Models
• The maximum likelihood estimate (MLE) technique is used for point estimation. The approach obtains parameter estimates that maximize the probability that the sample data X = {x_1, …, x_n} occur for the specific model f(x_i | Θ). The likelihood function is thus defined as
L(Θ | x_1, …, x_n) = Π_{i=1}^{n} f(x_i | Θ).
The value of Θ that maximizes L is the estimate chosen. It can be found by taking the derivative with respect to Θ and setting it to zero.
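A small sketch for a Bernoulli sample (the data are made up): setting the derivative of the log-likelihood to zero gives the sample proportion as the analytic MLE, and its likelihood is at least that of any other candidate:

```python
import math

def log_likelihood(p, data):
    """ln L(p | x_1..x_n) = sum of ln f(x_i | p) for Bernoulli trials."""
    return sum(math.log(p if x == 1 else 1.0 - p) for x in data)

data = [1, 0, 1, 1, 0, 1]        # made-up Bernoulli sample
p_hat = sum(data) / len(data)    # analytic MLE: the sample proportion
# the MLE should score at least as high as any other candidate value
candidates = [0.1 * i for i in range(1, 10)]
best = max(candidates + [p_hat], key=lambda p: log_likelihood(p, data))
print(p_hat, best)
```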
• The expectation-maximization (EM) algorithm can solve the estimation problem with incomplete data. The EM algorithm finds an MLE for a parameter (such as a mean) using a two-step process: expectation and maximization. These steps are applied iteratively until successive parameter estimates converge. Such iterative estimates must satisfy
∂ ln L(Θ | X) / ∂θ_i = 0
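A classic small instance of EM with incomplete data (the values are made up): estimate a mean when some values are missing by filling the missing values with the current estimate (E-step) and recomputing the mean (M-step), repeating until convergence:

```python
def em_mean(observed, n_total, mean0, iters=20):
    """EM for the mean of a sample in which some values are missing.
    E-step: fill each missing value with the current mean estimate.
    M-step: recompute the mean from observed + filled values."""
    n_missing = n_total - len(observed)
    mean = mean0
    for _ in range(iters):
        filled_sum = sum(observed) + n_missing * mean  # E-step
        mean = filled_sum / n_total                    # M-step
    return mean

# made-up example: 4 of 6 values observed, initial guess 3.0
print(em_mean([1.0, 5.0, 10.0, 4.0], 6, mean0=3.0))
```

The iteration converges to the fixed point m = (20 + 2m)/6, i.e. m = 5.0, regardless of the starting guess.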
• Models based on summarization provide an abstraction and summarization of the data as a whole. Well-known statistical concepts such as the mean, variance, standard deviation, median, and mode are simple models of the underlying population. Fitting the population to a specific frequency distribution provides an even better model of the data.
• Visualization techniques help to display the structure of the data graphically (histograms, box plots, scatter diagrams).
3.2.3 Bayes Theorem
• Bayes rule is a technique to estimate the likelihood of a property given a set of data as evidence or input.
Suppose that either hypothesis h_1 or hypothesis h_2 must occur and x_i is an observable event; Bayes rule states
P(h_1 | x_i) = P(x_i | h_1) P(h_1) / ( P(x_i | h_1) P(h_1) + P(x_i | h_2) P(h_2) )
• P(h_1 | x_i) is called the posterior probability, while P(h_1) is the prior probability associated with hypothesis h_1. P(x_i) is the probability of the occurrence of data value x_i, and P(x_i | h_1) is the conditional probability that, given a hypothesis, the tuple satisfies it. Bayes rule allows us to assign probabilities P(h_j | x_i) to hypotheses given a data value:
P(h_1 | x_i) = P(x_i | h_1) P(h_1) / P(x_i)
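With made-up priors and conditional probabilities, the two-hypothesis form of Bayes rule can be evaluated directly:

```python
def posterior(p_x_given_h1, p_h1, p_x_given_h2, p_h2):
    """Bayes rule for two exhaustive hypotheses h1 and h2:
    P(h1|x) = P(x|h1)P(h1) / (P(x|h1)P(h1) + P(x|h2)P(h2))."""
    num = p_x_given_h1 * p_h1
    return num / (num + p_x_given_h2 * p_h2)

# made-up numbers: priors P(h1)=0.3, P(h2)=0.7; likelihoods 0.8 and 0.2
p = posterior(0.8, 0.3, 0.2, 0.7)
print(p)
```

The denominator plays the role of P(x_i): the total probability of the observed event over both hypotheses.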
• Hypothesis testing helps to determine if a set of observed variable values is statistically significant (differs from the expected case). This approach explains the observed data by testing a hypothesis against it. A hypothesis is first made; then the observed values are compared, based on this hypothesis, to those of the expected case. Assuming that O represents the observed data and E the expected values based on the hypothesis, the chi-squared statistic χ² is defined as
χ² = Σ (O − E)² / E
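A small example (made-up die-roll counts tested against a uniform expectation of 10 per face):

```python
def chi_squared(observed, expected):
    """Chi-squared statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_squared([8, 12, 9, 11, 10, 10], [10] * 6)
print(stat)
```

A small statistic, as here, means the observed counts are close to the expected case; a large one suggests the hypothesis should be rejected.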
3.2.5 Correlations and Regression
• Linear regression assumes that a linear relationship exists between the input and the output data.
The common formula for a linear relationship is:
y = c_0 + c_1 x_1 + ⋯ + c_n x_n
• There are n input variables, called predictors or regressors; one output variable being predicted (called a response); and n + 1 constants, chosen to fit the model to the input sample. This is called multiple linear regression because there is more than one predictor.
• Both bivariate regression and correlation can be used to evaluate the strength of a relationship
between two variables.
• One standard formula to measure linear correlation is the correlation coefficient r ∈ [−1, 1]. A negative correlation indicates that one variable increases while the other decreases:
r = Σ (x_i − X̄)(y_i − Ȳ) / √( Σ (x_i − X̄)² · Σ (y_i − Ȳ)² )
• When two data variables have a strong correlation, they are similar. Thus, the correlation
coefficient can be used to define similarity for clustering or classification.
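A direct sketch of the correlation coefficient formula (the data points are made up):

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient r in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

print(correlation([1, 2, 3], [2, 4, 6]))   # perfectly linear, increasing
print(correlation([1, 2, 3], [6, 4, 2]))   # perfectly linear, decreasing
```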
3.3 Similarity Measures
Tuples that answer the query should be more like each other than those that do not answer it. Each IR query provides the class definition in the form of the IR query itself, so the classification problem becomes one of determining the similarity between each tuple and the query: an O(n) rather than an O(n²) problem. Common similarity measures used:
• Dice: sim(t_i, t_j) = 2 Σ_{h=1}^{k} t_ih t_jh / ( Σ_{h=1}^{k} t_ih² + Σ_{h=1}^{k} t_jh² ); relates the overlap to the average size of the two sets together.
• Jaccard: sim(t_i, t_j) = Σ_{h=1}^{k} t_ih t_jh / ( Σ_{h=1}^{k} t_ih² + Σ_{h=1}^{k} t_jh² − Σ_{h=1}^{k} t_ih t_jh ); measures the overlap of two sets relative to the whole set formed by their union.
• Cosine: sim(t_i, t_j) = Σ_{h=1}^{k} t_ih t_jh / √( Σ_{h=1}^{k} t_ih² · Σ_{h=1}^{k} t_jh² ); relates the overlap to the geometric average of the two sets.
• Overlap: sim(t_i, t_j) = Σ_{h=1}^{k} t_ih t_jh / min( Σ_{h=1}^{k} t_ih², Σ_{h=1}^{k} t_jh² ); determines the degree to which the two sets overlap.
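The four measures can be sketched directly from the formulas; the binary tuples below are made up:

```python
import math

def dot(a, b):
    """Sum of products over the k dimensions."""
    return sum(x * y for x, y in zip(a, b))

def dice(a, b):    return 2 * dot(a, b) / (dot(a, a) + dot(b, b))
def jaccard(a, b): return dot(a, b) / (dot(a, a) + dot(b, b) - dot(a, b))
def cosine(a, b):  return dot(a, b) / math.sqrt(dot(a, a) * dot(b, b))
def overlap(a, b): return dot(a, b) / min(dot(a, a), dot(b, b))

t1, t2 = [1, 0, 1, 1], [1, 1, 0, 1]  # made-up binary tuples
print(dice(t1, t2), jaccard(t1, t2), cosine(t1, t2), overlap(t1, t2))
```

For binary vectors the dot products reduce to set sizes and intersection sizes, which is why these formulas are read as set-overlap measures.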
Distance or dissimilarity measures are often used instead of similarity measures. These measure how unlike items are.
• Euclidean: dis(t_i, t_j) = √( Σ_{h=1}^{k} (t_ih − t_jh)² )
• Manhattan: dis(t_i, t_j) = Σ_{h=1}^{k} |t_ih − t_jh|
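Both distance measures, sketched directly (the points are made up):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two k-dimensional tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute coordinate differences (city-block distance)."""
    return sum(abs(x - y) for x, y in zip(a, b))

print(euclidean([0, 0], [3, 4]))
print(manhattan([0, 0], [3, 4]))
```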
Since most similarity measures assume numeric (and often discrete) values, they may be difficult to use for general data types. A
mapping from the attribute domain to a subset of integers may be used and some approach to determining the difference is needed.
3.4 Decision Trees
A decision tree (DT) is a predictive modeling technique used in classification,
clustering, and prediction. A computational DT model consists of three parts:
• A decision tree
• An algorithm to create the tree
• An algorithm that applies the tree to data and solves the problem under
consideration (complexity depends on the product of the number of levels
and the maximum branching factor).
Most decision tree techniques differ in how the tree is created. An algorithm
examines data from a training sample with known classification values in
order to build the tree, or the tree may be constructed by a domain expert.
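A tiny sketch of the third part, applying a tree to a tuple; the tree, attribute names, and values here are hypothetical, not from the text:

```python
# Internal nodes are (attribute, {value: subtree}) pairs; leaves are class labels.
tree = ("outlook", {
    "sunny": ("humidity", {"high": "no", "normal": "yes"}),
    "overcast": "yes",
    "rain": "no",
})

def classify(node, tuple_):
    """Walk from the root to a leaf, branching on the tuple's attribute values.
    Work is bounded by the number of levels times the branching factor."""
    while isinstance(node, tuple):
        attribute, branches = node
        node = branches[tuple_[attribute]]
    return node

print(classify(tree, {"outlook": "sunny", "humidity": "normal"}))
```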
3.5 Neural Networks
• The NN can be viewed as a directed graph F = ⟨V, A⟩
consisting of vertices and arcs. The vertices are
partitioned into source (input), sink (output), and
internal (hidden) nodes; every arc ⟨i, j⟩ is labeled
with a numeric value w_ij; every node i is labeled with
a function f_i. The NN as an information-processing
system consists of a directed graph and various
algorithms that access the graph.
• NNs usually work only with numeric data.
• Artificial NN can be classified based on the type of
connectivity and learning into feed-forward or
feedback, with supervised or unsupervised learning.
• Unlike decision trees, after a tuple is processed, the
NN may be changed to improve future performance.
• NNs have long training times and thus may not be
appropriate for applications that require fast results.
However, NNs can be used in massively parallel systems.
Activation Functions
The output of each node i in the NN is based on the definition of an activation function f_i
associated with it. The activation function f_i is applied to the input values x_1i, ⋯, x_ki and weights
w_1i, ⋯, w_ki. The inputs are usually combined in a sum-of-products form S = Σ_h w_hi x_hi.
The following are alternative definitions for the activation function f_i(S) at node i:
• Linear: f_i(S) = cS
• Threshold or step: f_i(S) = 1 if S > T, 0 otherwise
• Sigmoid: f_i(S) = 1 / (1 + e^(−cS)). This function possesses a simple derivative:
∂f_i/∂S = c f_i (1 − f_i)
• Hyperbolic tangent: f_i(S) = (1 − e^(−cS)) / (1 + e^(−cS))
• Gaussian: f_i(S) = e^(−S²/v)
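A sketch of these activation functions; the final check compares a numerical derivative of the sigmoid against the closed form c·f·(1 − f):

```python
import math

def linear(s, c=1.0):    return c * s
def step(s, t=0.0):      return 1.0 if s > t else 0.0
def sigmoid(s, c=1.0):   return 1.0 / (1.0 + math.exp(-c * s))
def tanh_act(s, c=1.0):  return (1.0 - math.exp(-c * s)) / (1.0 + math.exp(-c * s))
def gaussian(s, v=1.0):  return math.exp(-s * s / v)

# the sigmoid's derivative can be written in terms of its own output
s, c = 0.7, 1.0
f = sigmoid(s, c)
h = 1e-6
numeric = (sigmoid(s + h, c) - sigmoid(s - h, c)) / (2 * h)  # central difference
print(abs(numeric - c * f * (1 - f)) < 1e-6)
```

This self-referential derivative is what makes the sigmoid convenient for gradient-based training.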
3.6 Genetic Algorithms
• Initially, a population of individuals P is created, typically at random. From this
population, a new population P′ of the same size is created. The algorithm repeatedly selects
individuals from which to create new ones. These parents (i_1, i_2) are then used to produce
offspring or children (o_1, o_2) using a crossover process. Mutants may then be generated. The
process continues until the new population satisfies the termination condition.
• A fitness function f is used to determine the best individuals in a population. It is then used in
the selection process to choose which parents to keep. Given an objective by which the population can be
measured, the fitness function indicates how well that objective is being met by an
individual.
• The simplest selection process is to select individuals with probability proportional to their fitness. Here p_{I_i} is the probability of selecting individual I_i. This type of selection is called roulette wheel selection:
p_{I_i} = f(I_i) / Σ_{I_j ∈ P} f(I_j)
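A minimal sketch of roulette wheel selection (the population and fitness function are made up); over many draws, an individual's selection frequency tracks its share of the total fitness:

```python
import random

def roulette_select(population, fitness):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitness(i) for i in population)
    r = random.uniform(0.0, total)  # spin the wheel
    acc = 0.0
    for ind in population:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return population[-1]

random.seed(0)
pop = [1, 2, 3, 4]  # individual doubles as its own fitness here
picks = [roulette_select(pop, fitness=float) for _ in range(10000)]
print(picks.count(4) / len(picks))  # individual 4 holds 4/10 of total fitness
```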
• A genetic algorithm (GA) is a computational model consisting of five parts: 1) a starting
population, 2) a crossover technique, 3) a mutation algorithm, 4) a fitness function, and 5) the GA algorithm itself.
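The five parts can be sketched end to end on a toy problem. Everything here, the bit-string encoding, one-point crossover, bit-flip mutation rate, and population sizes, is a made-up illustration, not a specific algorithm from the text:

```python
import random

random.seed(1)
LENGTH, POP, GENS = 12, 20, 40  # arbitrary toy parameters

def fitness(ind):
    """Objective: maximize the number of 1 bits."""
    return sum(ind)

def crossover(p1, p2):
    """One-point crossover producing two children."""
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ind, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in ind]

# 1) starting population
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):                       # 5) the GA loop itself
    new_pop = []
    while len(new_pop) < POP:
        # fitness-proportional (roulette wheel) parent selection; +1 avoids zero weights
        p1, p2 = random.choices(pop, weights=[fitness(i) + 1 for i in pop], k=2)
        for child in crossover(p1, p2):     # 2) crossover
            new_pop.append(mutate(child))   # 3) mutation
    pop = new_pop[:POP]
    gen_best = max(pop, key=fitness)        # 4) fitness evaluates each generation
    if fitness(gen_best) > fitness(best):
        best = gen_best
print(fitness(best))
```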