Prof. Pier Luca Lanzi
Representative-Based Clustering
Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
Prof. Pier Luca Lanzi
Readings
• Mining of Massive Datasets (Chapter 7)
• Data Mining and Analysis (Section 13.3)
Prof. Pier Luca Lanzi
How can we represent clusters?
Prof. Pier Luca Lanzi
Representation-Based Algorithms
• Given a dataset of N instances and a desired number of clusters
k, this class of algorithms generates a partition C of the N instances into k clusters
{C1, C2, …, Ck}
• For each cluster there is a point that summarizes the cluster
• The common choice is the mean of the points in the cluster,
Οi = (1/ni) Σx∈Ci x
where ni = |Ci| and Οi is the centroid
Prof. Pier Luca Lanzi
Representation-Based Algorithms
• The goal of the clustering process is to select the best partition according to
some scoring function
• The sum of squared errors (SSE) is the most common scoring function,
SSE(C) = Σi=1..k Σx∈Ci ‖x − Οi‖²
• The goal of the clustering process is thus to find
C* = argminC SSE(C)
• Brute-force Approach
§ Generate all the possible clusterings C = {C1, C2, …, Ck} and select the
best one. Unfortunately, there are O(k^N / k!) possible partitions
Prof. Pier Luca Lanzi
k-Means Algorithm
• Most widely known representative-based algorithm
• Assumes a Euclidean space but can be easily extended to the
non-Euclidean case
• Employs a greedy iterative approach that minimizes the SSE
objective; accordingly, it can converge to a locally optimal instead
of a globally optimal clustering
Prof. Pier Luca Lanzi
1. Initially choose k points that are
likely to be in different clusters;
2. Make these points the centroids of
their clusters;
3. FOR each remaining point p DO
Find the centroid to which p is closest;
Add p to the cluster of that centroid;
Adjust the centroid of that
cluster to account for p;
END;
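As a concrete reference, the following is a minimal R sketch of the k-means loop (it uses batch updates after each full pass rather than the per-point centroid adjustment above; simple_kmeans and its defaults are illustrative, not library code):

simple_kmeans <- function(X, k, iters = 10) {
  # start from k points chosen at random as the initial centroids
  centroids <- X[sample(nrow(X), k), , drop = FALSE]
  cluster <- rep(1, nrow(X))
  for (step in 1:iters) {
    # assignment step: each point goes to the closest centroid
    d <- as.matrix(dist(rbind(centroids, X)))[-(1:k), 1:k]
    cluster <- apply(d, 1, which.min)
    # update step: each centroid becomes the mean of its points
    for (j in 1:k)
      if (any(cluster == j))
        centroids[j, ] <- colMeans(X[cluster == j, , drop = FALSE])
  }
  list(cluster = cluster, centers = centroids)
}

For example, simple_kmeans(as.matrix(iris[, 1:4]), 3) mimics what kmeans(iris[, 1:4], 3) does in the R session later in this lecture.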
Prof. Pier Luca Lanzi
Initializing Clusters
• Solution 1
§Pick points that are as far away from one another as possible.
• Variation of solution 1 (see the R sketch after this list)
Pick the first point at random;
WHILE there are fewer than k points DO
Add the point whose minimum distance
from the selected points is as large as
possible;
END;
• Solution 2
§Cluster a sample of the data, perhaps hierarchically, so there
are k clusters. Pick a point from each cluster, perhaps that
point closest to the centroid of the cluster.
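A hedged R sketch of the farthest-first variation above (init_farthest is an illustrative name, not a library function; X is a numeric matrix with one point per row):

init_farthest <- function(X, k) {
  idx <- sample(nrow(X), 1)            # pick the first point at random
  D <- as.matrix(dist(X))              # pairwise Euclidean distances
  while (length(idx) < k) {
    # distance from each point to its nearest already-selected point
    min_d <- apply(D[, idx, drop = FALSE], 1, min)
    min_d[idx] <- -Inf                 # never re-pick a selected point
    idx <- c(idx, which.max(min_d))
  }
  X[idx, , drop = FALSE]
}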
Prof. Pier Luca Lanzi
Two Different K-means Clusterings
[Scatter plots (x vs. y): Original Points, Optimal Clustering, Sub-optimal Clustering]
Prof. Pier Luca Lanzi
Importance of Choosing the Initial Centroids
[Six scatter plots (x vs. y) showing k-means Iterations 1–6 from one choice of initial centroids]
Prof. Pier Luca Lanzi
Importance of Choosing the Initial Centroids
[Five scatter plots (x vs. y) showing k-means Iterations 1–5 from a different choice of initial centroids]
Prof. Pier Luca Lanzi
Why Is Selecting the Best Initial Centroids Difficult?
• If there are K ‘real’ clusters, the chance of selecting one initial
centroid from each cluster is small, and it gets smaller as K grows
• If the clusters all have the same size, n, then
P = (ways to select one centroid from each cluster) / (ways to select K centroids)
= K! n^K / (Kn)^K = K!/K^K
• For example, if K = 10, then probability = 10!/10^10 ≈ 0.00036 (checked in R below)
• Sometimes the initial centroids will readjust themselves in the ‘right’
way, and sometimes they won’t
• Consider an example of five pairs of clusters
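The quoted probability can be checked directly in R:

K <- 10
factorial(K) / K^K   # 0.00036288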
Prof. Pier Luca Lanzi
Ten Clusters Example
[Four scatter plots (x vs. y): k-means Iterations 1–4 on the ten-cluster data]
Starting with two initial centroids in one cluster of each pair of clusters
Prof. Pier Luca Lanzi
10 Clusters Example
Starting with some pairs of clusters having three initial centroids, while others have only one.
[Four scatter plots (x vs. y): k-means Iterations 1–4]
Prof. Pier Luca Lanzi
Dealing with the Initial Centroids Issue
• Multiple runs help, but the probability is not on your side
• Sample and use another clustering method (hierarchical?) to
determine initial centroids
• Select more than k initial centroids and then select among these
initial centroids
• Postprocessing
• Bisecting K-means, not as susceptible to initialization issues
Prof. Pier Luca Lanzi
Updating Centers Incrementally
• In the basic K-means algorithm, centroids are updated after all
points are assigned to a centroid
• An alternative is to update the centroids after each assignment
(incremental approach)
§Each assignment updates zero or two centroids
§More expensive
§Introduces an order dependency
§Never get an empty cluster
§Can use “weights” to change the impact
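As a quick illustration of why the incremental approach is workable, the new centroid after adding a point p to a cluster of N points summarized by its current centroid can be computed as a running mean (a sketch; update_centroid is an illustrative name):

update_centroid <- function(center, N, p) {
  # running mean: no need to revisit the N points already absorbed
  list(center = (N * center + p) / (N + 1), N = N + 1)
}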
Prof. Pier Luca Lanzi
Pre-processing and Post-processing
• Pre-processing
§Normalize the data
§Eliminate outliers
• Post-processing
§Eliminate small clusters that may represent outliers
§Split ‘loose’ clusters, i.e., clusters with relatively high SSE
§Merge clusters that are ‘close’ and
that have relatively low SSE
§These steps can be used during the clustering process
Prof. Pier Luca Lanzi
Bisecting K-means
• Variant of K-means that can produce
a partitional or a hierarchical clustering
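The slide does not spell the procedure out; the usual scheme starts from a single cluster and repeatedly splits one cluster (here, the largest) in two with 2-means until k clusters remain. A rough R sketch under that assumption:

bisecting_kmeans <- function(X, k) {
  cluster <- rep(1, nrow(X))
  while (max(cluster) < k) {
    # choose the cluster to bisect (largest; highest SSE is another option)
    target <- as.integer(names(which.max(table(cluster))))
    rows <- which(cluster == target)
    # split it in two with plain 2-means
    split <- kmeans(X[rows, , drop = FALSE], 2)
    cluster[rows[split$cluster == 2]] <- max(cluster) + 1
  }
  cluster
}

Keeping the intermediate splits yields a hierarchy; keeping only the final assignment yields a flat partition, which is why the same algorithm can produce either.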
Prof. Pier Luca Lanzi
Bisecting K-means Example
Prof. Pier Luca Lanzi
Limitations of k-Means
Prof. Pier Luca Lanzi
Limitations of K-means
• K-means has problems when clusters are of differing
§Sizes
§Densities
§Non-globular shapes
• K-means also has problems when the data contains outliers.
Prof. Pier Luca Lanzi
Limitations of K-means:
Differing Sizes
Original Points K-means (3 Clusters)
Prof. Pier Luca Lanzi
Limitations of K-means:
Differing Density
Original Points K-means (3 Clusters)
Prof. Pier Luca Lanzi
Limitations of K-means:
Non-globular Shapes
Original Points K-means (2 Clusters)
Prof. Pier Luca Lanzi
Overcoming K-means Limitations
Original Points K-means Clusters
One solution is to use many clusters: k-means then finds parts of the natural clusters, which need to be put together afterwards.
Prof. Pier Luca Lanzi
Overcoming K-means Limitations
Original Points K-means Clusters
Prof. Pier Luca Lanzi
Overcoming K-means Limitations
Original Points K-means Clusters
Prof. Pier Luca Lanzi
K-Means Clustering Summary
• Strength
§Relatively efficient
§Often terminates at a local optimum
§The global optimum may be found using techniques such as
deterministic annealing and genetic algorithms
• Weakness
§Applicable only when the mean is defined (what about
categorical data?)
§Need to specify k, the number of clusters, in advance
§Unable to handle noisy data and outliers
§Not suitable to discover clusters with non-convex shapes
Prof. Pier Luca Lanzi
K-Means Clustering Summary
• Advantages
§Simple, understandable
§Items automatically assigned to clusters
• Disadvantages
§Must pick the number of clusters beforehand
§All items forced into a cluster
§Too sensitive to outliers
Prof. Pier Luca Lanzi
Variations of the K-Means Method
• A few variants of the k-means which differ in
§Selection of the initial k means
§Dissimilarity calculations
§Strategies to calculate cluster means
• Handling categorical data: k-modes
§Replacing means of clusters with modes
§Using new dissimilarity measures
to deal with categorical objects
§Using a frequency-based method
to update modes of clusters
§A mixture of categorical and numerical data:
k-prototype method
Prof. Pier Luca Lanzi
The BFR Algorithm
Prof. Pier Luca Lanzi
The BFR Algorithm
• BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to
handle very large (disk-resident) data sets
• Assumes that clusters are normally distributed around a centroid
in a Euclidean space
• Standard deviations in different dimensions may vary
• Clusters are axis-aligned ellipses
• Efficient way to summarize clusters (want
memory required O(clusters) and not O(data))
Prof. Pier Luca Lanzi
The BFR Algorithm
• Points are read from disk one chunk at a time (so that each chunk
fits into main memory)
• Most points from previous memory loads are summarized by
simple statistics
• To begin, from the initial load we select the initial k centroids by
some sensible approach
§Take k random points
§Take a small random sample and cluster optimally
§Take a sample; pick a random point, and then
k–1 more points, each as far from the previously selected
points as possible
Prof. Pier Luca Lanzi
Three Classes of Points
• Discard set (DS)
§Points close enough to a centroid to be summarized
• Compression set (CS)
§Groups of points that are close together but not close to any
existing centroid
§These points are summarized, but not assigned to a cluster
• Retained set (RS)
§Isolated points waiting to be assigned to a compression set
Prof. Pier Luca Lanzi
The Status of the BFR Algorithm
[Diagram: a cluster whose points are in the DS, with its centroid; compressed sets whose points are in the CS; isolated points in the RS]
Discard set (DS): Close enough to a centroid to be summarized
Compression set (CS): Summarized, but not assigned to a cluster
Retained set (RS): Isolated points
Prof. Pier Luca Lanzi
Summarizing Sets of Points
• For each cluster, the discard set (DS) is summarized by:
• The number of points, N
• The vector SUM, whose component SUM(i) is the sum of the
coordinates of the points in the ith dimension
• The vector SUMSQ whose component SUMSQ(i) is the sum of
squares of coordinates in ith dimension
Prof. Pier Luca Lanzi
Summarizing Points: Comments
• 2d + 1 values represent any size cluster
(d is the number of dimensions)
• Average in each dimension (the centroid) can be calculated as
SUM(i)/N
• Variance of a cluster’s discard set in dimension i is computed as
(SUMSQ(i)/N) − (SUM(i)/N)²
• And standard deviation is the square root of that variance
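A small R illustration of these summaries (summarize_ds and friends are illustrative names; points is a numeric matrix with one row per point):

# 2d + 1 values summarize a discard set: N, SUM, SUMSQ
summarize_ds <- function(points) {
  list(N = nrow(points), SUM = colSums(points), SUMSQ = colSums(points^2))
}
centroid_of <- function(s) s$SUM / s$N
variance_of <- function(s) s$SUMSQ / s$N - (s$SUM / s$N)^2

Two summaries can also be merged by adding their N, SUM, and SUMSQ component-wise, which is what makes this representation convenient for combining miniclusters.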
Prof. Pier Luca Lanzi
Processing Data in the BFR Algorithm
1. First, all points that are “sufficiently close” to the centroid of a cluster are added to
that cluster (by updating its N, SUM, and SUMSQ); the point itself is then discarded
2. Points that are not “sufficiently close” to any centroid are clustered together with the
points in the retained set; any clustering algorithm, even a hierarchical one, can be used
in this step
3. The miniclusters derived from the new points and the old retained set are merged (e.g., by
using the same criteria used for hierarchical clustering)
4. Any point outside a cluster or a minicluster is dropped
When the last chunk of data has been processed, the remaining miniclusters and the points in the
retained set can either be labeled as outliers or be assigned to the nearest centroid
(as k-means would do).
Note that for miniclusters we only have N, SUM, and SUMSQ, so it is easier to use criteria
based on variance and similar statistics; for example, we might combine two clusters if their
combined variance is below some threshold.
Prof. Pier Luca Lanzi
“Sufficiently Close”
• Two approaches have been proposed to determine whether a point is
sufficiently close to a cluster
• Add p to a cluster if
§ It has the centroid closest to p
§ It is also very unlikely that, after all the points have been processed, some
other cluster centroid will be found to be nearer to p
• We can measure the probability that, if p belongs to a cluster, it would be
found as far as it is from the centroid of that cluster
§ This is where the assumption about the clusters containing normally
distributed points aligned with the axes of the space is used
Prof. Pier Luca Lanzi
Mahalanobis Distance
• It is used to decide whether a point is close enough to a cluster
• It is computed as the distance between a point and the centroid of a cluster,
normalized by the standard deviation of the cluster in each dimension
• Given p = (p1, …, pd) and c = (c1, …, cd), the Mahalanobis distance between p
and c is computed as
d(p, c) = sqrt( Σi=1..d ((pi − ci) / σi)² )
where σi is the standard deviation of the cluster in the ith dimension
• We assign p to the cluster with the least Mahalanobis distance from p, provided that the
distance is below a certain threshold. A threshold of 4 means that there is
only about one chance in a million of excluding a point that actually belongs to the cluster
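Continuing the summary sketch from the earlier BFR slides, the distance and the threshold test might look like this (illustrative names; centroid_of and variance_of are defined in that sketch):

mahalanobis_bfr <- function(p, s) {
  # normalize each coordinate difference by the cluster's std deviation
  sqrt(sum(((p - centroid_of(s)) / sqrt(variance_of(s)))^2))
}
# accept p only if it is within the threshold (4, as in the slide)
close_enough <- function(p, s, threshold = 4) mahalanobis_bfr(p, s) < threshold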
Prof. Pier Luca Lanzi
k-Means for Arbitrary Shapes
(the CURE algorithm)
Prof. Pier Luca Lanzi
The CURE Algorithm
• Problem with BFR/k-means:
§Assumes clusters are normally
distributed in each dimension
§And axes are fixed – ellipses at
an angle are not OK
• CURE (Clustering Using REpresentatives):
§Assumes a Euclidean distance
§Allows clusters to assume any shape
§Uses a collection of representative
points to represent clusters
Prof. Pier Luca Lanzi
k-means, BFR … and these?
[Figure: cluster shapes that the previous algorithms cannot capture]
Prof. Pier Luca Lanzi
[Scatter plot, salary vs. age: salaries of humanities (h) vs. engineering (e) graduates]
Prof. Pier Luca Lanzi
[Same salary vs. age scatter, humanities (h) vs. engineering (e) graduates]
Prof. Pier Luca Lanzi
Starting CURE – Pass 1 of 2
• Pick a random sample of points that fit into main memory
• Cluster sample points to create initial clusters (e.g. using
hierarchical clustering)
• Pick representative points
§For each cluster pick k representative points
(as dispersed as possible)
§Create synthetic representative points by moving
the k points toward the centroid of the cluster (e.g., by 20%)
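A rough R sketch of the representative-point construction for one cluster (reusing the init_farthest sketch from the initialization slide to scatter the picks; m and alpha are illustrative parameters):

cure_representatives <- function(points, m = 4, alpha = 0.2) {
  center <- colMeans(points)
  # pick m well-scattered points within the cluster
  reps <- init_farthest(points, m)
  # shrink them toward the centroid to get synthetic representatives
  reps + alpha * sweep(-reps, 2, center, "+")
}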
Prof. Pier Luca Lanzi
[Salary vs. age scatter: dispersed representative points picked for each cluster]
Prof. Pier Luca Lanzi
[Salary vs. age scatter: synthetic representative points moved toward the cluster centroids]
Prof. Pier Luca Lanzi
Starting CURE – Pass 2 of 2
• Rescan the whole dataset (from secondary memory) and, for each point p,
place p in the “closest cluster”, that is, the cluster having the
representative point closest to p
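A hedged R sketch of this assignment rule (rep_list would hold one matrix of representative points per cluster; assign_cure is an illustrative name):

assign_cure <- function(p, rep_list) {
  # distance from p to the nearest representative of each cluster
  dmin <- sapply(rep_list, function(reps)
    min(sqrt(rowSums(sweep(reps, 2, p)^2))))
  which.min(dmin)   # index of the closest cluster
}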
Prof. Pier Luca Lanzi
Expectation Maximization
Prof. Pier Luca Lanzi
Expectation-Maximization (EM) Clustering
• k-means assigns each point to only one cluster (hard assignment)
• The approach can be extended to consider soft assignments, so that each point
has a probability of belonging to each cluster
• We assume that each cluster Ci is characterized by a multivariate normal
distribution, and thus identified by
§ The mean vector Οi
§ The covariance matrix Σi
• A clustering is identified by a parameter vector θ defined as
θ = {Ο1, Σ1, P(C1), …, Οk, Σk, P(Ck)}
where the P(Ci) are the prior probabilities of the clusters, which sum up to one
Prof. Pier Luca Lanzi
Expectation-Maximization (EM) Clustering
• The goal of maximum likelihood estimation (MLE) is to choose the parameters
θ that maximize the likelihood of the data, that is,
θ* = argmaxθ P(D | θ), where log P(D | θ) = Σj log ( Σi f(xj | Οi, Σi) P(Ci) )
• General idea
§ Start with an initial estimate of the parameter vector
§ Iteratively rescore the patterns against the mixture density produced by
the parameter vector
§ Use the rescored patterns to update the parameter estimates
§ Patterns end up in the same cluster when their scores place them in the
same mixture component
Prof. Pier Luca Lanzi
The EM (Expectation Maximization) Algorithm
• Initially, randomly assign k cluster centers
• Iteratively refine the clusters based on two steps
• Expectation step
§ Assign each data point xi to cluster Ck with the probability
P(Ck | xi) = p(xi | Ck) P(Ck) / Σj p(xi | Cj) P(Cj)
where p(xi | Ck) follows the normal distribution
§ This step calculates the probability of cluster membership of xi for each Ck
• Maximization step
§ The model parameters are re-estimated from the updated probabilities
§ For instance, for the mean,
Οk = Σi P(Ck | xi) xi / Σi P(Ck | xi)
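A compact R sketch of one EM iteration for a one-dimensional Gaussian mixture (a minimal illustration, not the lecture's notebook code; theta bundles the current means, standard deviations, and priors):

em_step <- function(x, theta) {
  k <- length(theta$mu)
  # E-step: responsibility of each component for each point
  w <- sapply(1:k, function(i)
    theta$prior[i] * dnorm(x, theta$mu[i], theta$sigma[i]))
  w <- w / rowSums(w)
  # M-step: re-estimate means, std deviations, and priors
  mu <- colSums(w * x) / colSums(w)
  sigma <- sqrt(colSums(w * outer(x, mu, "-")^2) / colSums(w))
  list(mu = mu, sigma = sigma, prior = colMeans(w))
}

Iterating em_step until the log-likelihood stops improving gives the usual EM fit; with k-means, the E-step would collapse to a hard argmax assignment.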
Prof. Pier Luca Lanzi
Run the Python notebooks for the
algorithms included in this lecture
Prof. Pier Luca Lanzi
Examples using R
Prof. Pier Luca Lanzi
k-Means Clustering in R
set.seed(1234)
# random generated points
x<-rnorm(12, mean=rep(1:3,each=4), sd=0.2)
y<-rnorm(12, mean=rep(c(1,2,1),each=4), sd=0.2)
plot(x,y,pch=19,cex=2,col="blue")
# put the points into a data frame
d <- data.frame(x,y)
km <- kmeans(d, 3)
names(km)
plot(x,y,pch=19,cex=2,col="blue")
par(new=TRUE)
plot(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
Prof. Pier Luca Lanzi
k-Means Clustering in R
# generate other random centroids to start with
km <- kmeans(d, 3, centers=cbind(runif(3,0,3),runif(3,0,2)))
plot(x,y,pch=19,cex=2,col="blue")
par(new=TRUE)
plot(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
Prof. Pier Luca Lanzi
Evaluation of k-Means & Number of Clusters
###
### Evaluate clustering in kmeans using elbow/knee analysis
###
library(foreign)
library(GMD)
iris = read.arff("iris.arff")
# init two vectors that will contain the evaluation
# in terms of within and between sum of squares
plot_wss = rep(0,12)
plot_bss = rep(0,12)
# evaluate every clustering
for(i in 1:12)
{
cl <- kmeans(iris[,1:4],i)
plot_wss[i] <- cl$tot.withinss
plot_bss[i] <- cl$betweenss;
}
Prof. Pier Luca Lanzi
Evaluation of k-Means & Number of Clusters
# plot the results
x = 1:12
plot(x, y=plot_bss, main="Within/Between Cluster Sum-of-square", cex=2,
pch=18, col="blue", xlab="Number of Clusters", ylab="Evaluation",
ylim=c(0,700))
lines(x, plot_bss, col="blue")
par(new=TRUE)
plot(x, y=plot_wss, cex=2, pch=19, col="red", ylab="", xlab="",
ylim=c(0,700))
lines(x,plot_wss, col="red");
Prof. Pier Luca Lanzi
Elbow & Knee Analysis
Prof. Pier Luca Lanzi
Software Packages
http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means
http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM)

More Related Content

What's hot

Parallel and Distributed Information Retrieval System
Parallel and Distributed Information Retrieval SystemParallel and Distributed Information Retrieval System
Parallel and Distributed Information Retrieval Systemvimalsura
 
Machine Learning and Data Mining: 04 Association Rule Mining
Machine Learning and Data Mining: 04 Association Rule MiningMachine Learning and Data Mining: 04 Association Rule Mining
Machine Learning and Data Mining: 04 Association Rule MiningPier Luca Lanzi
 
What is Deep Learning and how it helps to Healthcare Sector?
What is Deep Learning and how it helps to Healthcare Sector?What is Deep Learning and how it helps to Healthcare Sector?
What is Deep Learning and how it helps to Healthcare Sector?Cogito Tech LLC
 
Document Automation
Document AutomationDocument Automation
Document AutomationKevin Clifford
 
Unsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation for Consistency TrainingUnsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation for Consistency TrainingSungchul Kim
 
Neural Networks in Data Mining - “An Overview”
Neural Networks  in Data Mining -   “An Overview”Neural Networks  in Data Mining -   “An Overview”
Neural Networks in Data Mining - “An Overview”Dr.(Mrs).Gethsiyal Augasta
 
Data Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisData Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisDataminingTools Inc
 
Database auditing essentials
Database auditing essentialsDatabase auditing essentials
Database auditing essentialsCraig Mullins
 
Data Mining Techniques
Data Mining TechniquesData Mining Techniques
Data Mining TechniquesHouw Liong The
 
WEB BASED INFORMATION RETRIEVAL SYSTEM
WEB BASED INFORMATION RETRIEVAL SYSTEMWEB BASED INFORMATION RETRIEVAL SYSTEM
WEB BASED INFORMATION RETRIEVAL SYSTEMSai Kumar Ale
 
CS6010 Social Network Analysis Unit III
CS6010 Social Network Analysis   Unit IIICS6010 Social Network Analysis   Unit III
CS6010 Social Network Analysis Unit IIIpkaviya
 
Data Mining: Future Trends and Applications
Data Mining: Future Trends and ApplicationsData Mining: Future Trends and Applications
Data Mining: Future Trends and ApplicationsIJMER
 
Information retrieval 14 fuzzy set models of ir
Information retrieval 14 fuzzy set models of irInformation retrieval 14 fuzzy set models of ir
Information retrieval 14 fuzzy set models of irVaibhav Khanna
 
How Powerful are Graph Networks?
How Powerful are Graph Networks?How Powerful are Graph Networks?
How Powerful are Graph Networks?IAMAl
 
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Simplilearn
 
Term weighting
Term weightingTerm weighting
Term weightingPrimya Tamil
 
Neural Networks: Support Vector machines
Neural Networks: Support Vector machinesNeural Networks: Support Vector machines
Neural Networks: Support Vector machinesMostafa G. M. Mostafa
 

What's hot (20)

Parallel and Distributed Information Retrieval System
Parallel and Distributed Information Retrieval SystemParallel and Distributed Information Retrieval System
Parallel and Distributed Information Retrieval System
 
Lecture - Data Mining
Lecture - Data MiningLecture - Data Mining
Lecture - Data Mining
 
Session-Based Recommender Systems
Session-Based Recommender SystemsSession-Based Recommender Systems
Session-Based Recommender Systems
 
Machine Learning and Data Mining: 04 Association Rule Mining
Machine Learning and Data Mining: 04 Association Rule MiningMachine Learning and Data Mining: 04 Association Rule Mining
Machine Learning and Data Mining: 04 Association Rule Mining
 
What is Deep Learning and how it helps to Healthcare Sector?
What is Deep Learning and how it helps to Healthcare Sector?What is Deep Learning and how it helps to Healthcare Sector?
What is Deep Learning and how it helps to Healthcare Sector?
 
Document Automation
Document AutomationDocument Automation
Document Automation
 
Unsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation for Consistency TrainingUnsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation for Consistency Training
 
Neural Networks in Data Mining - “An Overview”
Neural Networks  in Data Mining -   “An Overview”Neural Networks  in Data Mining -   “An Overview”
Neural Networks in Data Mining - “An Overview”
 
Data Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisData Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysis
 
Database auditing essentials
Database auditing essentialsDatabase auditing essentials
Database auditing essentials
 
Data Mining Techniques
Data Mining TechniquesData Mining Techniques
Data Mining Techniques
 
WEB BASED INFORMATION RETRIEVAL SYSTEM
WEB BASED INFORMATION RETRIEVAL SYSTEMWEB BASED INFORMATION RETRIEVAL SYSTEM
WEB BASED INFORMATION RETRIEVAL SYSTEM
 
CS6010 Social Network Analysis Unit III
CS6010 Social Network Analysis   Unit IIICS6010 Social Network Analysis   Unit III
CS6010 Social Network Analysis Unit III
 
Data Mining: Future Trends and Applications
Data Mining: Future Trends and ApplicationsData Mining: Future Trends and Applications
Data Mining: Future Trends and Applications
 
04 data mining : data generelization
04 data mining : data generelization04 data mining : data generelization
04 data mining : data generelization
 
Information retrieval 14 fuzzy set models of ir
Information retrieval 14 fuzzy set models of irInformation retrieval 14 fuzzy set models of ir
Information retrieval 14 fuzzy set models of ir
 
How Powerful are Graph Networks?
How Powerful are Graph Networks?How Powerful are Graph Networks?
How Powerful are Graph Networks?
 
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
 
Term weighting
Term weightingTerm weighting
Term weighting
 
Neural Networks: Support Vector machines
Neural Networks: Support Vector machinesNeural Networks: Support Vector machines
Neural Networks: Support Vector machines
 

Similar to DMTM Lecture 13 Representative based clustering

DMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based ClusteringDMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based ClusteringPier Luca Lanzi
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringPier Luca Lanzi
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017Iwan Sofana
 
Selection K in K-means Clustering
Selection K in K-means ClusteringSelection K in K-means Clustering
Selection K in K-means ClusteringJunghoon Kim
 
Data Mining Lecture_7.pptx
Data Mining Lecture_7.pptxData Mining Lecture_7.pptx
Data Mining Lecture_7.pptxSubrata Kumer Paul
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringPier Luca Lanzi
 
clustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdfclustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdfp_manimozhi
 
Mathematics online: some common algorithms
Mathematics online: some common algorithmsMathematics online: some common algorithms
Mathematics online: some common algorithmsMark Moriarty
 
Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25MapR Technologies
 
Pattern recognition binoy k means clustering
Pattern recognition binoy  k means clusteringPattern recognition binoy  k means clustering
Pattern recognition binoy k means clustering108kaushik
 
Advanced database and data mining & clustering concepts
Advanced database and data mining & clustering conceptsAdvanced database and data mining & clustering concepts
Advanced database and data mining & clustering conceptsNithyananthSengottai
 
Oxford 05-oct-2012
Oxford 05-oct-2012Oxford 05-oct-2012
Oxford 05-oct-2012Ted Dunning
 
machine learning - Clustering in R
machine learning - Clustering in Rmachine learning - Clustering in R
machine learning - Clustering in RSudhakar Chavan
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25Ted Dunning
 
Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford MapR Technologies
 
Sudoku solver
Sudoku solverSudoku solver
Sudoku solverPankti Fadia
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3Nandhini S
 

Similar to DMTM Lecture 13 Representative based clustering (20)

DMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based ClusteringDMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based Clustering
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017
 
Selection K in K-means Clustering
Selection K in K-means ClusteringSelection K in K-means Clustering
Selection K in K-means Clustering
 
Data Mining Lecture_7.pptx
Data Mining Lecture_7.pptxData Mining Lecture_7.pptx
Data Mining Lecture_7.pptx
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical Clustering
 
clustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdfclustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdf
 
Mathematics online: some common algorithms
Mathematics online: some common algorithmsMathematics online: some common algorithms
Mathematics online: some common algorithms
 
Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25
 
Pattern recognition binoy k means clustering
Pattern recognition binoy  k means clusteringPattern recognition binoy  k means clustering
Pattern recognition binoy k means clustering
 
Advanced database and data mining & clustering concepts
Advanced database and data mining & clustering conceptsAdvanced database and data mining & clustering concepts
Advanced database and data mining & clustering concepts
 
Oxford 05-oct-2012
Oxford 05-oct-2012Oxford 05-oct-2012
Oxford 05-oct-2012
 
machine learning - Clustering in R
machine learning - Clustering in Rmachine learning - Clustering in R
machine learning - Clustering in R
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25
 
Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford
 
Clustering.pdf
Clustering.pdfClustering.pdf
Clustering.pdf
 
Sudoku solver
Sudoku solverSudoku solver
Sudoku solver
 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3
 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
 

More from Pier Luca Lanzi

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i VideogiochiPier Luca Lanzi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiPier Luca Lanzi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomePier Luca Lanzi
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaPier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Pier Luca Lanzi
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationPier Luca Lanzi
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationPier Luca Lanzi
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningPier Luca Lanzi
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningPier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesPier Luca Lanzi
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringPier Luca Lanzi
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringPier Luca Lanzi
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesPier Luca Lanzi
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsPier Luca Lanzi
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesPier Luca Lanzi
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesPier Luca Lanzi
 
DMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationDMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationPier Luca Lanzi
 
DMTM Lecture 05 Data representation
DMTM Lecture 05 Data representationDMTM Lecture 05 Data representation
DMTM Lecture 05 Data representationPier Luca Lanzi
 

More from Pier Luca Lanzi (20)

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparation
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data exploration
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph mining
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text mining
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 Clustering
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensembles
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethods
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rules
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision trees
 
DMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationDMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluation
 
DMTM Lecture 05 Data representation
DMTM Lecture 05 Data representationDMTM Lecture 05 Data representation
DMTM Lecture 05 Data representation
 

Recently uploaded

The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docxPoojaSen20
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 

Recently uploaded (20)

The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docx
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 

DMTM Lecture 13 Representative based clustering

  • 1. Prof. Pier Luca Lanzi Representative-Based Clustering Data Mining andText Mining (UIC 583 @ Politecnico di Milano)
  • 2. Prof. Pier Luca Lanzi Readings • Mining of Massive Datasets (Chapter 7) • Data Mining and Analysis (Section 13.3) 2
  • 3. Prof. Pier Luca Lanzi How can we represent clusters?
  • 4. Prof. Pier Luca Lanzi Representation-Based Algorithms • Given a dataset of N instances, and a desired number of clusters k, this class of algorithms generates a partition C of N in k clusters {C1, C2, …, Ck} • For each cluster there is a point that summarizes the cluster • The common choice being the mean of the points in the cluster where ni = |Ci| and Îźi is the centroid 4
  • 5. Prof. Pier Luca Lanzi Representation-Based Algorithms • The goal of the clustering process is to select the best partition according to some scoring function • Sum of squared errors is the most common scoring function • The goal of the clustering process is thus to find • Brute-force Approach § Generate all the possible clustering C = {C1, C2, …, Ck} and select the best one. Unfortunately, there are O(kN/k!) possible partitions 5
  • 6. Prof. Pier Luca Lanzi k-Means Algorithm • Most widely known representative-based algorithm • Assumes an Euclidean space but can be easily extended to the non-Euclidean case • Employs a greedy iterative approaches that minimizes the SSE objective. Accordingly it can converge to a local optimal instead of a globally optimal clustering. 6
  • 7. Prof. Pier Luca Lanzi 1. Initially choose k points that are likely to be in different clusters; 2. Make these points the centroids of their clusters; 3. FOR each remaining point p DO Find the centroid to which p is closest; Add p to the cluster of that centroid; Adjust the centroid of that cluster to account for p; END;
  • 23. Prof. Pier Luca Lanzi Initializing Clusters • Solution 1 §Pick points that are as far away from one another as possible. • Variation of solution 1 Pick the first point at random; WHILE there are fewer than k points DO Add the point whose minimum distance from the selected points is as large as possible; END; • Solution 2 §Cluster a sample of the data, perhaps hierarchically, so there are k clusters. Pick a point from each cluster, perhaps that point closest to the centroid of the cluster. 23
  • 24. Prof. Pier Luca Lanzi Two different K-means Clusterings 24 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Sub-optimal Clustering -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Optimal Clustering Original Points
  • 25. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 25 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 xy Iteration 2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 4 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 5 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 6
  • 26. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 26 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 4 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 5
  • 27. Prof. Pier Luca Lanzi 27Why Selecting the Best Initial Centroids is Difficult? • If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small. • Chance is relatively small when K is large • If clusters are the same size, n, then • For example, if K = 10, then probability = 10!/1010 = 0.00036 • Sometimes the initial centroids will readjust themselves in ‘right’ way, and sometimes they don’t • Consider an example of five pairs of clusters
  • 28. Prof. Pier Luca Lanzi Ten Clusters Example 28 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 1 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 2 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 3 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 4 Starting with two initial centroids in one cluster of each pair of clusters
  • 29. Prof. Pier Luca Lanzi 10 Clusters Example 29 Starting with some pairs of clusters having three initial centroids, while other have only one. 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 1 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 2 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 3 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 4
  • 30. Prof. Pier Luca Lanzi 30Dealing with the Initial Centroids Issue • Multiple runs, helps, but probability is not on your side • Sample and use another clustering method (hierarchical?) to determine initial centroids • Select more than k initial centroids and then select among these initial centroids • Postprocessing • Bisecting K-means, not as susceptible to initialization issues
  • 31. Prof. Pier Luca Lanzi 31Updating Centers Incrementally • In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid • An alternative is to update the centroids after each assignment (incremental approach) §Each assignment updates zero or two centroids §More expensive §Introduces an order dependency §Never get an empty cluster §Can use “weights” to change the impact
  • 32. Prof. Pier Luca Lanzi 32Pre-processing and Post-processing • Pre-processing §Normalize the data §Eliminate outliers • Post-processing §Eliminate small clusters that may represent outliers §Split ‘loose’ clusters, i.e., clusters with relatively high SSE §Merge clusters that are ‘close’ and that have relatively low SSE §These steps can be used during the clustering process
  • 33. Prof. Pier Luca Lanzi Bisecting K-means • Variant of K-means that can produce a partitional or a hierarchical clustering 33
  • 34. Prof. Pier Luca Lanzi Bisecting K-means Example 34
  • 35. Prof. Pier Luca Lanzi Limitation of k-Means 35
  • 36. Prof. Pier Luca Lanzi 36Limitations of K-means • K-means has problems when clusters are of differing §Sizes §Densities §Non-globular shapes • K-means has also problems when the data contains outliers.
  • 37. Prof. Pier Luca Lanzi Limitations of K-means: Differing Sizes 37 Original Points K-means (3 Clusters)
  • 38. Prof. Pier Luca Lanzi Limitations of K-means: Differing Density 38 Original Points K-means (3 Clusters)
  • 39. Prof. Pier Luca Lanzi Limitations of K-means: Non-globular Shapes 39 Original Points K-means (2 Clusters)
  • 40. Prof. Pier Luca Lanzi Overcoming K-means Limitations 40 Original Points K-means Clusters One solution is to use many clusters. Find parts of clusters, but need to put together.
  • 41. Prof. Pier Luca Lanzi Overcoming K-means Limitations 41 Original Points K-means Clusters
  • 42. Prof. Pier Luca Lanzi Overcoming K-means Limitations 42 Original Points K-means Clusters
  • 43. Prof. Pier Luca Lanzi 43K-Means Clustering Summary • Strength §Relatively efficient §Often terminates at a local optimum §The global optimum may be found using techniques such as: deterministic annealing and genetic algorithms • Weakness §Applicable only when mean is defined, then what about categorical data? §Need to specify k, the number of clusters, in advance §Unable to handle noisy data and outliers §Not suitable to discover clusters with non-convex shapes
  • 44. Prof. Pier Luca Lanzi 44K-Means Clustering Summary • Advantages §Simple, understandable §Items automatically assigned to clusters • Disadvantages §Must pick number of clusters before hand §All items forced into a cluster §Too sensitive to outliers
  • 45. Prof. Pier Luca Lanzi 45Variations of the K-Means Method • A few variants of the k-means which differ in §Selection of the initial k means §Dissimilarity calculations §Strategies to calculate cluster means • Handling categorical data: k-modes §Replacing means of clusters with modes §Using new dissimilarity measures to deal with categorical objects §Using a frequency-based method to update modes of clusters §A mixture of categorical and numerical data: k-prototype method
  • 46. Prof. Pier Luca Lanzi 46Variations of the K-Means Method • A few variants of the k-means which differ in §Selection of the initial k means §Dissimilarity calculations §Strategies to calculate cluster means • Handling categorical data: k-modes §Replacing means of clusters with modes §Using new dissimilarity measures to deal with categorical objects §Using a frequency-based method to update modes of clusters §A mixture of categorical and numerical data: k-prototype method
  • 47. Prof. Pier Luca Lanzi The BFR Algorithm
  • 48. Prof. Pier Luca Lanzi The BFR Algorithm • BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to handle very large (disk-resident) data sets • Assumes that clusters are normally distributed around a centroid in a Euclidean space • Standard deviations in different dimensions may vary • Clusters are axis-aligned ellipses • Efficient way to summarize clusters (want memory required O(clusters) and not O(data)) 48
  • 49. Prof. Pier Luca Lanzi The BFR Algorithm • Points are read from disk one chunk at the time (so to fit into main memory) • Most points from previous memory loads are summarized by simple statistics • To begin, from the initial load we select the initial k centroids by some sensible approach §Take k random points §Take a small random sample and cluster optimally §Take a sample; pick a random point, and then k–1 more points, each as far from the previously selected points as possible 49
  • 50. Prof. Pier Luca Lanzi Three Classes of Points • Discard set (DS) §Points close enough to a centroid to be summarized • Compression set (CS) §Groups of points that are close together but not close to any existing centroid §These points are summarized, but not assigned to a cluster • Retained set (RS) §Isolated points waiting to be assigned to a compression set 50
  • 51. Prof. Pier Luca Lanzi The Status of BFR Algorithm 51 A cluster. Its points are in the DS. The centroid Compressed sets. Their points are in the CS. Points in the RS Discard set (DS): Close enough to a centroid to be summarized Compression set (CS): Summarized, but not assigned to a cluster Retained set (RS): Isolated points
  • 52. Prof. Pier Luca Lanzi Summarizing Sets of Points • For each cluster, the discard set (DS) is summarized by: • The number of points, N • The vector SUM, whose component SUM(i) is the sum of the coordinates of the points in the ith dimension • The vector SUMSQ whose component SUMSQ(i) is the sum of squares of coordinates in ith dimension 52 A cluster. All its points are in the DS. The centroid
  • 53. Prof. Pier Luca Lanzi Summarizing Points: Comments • 2d + 1 values represent any size cluster (d is the number of dimensions) • Average in each dimension (the centroid) can be calculated as SUM(i)/N • Variance of a cluster’s discard set in dimension i is computed as (SUMSQ(i)/N) – (SUM(i)/N)2 • And standard deviation is the square root of that variance 53
  • 54. Prof. Pier Luca Lanzi Processing Data in the BFR Algorithm 1. First, all points that are “sufficiently close” to the centroid of a cluster are added to that cluster (by updating its summary statistics); the point itself is then discarded 2. The points that are not “sufficiently close” to any centroid are clustered together with the points in the retained set; any main-memory algorithm can be used in this step, even a hierarchical one 3. The miniclusters derived from the new points and the old retained set are then merged (e.g., by using the same criteria used for hierarchical clustering) 4. Any point outside a cluster or a minicluster stays in the retained set. When the last chunk of data has been processed, the remaining miniclusters and the points in the retained set can either be labeled as outliers or be assigned to the nearest centroid (as k-means would do). Note that for miniclusters we only have N, SUM, and SUMSQ, so it is easier to use merging criteria based on variance and similar statistics; for instance, we might combine two miniclusters if their combined variance is below some threshold. 54
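The merging rule of steps 3–4 can be expressed directly on the summaries. A minimal sketch, assuming a hypothetical per-dimension variance threshold and the helper functions from the previous sketch:

# merge two miniclusters only if the combined variance stays small
maybe_merge <- function(a, b, threshold = 1.0) {
  m <- merge_summaries(a, b)
  if (all(variance(m) < threshold)) m else NULL  # NULL = do not merge
}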
  • 55. Prof. Pier Luca Lanzi “Sufficiently Close” • Two approaches have been proposed to determine whether a point is sufficiently close to a cluster • Add p to a cluster if § It has the centroid closest to p § It is also very unlikely that, after all the points have been processed, some other cluster centroid will be found to be nearer to p • We can measure the probability that, if p belongs to a cluster, it would be found as far as it is from the centroid of that cluster § This is where the assumption about the clusters containing normally distributed points aligned with the axes of the space is used 55
  • 56. Prof. Pier Luca Lanzi Mahalanobis Distance • It is used to decide whether a point is close enough to a cluster • It is computed as the distance between a point and the centroid of a cluster, normalized by the standard deviation of the cluster in each dimension • Given p = (p1, …, pd) and c = (c1, …, cd), the Mahalanobis distance between p and c is computed as sqrt( Σi ((pi – ci) / σi)² ), where σi is the standard deviation of the cluster in the ith dimension • We assign p to the cluster with the smallest Mahalanobis distance from p, provided that the distance is below a certain threshold. A threshold of 4 means that there is only about one chance in a million of excluding a point that actually belongs to the cluster 56
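Using the (N, SUM, SUMSQ) helpers from before, the whole test fits in a few lines; this is a sketch of the axis-aligned version described on the slide (note that R’s built-in mahalanobis() uses a full covariance matrix instead):

# normalized distance of point p from a summarized cluster s
mahalanobis_dist <- function(p, s) {
  mu <- centroid(s)
  sigma <- sqrt(variance(s))
  sqrt(sum(((p - mu) / sigma)^2))
}

# accept p into the nearest cluster only if its distance is below the threshold
closest_cluster <- function(p, summaries, threshold = 4) {
  d <- sapply(summaries, function(s) mahalanobis_dist(p, s))
  if (min(d) < threshold) which.min(d) else NA  # NA: leave p for the RS/CS
}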
  • 57. Prof. Pier Luca Lanzi k-Means for Arbitrary Shapes (the CURE algorithm)
  • 58. Prof. Pier Luca Lanzi The CURE Algorithm • Problem with BFR/k-means: §Assumes clusters are normally distributed in each dimension §And the axes are fixed – ellipses at an angle are not OK • CURE (Clustering Using REpresentatives): §Assumes a Euclidean distance §Allows clusters to assume any shape §Uses a collection of representative points to represent clusters 58 [Figure: an axis-aligned ellipse vs. an arbitrarily shaped cluster]
  • 59. Prof. Pier Luca Lanzi k-means, BFR, and these?
  • 60–61. Prof. Pier Luca Lanzi [Scatter plot, “salary of humanities vs engineering”: salary against age, with engineering employees (e) and humanities employees (h) forming two elongated clusters]
  • 62. Prof. Pier Luca Lanzi Starting CURE – Pass 1 of 2 • Pick a random sample of points that fits into main memory • Cluster the sample points to create the initial clusters (e.g., using hierarchical clustering) • Pick representative points §For each cluster, pick k representative points, as dispersed as possible §Create synthetic representative points by moving the k points toward the centroid of the cluster (e.g., by 20% of the distance); see the sketch after Pass 2 below 62
  • 63–64. Prof. Pier Luca Lanzi [Scatter plot, “salary of humanities vs engineering”: the same data, first with dispersed representative points picked in each cluster, then with the synthetic representative points obtained by moving them toward the centroids]
  • 65. Prof. Pier Luca Lanzi Starting CURE – Pass 2 of 2 • Rescan the whole dataset (from secondary memory) and, for each point p, place p in the “closest cluster”, that is, the cluster with the representative point closest to p 65
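The two passes can be condensed into the following sketch (illustrative code; the farthest-first choice of representatives, the 20% shrink factor, and the use of hclust() on the sample are assumptions consistent with the slides):

# pass 1: per-cluster representatives, shrunk toward the centroid
cure_representatives <- function(points, labels, k_rep = 4, shrink = 0.2) {
  lapply(split(as.data.frame(points), labels), function(cl) {
    cl <- as.matrix(cl)
    ctr <- colMeans(cl)
    # pick k_rep dispersed points with a farthest-first heuristic
    reps <- cl[sample(nrow(cl), 1), , drop = FALSE]
    while (nrow(reps) < min(k_rep, nrow(cl))) {
      d <- apply(cl, 1, function(p) min(colSums((t(reps) - p)^2)))
      reps <- rbind(reps, cl[which.max(d), , drop = FALSE])
    }
    # move each representative a fraction of the way toward the centroid
    reps + shrink * (matrix(ctr, nrow(reps), ncol(reps), byrow = TRUE) - reps)
  })
}

# pass 2: assign a point to the cluster owning its nearest representative
assign_point <- function(p, rep_sets) {
  which.min(sapply(rep_sets, function(r) min(colSums((t(r) - p)^2))))
}

Here labels could come, for instance, from cutree(hclust(dist(sample)), k) on the in-memory sample.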
  • 66. Prof. Pier Luca Lanzi Expectation Maximization
  • 67. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • k-means assigns each point to only one cluster (hard assignment) • The approach can be extended to consider soft assignment of points to clusters, so that each point has a probability of belonging to each cluster • We assume that each cluster Ci is characterized by a multivariate normal distribution and is thus identified by §The mean vector μi §The covariance matrix Σi • A clustering is identified by a parameter vector θ defined as θ = {μi, Σi, P(Ci)}, where the P(Ci) are the prior probabilities of the clusters Ci and sum up to one 67
  • 68. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • The goal of maximum likelihood estimation (MLE) is to choose the parameters θ that maximize the likelihood, that is, P(D | θ) = ∏j Σi f(xj | μi, Σi) P(Ci), where f is the multivariate normal density • General idea §Start with an initial estimate of the parameter vector §Iteratively rescore the patterns against the mixture density produced by the parameter vector §The rescored patterns are then used to update the parameter estimates §A pattern belongs to the cluster (mixture component) in which its score is highest 68
  • 69. Prof. Pier Luca Lanzi The EM (Expectation Maximization) Algorithm • Initially, randomly assign k cluster centers • Iteratively refine the clusters based on two steps • Expectation step §Assign each data point xi to cluster Ck with probability P(Ck | xi) = p(xi | Ck) P(Ck) / Σj p(xi | Cj) P(Cj), where p(xi | Ck) follows the normal distribution §This step calculates the probability of cluster membership of xi for each Ck • Maximization step §The model parameters are re-estimated from the updated probabilities §For instance, the mean is updated as μk = Σi xi P(Ck | xi) / Σi P(Ck | xi) 69
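The two steps translate almost line-for-line into code. Below is a minimal hand-rolled EM for a one-dimensional mixture of k Gaussians (an illustrative sketch; for real use, packages such as mclust implement the full multivariate case):

# EM for a 1-D Gaussian mixture, mirroring the E and M steps above
em_1d <- function(x, k = 2, iters = 50) {
  n <- length(x)
  mu <- sample(x, k); sigma <- rep(sd(x), k); prior <- rep(1 / k, k)
  for (it in 1:iters) {
    # E-step: posterior probability of each cluster for each point
    dens <- sapply(1:k, function(j) prior[j] * dnorm(x, mu[j], sigma[j]))
    resp <- dens / rowSums(dens)
    # M-step: re-estimate the parameters from the soft assignments
    nk <- colSums(resp)
    mu <- colSums(resp * x) / nk
    sigma <- sqrt(colSums(resp * (x - rep(mu, each = n))^2) / nk)
    prior <- nk / n
  }
  list(mu = mu, sigma = sigma, prior = prior)
}

For example, em_1d(c(rnorm(100, 0), rnorm(100, 5)), k = 2) recovers means close to 0 and 5.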
  • 70. Prof. Pier Luca Lanzi Run the Python notebooks for the algorithms included in this lecture
  • 71. Prof. Pier Luca Lanzi Examples using R
  • 72. Prof. Pier Luca Lanzi k-Means Clustering in R
set.seed(1234)
# randomly generated points around three centers
x <- rnorm(12, mean=rep(1:3, each=4), sd=0.2)
y <- rnorm(12, mean=rep(c(1,2,1), each=4), sd=0.2)
plot(x, y, pch=19, cex=2, col="blue")
# put the points into a data frame and run k-means
d <- data.frame(x, y)
km <- kmeans(d, 3)
names(km)
plot(x, y, pch=19, cex=2, col="blue")
par(new=TRUE)
plot(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
72
  • 73. Prof. Pier Luca Lanzi k-Means Clustering in R
# start from other randomly generated centroids
# (when centers is a matrix, k is taken from its number of rows)
km <- kmeans(d, centers=cbind(runif(3,0,3), runif(3,0,2)))
plot(x, y, pch=19, cex=2, col="blue")
par(new=TRUE)
plot(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
73
  • 74. Prof. Pier Luca Lanzi Evaluation on k-Means & Number of Clusters
###
### Evaluate clustering in kmeans using elbow/knee analysis
###
library(foreign)
library(GMD)
iris = read.arff("iris.arff")
# init two vectors that will contain the evaluation
# in terms of within and between sum of squares
plot_wss = rep(0, 12)
plot_bss = rep(0, 12)
# evaluate every clustering
for (i in 1:12) {
  cl <- kmeans(iris[,1:4], i)
  plot_wss[i] <- cl$tot.withinss
  plot_bss[i] <- cl$betweenss
}
74
  • 75. Prof. Pier Luca Lanzi Evaluation on k-Means & Number of Clusters
# plot the results
x = 1:12
plot(x, y=plot_bss, main="Within/Between Cluster Sum-of-square",
     cex=2, pch=18, col="blue", xlab="Number of Clusters",
     ylab="Evaluation", ylim=c(0,700))
lines(x, plot_bss, col="blue")
par(new=TRUE)
plot(x, y=plot_wss, cex=2, pch=19, col="red", ylab="", xlab="", ylim=c(0,700))
lines(x, plot_wss, col="red")
75
  • 76. Prof. Pier Luca Lanzi Elbow & Knee Analysis 76
  • 77. Prof. Pier Luca Lanzi Software Packages http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM)