Big data Clustering
Algorithms & Strategies
FARZAD NOZARIAN
AMIRKABIR UNIVERSITY OF TECHNOLOGY – MARCH 2015
1
Preprocessing
Goals:
1. To assure the quality of the data by reducing the noisy and irrelevant information that it
could contain
2. To reduce the size of the dataset, so the computational cost of the discovery task is also
reduced.
Reducing the size of dataset:
◦ Number of instances
◦ addressed by sampling (the sampled dataset should hold the same information as the whole dataset)
◦ Dimensionality reduction
◦ Feature selection
◦ Feature extraction
2
Clustering algorithms
Hierarchical methods
◦ Divisive
◦ Agglomerative
Based on a similarity matrix computed over every pair of examples.
Some algorithms treat this matrix as a graph;
other algorithms reduce the matrix at each iteration by merging two groups.
The main drawback of these algorithms is their computational cost (O(n²)).
Scanning the dataset many times!
3
Prototype/model based clustering
Prototype and model based clustering assume that clusters fit a specific shape.
Goal: Discover how different numbers of these shapes can explain the spatial distribution of
the data.
The most widely used prototype-based clustering algorithm is K-Means.
◦ K-Means assumes that clusters are defined by their center (the prototype) and have spherical shapes.
◦ To fit this shape, K-Means minimizes the distances from the examples to these centers.
◦ It is solved iteratively using a gradient-descent-style algorithm (see the sketch below).
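To make the iterative center-update concrete, here is a minimal K-Means sketch in plain NumPy; it is illustrative only (names such as `kmeans` and `n_iters` are my own), not code from the cited material.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal K-Means: assign points to the nearest center, then recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial prototypes
    for _ in range(n_iters):
        # distances from every example to every center, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                             # nearest-center assignment
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                 # converged: centers stopped moving
            break
        centers = new_centers
    return centers, labels
```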
4
Density based clustering
DBSCAN
OPTICS is an extension of the original DBSCAN that uses heuristics to find good values for
DBSCAN parameters.
The main drawback of these methods comes from the cost of finding the nearest neighbors of
an example.
Indexing is a solution, but as the number of dimensions grows its performance may degrade to that of a linear search.
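A brief sketch (not from the slides) of the ε-neighborhood query that dominates DBSCAN's cost; without an index, every query scans all n points, giving O(n²) overall.

```python
import numpy as np

def region_query(X, i, eps):
    """Naive epsilon-neighborhood: compares point i against every other point (O(n))."""
    d = np.linalg.norm(X - X[i], axis=1)
    return np.flatnonzero(d <= eps)

# DBSCAN calls region_query once per point, so the naive version costs O(n^2) distance
# computations; a spatial index (e.g. a KD-tree) lowers this, but its benefit fades in
# high dimensions, where the search degrades back toward a linear scan.
```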
5
Grid based clustering
The basic idea: divide the space of instances into hyper-rectangular cells by discretizing the
attributes of the dataset.
Can discover clusters of arbitrary shapes.
Each cell is summarized by the sufficient statistics of the examples it contains.
These methods usually scale well, but this depends on the granularity of the discretization of the space of
examples.
The strategies used to prune the search space allow the computational cost to be largely reduced.
6
Scalability strategies
One-pass strategies
Summarization strategies
Sampling/batch strategies
Approximation strategies
Divide and conquer strategies
7
One-pass strategies
Reduce the number of scans of the data to only one.
This constraint is usually forced by the dataset not fitting in memory, so it has to be read from disk.
This strategy is typically used to preprocess the dataset.
This results in two-stage algorithms: a first stage that applies the one-pass strategy, and a
second stage that processes in memory a summary of the data produced by the first stage.
8
Summarization Strategies
Purpose: obtain a coarse approximation of the data without losing the information that
represents the different densities of examples.
Sufficient statistics like mean and variance.
The summarization can be performed at a single level, as a preprocessing step whose output is fed to a clustering
algorithm.
9
Sampling/batch strategies
Purpose: allow the processing of a part of the dataset in main memory.
In case of more than one sample of the data: the algorithm should be able to process raw data
and cluster summaries.
They scale with the size of the sample, not with the size of the whole dataset.
The use of batches assumes that the data can be processed sequentially and that, after applying
a clustering algorithm to a batch, the result can be merged with the results from previous
batches.
Data stream!
10
Approximation strategies
These strategies assume that some computations can be saved or approximated with reduced
or null impact on the final result.
Algorithm dependent.
The most costly part of clustering algorithms is the distance computation among
instances, or between instances and prototypes.
E.g., some of these algorithms are iterative and the decision about which partition is assigned to
an example does not change after a few iterations. If this can be determined at an early stage, all
these distance computations can be avoided in successive iterations.
This strategy is usually combined with a summarization strategy where groups of examples are
reduced to a single point, which is used to decide whether the assignment can be made using only that point
or whether the distances to all the examples have to be computed.
11
Divide and conquer strategies
Data can be divided into multiple independent datasets, and the clustering results can
then be merged into a final model.
12
Hierarchical Algorithms
13
PINK: A Scalable Algorithm for Single-Linkage Hierarchical Clustering on
Distributed-Memory Architectures (2013) (Northwestern)
A scalable parallel algorithm for single-linkage hierarchical clustering based on decomposing a
problem instance into two different types of subproblems.
As PINK does not explicitly store a distance matrix, it can be applied to much larger problem
sizes.
Algorithm:
◦ Divide a large hierarchical clustering problem instance into a set of smaller sub-problems
◦ Calculate the hierarchical clustering dendrogram for each of these sub-problems
◦ Reconstruct the solution for the original dataset by combining the solutions to the sub-problems.
14
Leader-single-link (l-SL): A distance based clustering
method for arbitrary shaped clusters in large datasets (2011)
Divides the clustering process into two steps:
◦ A one-pass clustering algorithm, resulting in a set of cluster summaries that reduces the size of the
dataset.
◦ This new dataset fits in memory and can be processed using a single-link hierarchical clustering
algorithm.
Leaders clustering method: a single-data-scan, distance-based partitional clustering method.
For a given threshold distance τ, it produces a set of leaders L incrementally. For each pattern
x, if there is a leader l ∈ L such that ||x − l|| ≤ τ, then x is assigned to the cluster represented by l.
If there is no such leader, then x becomes a new leader.
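A minimal sketch of that one-pass leaders scan (illustrative Python, not the authors' code); `tau` stands for the threshold distance τ.

```python
import numpy as np

def leaders(X, tau):
    """Single scan: assign each pattern to the first leader within tau, else promote it."""
    leader_ids, followers = [], {}
    for i, x in enumerate(X):
        for l in leader_ids:
            if np.linalg.norm(x - X[l]) <= tau:   # ||x - l|| <= tau
                followers[l].append(i)
                break
        else:                                     # no existing leader is close enough
            leader_ids.append(i)
            followers[i] = [i]
    return leader_ids, followers
```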
15
One-pass / Summarization
Leader-single-link (l-SL) (cont.)
k-means is also a leader-style algorithm, but it is applicable to numerical datasets only and scans the
dataset more than once before convergence.
After producing the leaders, the leaders set is further clustered using the SL method with cut-off
distance h, which results in a clustering of the leaders.
Finally, each leader is replaced by its followers to produce the final clustering.
16
Density Based Algorithms
17
PDBSCAN
1. Divide the input into several partitions, and distribute these partitions to the available
computers
2. Cluster partitions concurrently using DBSCAN
3. Combine or merge the clusterings of the partitions into a clustering of the whole database.
4. In a distributed environment we should care about data placement:
◦ Load balancing: the partitions should be almost of equal size if we assume that all computers have the
same performance
◦ Minimized communication cost: should avoid accessing those data located on any of the other
computers
◦ Distributed data access: This is not applicable for MR!
18
Divide and conquer / dR*-tree
PDBSCAN (Cont.)
The algorithm is based on the R*-tree and provides not only a spatial data placement strategy for
clustering, but also efficient access to spatial data in a shared-nothing architecture through the
replication of indices.
Proposed data placement solution: group the MBRs of the leaf nodes of the R*-tree into N
partitions such that nearby MBRs are assigned to the same partition and the
partitions are of almost equal size with respect to the number of MBRs.
How can this be achieved? Use space-filling Hilbert curves.
For a given R*-tree, this method works as follows:
◦ Every data page of the R*-tree is assigned a Hilbert value according to its center of gravity, so
successive data pages will be close in space.
◦ Sort the list of pairs by ascending Hilbert values.
◦ If the R*-tree has d data pages and we have n slaves, every slave obtains d/n data pages of the sorted
list
19
PDBSCAN (Cont.)
Proposed efficient access to the distributed data solution: replicate the directory of the R*-tree
on all available computers (dR*-tree)
Now the PDBSCAN algorithm:
◦ Starts with an arbitrary point p within S and retrieves all points which are density-reachable from p
◦ If p is not a core point, no points are density-reachable from p: visit the next point in partition S
◦ If all members of C are contained in S: C is also a cluster
◦ If there are members of C outside of S: C may need to be merged with another cluster found later, so C is called a
merging candidate
20
PDBSCAN (Cont.)
The master PDBSCAN receives a list of merging candidates from every SLAVE.
PDBSCAN collects all the lists L it receives and assigns them to a list LL.
The merging function is nothing more than a nested loop that checks, for each pair of clusters, whether their
intersection is non-empty.
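A sketch of that merging function, assuming each merge candidate is represented as a set of point ids (a hypothetical representation, illustrative Python only):

```python
def find_merges(LL):
    """Nested loop over all pairs of candidate clusters; merge those that share points."""
    merges = []
    for i in range(len(LL)):
        for j in range(i + 1, len(LL)):
            if LL[i] & LL[j]:            # non-empty intersection => same global cluster
                merges.append((i, j))
    return merges
```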
21
MR-DBSCAN (2011)
Implemented as a 4-stage MapReduce pipeline.
Contributions: a quick partitioning strategy for large-scale non-indexed data.
Challenges of designing DBSCAN in MapReduce:
◦ The data interchange mechanism is limited: transferring data between map and reduce is discouraged.
◦ MapReduce doesn’t provide any mechanism such as R-trees or KD-trees to improve multidimensional search.
◦ Maximum parallelism can be achieved when the data is well balanced.
PDBSCAN has been the basis of their work.
However, it aggregates intermediate results in a single node, and MR-DBSCAN optimizes this
issue.
22
Grid / MR
MR-DBSCAN (2011) (Cont.)
Stage 1: Preprocessing:
◦ Main challenges for a partitioning strategy are:
◦ Load balancing
◦ Minimize communication or shuffling cost (all related records, including the data within space Si and
its halo replication from bordering spaces, should easily map to the same key and be shuffled to the target
reducer)
◦ What is the problem with a spatial index? (disadvantages of indexing in MapReduce)
◦ Most of them require iteration or recursion to build a hierarchical structure, which is not practical
in MapReduce. (BUT WHAT ABOUT SPARK?!)
◦ For large-scale data, the hierarchical index can reach one tenth of the original data size, which is huge
and hard to handle.
Proposed solution: grid file (divide the data domain in dimension i into mi portions, each of
which is treated as a mini bucket).
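A sketch of the grid-file idea: each record is mapped to a cell key by discretizing every dimension into its mi buckets (illustrative Python; `lows`, `highs`, and `m` describing the data domain are assumptions of mine).

```python
def grid_key(point, lows, highs, m):
    """Map a record to its grid cell: one bucket index per dimension, clipped to [0, m_i - 1]."""
    key = []
    for x, lo, hi, mi in zip(point, lows, highs, m):
        b = int((x - lo) / (hi - lo) * mi)
        key.append(min(max(b, 0), mi - 1))
    return tuple(key)   # used as the shuffle key so a cell (plus its halo) meets one reducer
```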
23
MR-DBSCAN (2011) (Cont.)
Stage 2: Local DBSCAN :
◦ In PDBSCAN each thread could access not just its own partition's data but also global data
during the processing of the local DBSCAN algorithm. !!BAD in MapReduce!!
The local DBSCAN algorithm will only scan data and extend core points
within space Si.
24
When the cluster scan extends outside Si (suppose a record q outside Si is directly density-reachable
from a core point p in Si), we no longer check whether q is a core point.
q is marked with an 'On-queue' status and put into the Merge Candidates set (MC set) together with the
core point p.
MR-DBSCAN (2011) (Cont.)
Stage 3: Find Merging Mapping:
They optimized the single-node aggregation bottleneck in this stage!
In PDBSCAN, to merge the clusters from different subspaces:
◦ Collect all the MC sets into a big list LL
◦ Among all the points in the list, execute a nested loop to find out whether two items with the
same point id are from different clusters.
◦ If found, merge the clusters.
25
MR-DBSCAN (2011) (Cont.)
Stage 4: Merge
Stage 4.1: Build Global Mapping:
We get several id lists of clusters to be merged for each pair of bordering spaces: (i, c1) <-> (i+1, c2)
The output of this section is the mapping ((gridID, localclusterID), globalclusterID) for each local
cluster in each partition.
Stage 4.2: Merge and Relabel:
The final stage of the algorithm streams all the locally clustered records through the MapReduce
process and replaces their local cluster id with a new global cluster id (gid) based on the
mapping profile from Stage 4.1.
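A sketch of the Stage 4.2 relabel step as a map-only pass, assuming the Stage 4.1 mapping is small enough to broadcast to every mapper (illustrative Python, not the paper's code; field names are hypothetical):

```python
def relabel_mapper(record, mapping):
    """Replace (gridID, localClusterID) with the global cluster id built in Stage 4.1."""
    grid_id, local_id = record["grid_id"], record["cluster_id"]
    gid = mapping.get((grid_id, local_id), (grid_id, local_id))  # unmapped clusters keep a unique id
    record["cluster_id"] = gid
    return record
```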
26
DBCURE (2014)
DBCURE utilizes ellipsoidal τ-neighborhoods instead of spherical ε-neighborhoods and has a
desirable property of being less sensitive to density parameters.
DBCURE is more suitable than OPTICS for being parallelized with MapReduce since the
ellipsoidal τ-neighborhood of each point can be determined in parallel.
Uses the R*-tree to efficiently find the ellipsoidal τ-neighborhoods of a given point.
27
R*-tree / Indexing / MR / Grid
Partitioning Algorithms
28
K-Means Algorithms
Its popularity can be attributed to several reasons:
1. It is conceptually simple and easy to implement.
2. It is versatile, i.e., almost every aspect of the algorithm (initialization, distance function,
termination criterion, etc.) can be modified. (This is evidenced by hundreds of
publications over the last fifty years that extend k-means in a variety of ways.)
3. It has a time complexity that is linear in N, D, and K (in general, D ≪ N and K ≪ N)
4. It has a storage complexity that is linear in N, D, and K
5. It is guaranteed to converge at a quadratic rate
6. It is invariant to data ordering, i.e., random shuffling of the data points (MapReduce
balance!)
29
K-Means Algorithms (Cont.)
k-means has several significant disadvantages:
1. It requires the number of clusters, K, to be specified in advance.
◦ Can be determined automatically by means of various internal/relative cluster validity
measures.
2. It can only detect compact, hyperspherical clusters that are well separated.
◦ Can be alleviated by using a more general distance function such as the Mahalanobis distance,
which permits the detection of hyperellipsoidal clusters.
3. It is sensitive to noise and outlier points.
◦ Can be addressed by outlier pruning or by using a more robust distance function such as the
city-block (ℓ1) distance.
4. It often converges to a local minimum of the criterion function.
◦ For the same reason, it is highly sensitive to the selection of the initial centers
30
K-Means Algorithms (Cont.)
The obstacles of clustering very large datasets using K-Means:
◦ the computational complexity of the distance calculations;
◦ the number of iterations, which increases significantly as the number of samples increases.
Proposed ideas to solve these obstacles:
◦ The first is solved by using the MapReduce model to distribute the computations.
◦ The second is solved by using a two-stage K-Means algorithm or the K-Means++ algorithm (see the seeding sketch below).
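For the second obstacle, a minimal K-Means++ seeding sketch (illustrative Python); better initial centers typically cut the number of iterations needed to converge.

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    """Pick the first center at random, then each next center with probability proportional to D(x)^2."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance of every point to its closest already-chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```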
31
K-Medoids
Both K-Means and K-Medoids attempt to minimize the distance between points labeled to be
in a cluster and a point designated as the center of that cluster.
K-Medoids chooses data points as centers (medoids or exemplars) and works with an arbitrary
matrix of distances between data points instead of the ℓ2 distance.
32
PAM: Partitioning Around Medoids
1. Initialize: randomly select k of the n data points as the medoids
2. Associate each data point to the closest medoid. ("closest" here is defined using any
valid distance metric, most commonly Euclidean distance, Manhattan distance or Minkowski distance)
3. For each medoid m
For each non-medoid data point o
Swap m and o and compute the total cost of the configuration
4. Select the configuration with the lowest cost.
5. Repeat steps 2 to 4 until there is no change in the medoids (see the sketch below).
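A compact sketch of one PAM swap pass over those steps (illustrative Python; `D` is assumed to be a precomputed n x n distance matrix and `medoids` a list of current medoid indices):

```python
import numpy as np

def pam_step(D, medoids):
    """Try every (medoid, non-medoid) swap and keep the configuration with the lowest cost."""
    def cost(meds):
        return D[:, meds].min(axis=1).sum()      # each point pays its distance to the closest medoid
    best, best_cost = list(medoids), cost(medoids)
    for mi, m in enumerate(medoids):
        for o in range(D.shape[0]):
            if o in medoids:
                continue
            trial = list(medoids)
            trial[mi] = o                        # swap medoid m with non-medoid o
            c = cost(trial)
            if c < best_cost:
                best, best_cost = trial, c
    return best, best_cost   # repeat until the medoids stop changing
```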
33
CLARA/CLARANS
Reduces the number of medoid calculations through sampling.
A small portion of the data is first selected from the whole dataset, and then PAM is used to
search for the cluster medoids.
34
Sampling
Fast clustering using MapReduce (2011, KDD)
K-center: the goal is to choose the centers such that the maximum distance between a center and a point
assigned to it is minimized.
K-median: It is a variation of k-means clustering where instead of calculating the mean for each cluster
to determine its centroid, one instead calculates the median. (the 1-norm distance metric, as opposed to the
square of the 2-norm)
Assume that the input is a weighted complete graph G = (V, E) that has an edge xy between any two
points in V, and the weight of the edge xy is d(x, y).
First idea: Adoption of existing algorithms to MR:
◦ Partition input across machines
◦ Each machine performs a computation to sparsify its data
◦ The results are collected on a single machine, which performs the final computation and produces the final solution.
Unfortunately, the total running time of this algorithm can be quite large:
it runs the costly clustering algorithm on Ω(k n) points.
35
Parallel Sampling / MR
Fast clustering using MapReduce (2011, KDD) (Cont.)
This algorithm uses Iterative-Sample as a subroutine:
◦ Performs the following computation in parallel across the machines:
◦ In each round, it adds a small sample of points to the final sample, determines which points are “well
represented” by the sample, and recursively considers only the points that are not well represented
After a good/strong sampling, they put the sampled points on a single machine and run a clustering
algorithm on just the sampled points.
They also devote about three pages to the mathematical proof that their iterative sampling
algorithm produces a good sample.
36
PK-Means: Parallel K-Means Clustering Based on MapReduce
(2009)
Map function: Assign each sample to the closest center
Reduce function: Performs the procedure of updating the new centers.
Combiner function: deals with the partial combination of the intermediate values with the same
key within the same map task
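A sketch of those three functions in Python-style pseudocode (illustrative only, not the paper's Hadoop code); `centers` is assumed to be the broadcast list of current centers.

```python
import numpy as np

def pkmeans_map(point, centers):
    """Map: emit (index of closest center, (point as partial sum, count 1))."""
    j = int(np.argmin([np.linalg.norm(point - c) for c in centers]))
    return j, (point, 1)

def pkmeans_combine(key, values):
    """Combiner: locally sum points and counts for one center within a map task."""
    total = sum(v[0] for v in values)
    count = sum(v[1] for v in values)
    return key, (total, count)

def pkmeans_reduce(key, values):
    """Reduce: merge partial sums from all tasks and output the updated center."""
    total = sum(v[0] for v in values)
    count = sum(v[1] for v in values)
    return key, total / count
```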
37
MR
PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.)
38
PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.)
39
PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.)
40
FMR.K-Means: Fast K-Means Clustering for Very Large Datasets Based on
MapReduce Combined with a New Cutting Method (2015)
Presents a new approach for reducing the number of iterations of the K-Means algorithm.
Based on parallel K-Means over MapReduce.
Proposes a new method, called cutting off the last iterations, based on the differences between the
centers of each cluster in two adjacent iterations.
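A hedged sketch of what such a cutting rule could look like (my reading of the stated idea, not the paper's exact formula): stop iterating once no cluster center moves more than a threshold between two adjacent iterations.

```python
import numpy as np

def should_cut(prev_centers, centers, threshold=1e-3):
    """Cut off the remaining iterations if no center moved more than `threshold` (assumed rule)."""
    shifts = np.linalg.norm(np.asarray(centers) - np.asarray(prev_centers), axis=1)
    return bool(shifts.max() <= threshold)
```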
41
MR / Iteration Elimination
Canopy Clustering (KDD 2000)
Canopy works with datasets that may have:
◦ Millions of data points
◦ Thousands of dimensions
◦ Thousands of clusters
Key idea: Using a cheap, approximate distance measure to efficiently divide the data into
overlapping subsets (Canopies), then clustering is performed by measuring exact distances only
between points that occur in a common canopy.
Use domain-specific features in order to design a cheap distance metric and efficiently create
canopies using the metric.
Fast distance metrics for text, used by search engines, are based on the inverted index.
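A minimal canopy-creation sketch (illustrative Python), assuming a cheap distance function `cheap_dist` and two thresholds T1 > T2 as in the original Canopy paper:

```python
def make_canopies(points, cheap_dist, t1, t2):
    """Greedy canopy creation with a cheap metric; points within t2 stop being canopy centers."""
    remaining = list(range(len(points)))
    canopies = []
    while remaining:
        c = remaining[0]                          # pick any remaining point as a canopy center
        canopy = [i for i in remaining if cheap_dist(points[c], points[i]) < t1]
        canopies.append((c, canopy))
        # remove points that are tightly covered (within t2) from future consideration
        remaining = [i for i in remaining
                     if cheap_dist(points[c], points[i]) >= t2]
    return canopies   # exact distances are later computed only within a shared canopy
```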
42
Approximation / Two-stage
Fuzzy C-Means (FCM)
Given a finite set of data, the algorithm returns a list of c cluster centers and a partition matrix,
where each element wij of the matrix gives the degree to which element xi belongs to cluster cj.
Like the k-means algorithm, the FCM aims to minimize an objective function:
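The objective referenced above (the standard FCM formulation, restored here because the slide's formula image is not present in the text) is:

```latex
J_m \;=\; \sum_{i=1}^{n} \sum_{j=1}^{c} w_{ij}^{\,m}\, \lVert x_i - c_j \rVert^2,
\qquad \text{subject to } \sum_{j=1}^{c} w_{ij} = 1 \ \ \forall i .
```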
This differs from the k-means objective function by the addition of the membership values wij
and the fuzzifier m.
The fuzzifier m determines the level of cluster fuzziness.
43
K-Means + Canopy: An Integrated Clustering Framework
Using Optimized K-means with Firefly and Canopies (2015)
Proposed as an integration of the Firefly meta-heuristic algorithm with Canopy clustering.
44
Approximation / Two-stage
K-medoids Clustering Based on MapReduce and
Optimal Search of Medoids (2014)
Proposes an improved algorithm based on MapReduce and an optimal search of medoids.
Using basic properties of triangle geometry (the triangle inequality), the paper reduces the number of
distance calculations among data elements, helping to search for medoids quickly and reducing the
computational complexity of k-medoids.
45
MR / Optimal Search
More Related Content

What's hot

05 Clustering in Data Mining
05 Clustering in Data Mining05 Clustering in Data Mining
05 Clustering in Data MiningValerii Klymchuk
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classificationSung Yub Kim
 
Spectral clustering Tutorial
Spectral clustering TutorialSpectral clustering Tutorial
Spectral clustering TutorialZitao Liu
 
Applications of paralleL processing
Applications of paralleL processingApplications of paralleL processing
Applications of paralleL processingPage Maker
 
K means Clustering
K means ClusteringK means Clustering
K means ClusteringEdureka!
 
3.7 outlier analysis
3.7 outlier analysis3.7 outlier analysis
3.7 outlier analysisKrish_ver2
 
2.2 decision tree
2.2 decision tree2.2 decision tree
2.2 decision treeKrish_ver2
 
Deep Learning for Graphs
Deep Learning for GraphsDeep Learning for Graphs
Deep Learning for GraphsDeepLearningBlr
 
Spectral clustering
Spectral clusteringSpectral clustering
Spectral clusteringSOYEON KIM
 
Communication costs in parallel machines
Communication costs in parallel machinesCommunication costs in parallel machines
Communication costs in parallel machinesSyed Zaid Irshad
 
Chap7 2 Ecc Intro
Chap7 2 Ecc IntroChap7 2 Ecc Intro
Chap7 2 Ecc IntroEdora Aziz
 
K-Means clustring @jax
K-Means clustring @jaxK-Means clustring @jax
K-Means clustring @jaxAjay Iet
 
5.3 mining sequential patterns
5.3 mining sequential patterns5.3 mining sequential patterns
5.3 mining sequential patternsKrish_ver2
 
5.2 mining time series data
5.2 mining time series data5.2 mining time series data
5.2 mining time series dataKrish_ver2
 
3.2 partitioning methods
3.2 partitioning methods3.2 partitioning methods
3.2 partitioning methodsKrish_ver2
 
Density Based Clustering
Density Based ClusteringDensity Based Clustering
Density Based ClusteringSSA KPI
 

What's hot (20)

05 Clustering in Data Mining
05 Clustering in Data Mining05 Clustering in Data Mining
05 Clustering in Data Mining
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
 
Spectral clustering Tutorial
Spectral clustering TutorialSpectral clustering Tutorial
Spectral clustering Tutorial
 
Applications of paralleL processing
Applications of paralleL processingApplications of paralleL processing
Applications of paralleL processing
 
K means Clustering
K means ClusteringK means Clustering
K means Clustering
 
Slide05 Message Passing Architecture
Slide05 Message Passing ArchitectureSlide05 Message Passing Architecture
Slide05 Message Passing Architecture
 
3.7 outlier analysis
3.7 outlier analysis3.7 outlier analysis
3.7 outlier analysis
 
2.2 decision tree
2.2 decision tree2.2 decision tree
2.2 decision tree
 
Deep Learning for Graphs
Deep Learning for GraphsDeep Learning for Graphs
Deep Learning for Graphs
 
Spectral clustering
Spectral clusteringSpectral clustering
Spectral clustering
 
Communication costs in parallel machines
Communication costs in parallel machinesCommunication costs in parallel machines
Communication costs in parallel machines
 
Chap7 2 Ecc Intro
Chap7 2 Ecc IntroChap7 2 Ecc Intro
Chap7 2 Ecc Intro
 
K-Means clustring @jax
K-Means clustring @jaxK-Means clustring @jax
K-Means clustring @jax
 
01 Data Mining: Concepts and Techniques, 2nd ed.
01 Data Mining: Concepts and Techniques, 2nd ed.01 Data Mining: Concepts and Techniques, 2nd ed.
01 Data Mining: Concepts and Techniques, 2nd ed.
 
5.3 mining sequential patterns
5.3 mining sequential patterns5.3 mining sequential patterns
5.3 mining sequential patterns
 
DBSCAN (1) (4).pptx
DBSCAN (1) (4).pptxDBSCAN (1) (4).pptx
DBSCAN (1) (4).pptx
 
5.2 mining time series data
5.2 mining time series data5.2 mining time series data
5.2 mining time series data
 
02 Data Mining
02 Data Mining02 Data Mining
02 Data Mining
 
3.2 partitioning methods
3.2 partitioning methods3.2 partitioning methods
3.2 partitioning methods
 
Density Based Clustering
Density Based ClusteringDensity Based Clustering
Density Based Clustering
 

Similar to Big data Clustering Algorithms & Strategies Summary

An Efficient Clustering Method for Aggregation on Data Fragments
An Efficient Clustering Method for Aggregation on Data FragmentsAn Efficient Clustering Method for Aggregation on Data Fragments
An Efficient Clustering Method for Aggregation on Data FragmentsIJMER
 
Unsupervised Learning.pptx
Unsupervised Learning.pptxUnsupervised Learning.pptx
Unsupervised Learning.pptxGandhiMathy6
 
[ML]-Unsupervised-learning_Unit2.ppt.pdf
[ML]-Unsupervised-learning_Unit2.ppt.pdf[ML]-Unsupervised-learning_Unit2.ppt.pdf
[ML]-Unsupervised-learning_Unit2.ppt.pdf4NM20IS025BHUSHANNAY
 
Parallel KNN for Big Data using Adaptive Indexing
Parallel KNN for Big Data using Adaptive IndexingParallel KNN for Big Data using Adaptive Indexing
Parallel KNN for Big Data using Adaptive IndexingIRJET Journal
 
Unsupervised learning clustering
Unsupervised learning clusteringUnsupervised learning clustering
Unsupervised learning clusteringDr Nisha Arora
 
Parallel Machine Learning
Parallel Machine LearningParallel Machine Learning
Parallel Machine LearningJanani C
 
Experimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsExperimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsIJDKP
 
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...cscpconf
 
CLUSTERING IN DATA MINING.pdf
CLUSTERING IN DATA MINING.pdfCLUSTERING IN DATA MINING.pdf
CLUSTERING IN DATA MINING.pdfSowmyaJyothi3
 
Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmIJMIT JOURNAL
 
84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1bPRAWEEN KUMAR
 
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...IOSR Journals
 
15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learning15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learningAnil Yadav
 
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...IJCSIS Research Publications
 
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTION
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTIONDECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTION
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTIONcscpconf
 
Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmIJMIT JOURNAL
 
clustering and distance metrics.pptx
clustering and distance metrics.pptxclustering and distance metrics.pptx
clustering and distance metrics.pptxssuser2e437f
 

Similar to Big data Clustering Algorithms & Strategies Summary (20)

An Efficient Clustering Method for Aggregation on Data Fragments
An Efficient Clustering Method for Aggregation on Data FragmentsAn Efficient Clustering Method for Aggregation on Data Fragments
An Efficient Clustering Method for Aggregation on Data Fragments
 
Unsupervised Learning.pptx
Unsupervised Learning.pptxUnsupervised Learning.pptx
Unsupervised Learning.pptx
 
[ML]-Unsupervised-learning_Unit2.ppt.pdf
[ML]-Unsupervised-learning_Unit2.ppt.pdf[ML]-Unsupervised-learning_Unit2.ppt.pdf
[ML]-Unsupervised-learning_Unit2.ppt.pdf
 
Parallel KNN for Big Data using Adaptive Indexing
Parallel KNN for Big Data using Adaptive IndexingParallel KNN for Big Data using Adaptive Indexing
Parallel KNN for Big Data using Adaptive Indexing
 
Unsupervised learning clustering
Unsupervised learning clusteringUnsupervised learning clustering
Unsupervised learning clustering
 
Parallel Machine Learning
Parallel Machine LearningParallel Machine Learning
Parallel Machine Learning
 
Experimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsExperimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithms
 
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
 
CLUSTERING IN DATA MINING.pdf
CLUSTERING IN DATA MINING.pdfCLUSTERING IN DATA MINING.pdf
CLUSTERING IN DATA MINING.pdf
 
Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithm
 
M5.pptx
M5.pptxM5.pptx
M5.pptx
 
84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b
 
F04463437
F04463437F04463437
F04463437
 
50120140505013
5012014050501350120140505013
50120140505013
 
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...
K Means Clustering Algorithm for Partitioning Data Sets Evaluated From Horizo...
 
15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learning15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learning
 
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...
A Kernel Approach for Semi-Supervised Clustering Framework for High Dimension...
 
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTION
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTIONDECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTION
DECISION TREE CLUSTERING: A COLUMNSTORES TUPLE RECONSTRUCTION
 
Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithm
 
clustering and distance metrics.pptx
clustering and distance metrics.pptxclustering and distance metrics.pptx
clustering and distance metrics.pptx
 

More from Farzad Nozarian

SHARE Interface in Flash Storage for Relational and NoSQL Databases
SHARE Interface in Flash Storage for Relational and NoSQL DatabasesSHARE Interface in Flash Storage for Relational and NoSQL Databases
SHARE Interface in Flash Storage for Relational and NoSQL DatabasesFarzad Nozarian
 
Ultimate Goals In Robotics
Ultimate Goals In RoboticsUltimate Goals In Robotics
Ultimate Goals In RoboticsFarzad Nozarian
 
Tank Battle - A simple game powered by JMonkey engine
Tank Battle - A simple game powered by JMonkey engineTank Battle - A simple game powered by JMonkey engine
Tank Battle - A simple game powered by JMonkey engineFarzad Nozarian
 
The Continuous Distributed Monitoring Model
The Continuous Distributed Monitoring ModelThe Continuous Distributed Monitoring Model
The Continuous Distributed Monitoring ModelFarzad Nozarian
 
Apache HBase - Lab Assignment
Apache HBase - Lab AssignmentApache HBase - Lab Assignment
Apache HBase - Lab AssignmentFarzad Nozarian
 
Apache HDFS - Lab Assignment
Apache HDFS - Lab AssignmentApache HDFS - Lab Assignment
Apache HDFS - Lab AssignmentFarzad Nozarian
 
Apache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce TutorialApache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce TutorialFarzad Nozarian
 
Big Data and Cloud Computing
Big Data and Cloud ComputingBig Data and Cloud Computing
Big Data and Cloud ComputingFarzad Nozarian
 
Big Data Processing in Cloud Computing Environments
Big Data Processing in Cloud Computing EnvironmentsBig Data Processing in Cloud Computing Environments
Big Data Processing in Cloud Computing EnvironmentsFarzad Nozarian
 
S4: Distributed Stream Computing Platform
S4: Distributed Stream Computing PlatformS4: Distributed Stream Computing Platform
S4: Distributed Stream Computing PlatformFarzad Nozarian
 

More from Farzad Nozarian (14)

SHARE Interface in Flash Storage for Relational and NoSQL Databases
SHARE Interface in Flash Storage for Relational and NoSQL DatabasesSHARE Interface in Flash Storage for Relational and NoSQL Databases
SHARE Interface in Flash Storage for Relational and NoSQL Databases
 
Object Based Databases
Object Based DatabasesObject Based Databases
Object Based Databases
 
Ultimate Goals In Robotics
Ultimate Goals In RoboticsUltimate Goals In Robotics
Ultimate Goals In Robotics
 
Tank Battle - A simple game powered by JMonkey engine
Tank Battle - A simple game powered by JMonkey engineTank Battle - A simple game powered by JMonkey engine
Tank Battle - A simple game powered by JMonkey engine
 
The Continuous Distributed Monitoring Model
The Continuous Distributed Monitoring ModelThe Continuous Distributed Monitoring Model
The Continuous Distributed Monitoring Model
 
Shark - Lab Assignment
Shark - Lab AssignmentShark - Lab Assignment
Shark - Lab Assignment
 
Apache HBase - Lab Assignment
Apache HBase - Lab AssignmentApache HBase - Lab Assignment
Apache HBase - Lab Assignment
 
Apache HDFS - Lab Assignment
Apache HDFS - Lab AssignmentApache HDFS - Lab Assignment
Apache HDFS - Lab Assignment
 
Apache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce TutorialApache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce Tutorial
 
Apache Spark Tutorial
Apache Spark TutorialApache Spark Tutorial
Apache Spark Tutorial
 
Apache Storm Tutorial
Apache Storm TutorialApache Storm Tutorial
Apache Storm Tutorial
 
Big Data and Cloud Computing
Big Data and Cloud ComputingBig Data and Cloud Computing
Big Data and Cloud Computing
 
Big Data Processing in Cloud Computing Environments
Big Data Processing in Cloud Computing EnvironmentsBig Data Processing in Cloud Computing Environments
Big Data Processing in Cloud Computing Environments
 
S4: Distributed Stream Computing Platform
S4: Distributed Stream Computing PlatformS4: Distributed Stream Computing Platform
S4: Distributed Stream Computing Platform
 

Recently uploaded

Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxReal-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxRTS corp
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...OnePlan Solutions
 
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...OnePlan Solutions
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfRTS corp
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalLionel Briand
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsJean Silva
 
Post Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on IdentityPost Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on Identityteam-WIBU
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...Bert Jan Schrijver
 
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Angel Borroy López
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingShane Coughlan
 
Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsSafe Software
 
Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf31events.com
 
Best Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITBest Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITmanoharjgpsolutions
 
What’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesWhat’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesVictoriaMetrics
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLionel Briand
 
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdfExploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdfkalichargn70th171
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slidesvaideheekore1
 
Odoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 EnterpriseOdoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 Enterprisepreethippts
 
Leveraging AI for Mobile App Testing on Real Devices | Applitools + Kobiton
Leveraging AI for Mobile App Testing on Real Devices | Applitools + KobitonLeveraging AI for Mobile App Testing on Real Devices | Applitools + Kobiton
Leveraging AI for Mobile App Testing on Real Devices | Applitools + KobitonApplitools
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Rob Geurden
 

Recently uploaded (20)

Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxReal-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
 
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive Goal
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero results
 
Post Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on IdentityPost Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on Identity
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
 
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
 
Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data Streams
 
Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf
 
Best Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITBest Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh IT
 
What’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesWhat’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 Updates
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and Repair
 
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdfExploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slides
 
Odoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 EnterpriseOdoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 Enterprise
 
Leveraging AI for Mobile App Testing on Real Devices | Applitools + Kobiton
Leveraging AI for Mobile App Testing on Real Devices | Applitools + KobitonLeveraging AI for Mobile App Testing on Real Devices | Applitools + Kobiton
Leveraging AI for Mobile App Testing on Real Devices | Applitools + Kobiton
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...
 

Big data Clustering Algorithms & Strategies Summary

  • 1. Big data Clustering Algorithms & Strategies FARZAD NOZARIAN AMIRKABIR UNIVERSITY OF TECHNOLOGY – MARCH 2015 1
  • 2. Preprocessing Goals: 1. To assure the quality of the data by reducing the noisy and irrelevant information that it could contain 2. To reduce the size of the dataset, so the computational cost of the discovery task is also reduced. Reducing the size of dataset: ◦ Number of instances ◦ addressed by sampling (the sampled dataset should holds the same information that the whole dataset) ◦ Dimensionality reduction ◦ Feature selection ◦ Feature extraction 2
  • 3. Clustering algorithms Hierarchical methods ◦ Divisive ◦ Agglomerative Based on similarity matrix for each pair of examples Some algorithm consider this matrix as Graph; Other algorithm reduce the matrix each iteration by merging two groups. The main drawback of these algorithms is their computational cost. (o(n2)) Scanning the dataset many times! 3
  • 4. Prototype/model based clustering Prototype and model based clustering assume that clusters fit to a specific shape. Goal: Discover how different numbers of these shapes can explain the spatial distribution of the data. Must used prototype based clustering is K-Means. ◦ K-Means assumes that clusters are defined by their center (the prototype) and have spherical shapes. ◦ To feet this shape K-Means minimizing the distances from the examples to these centers. ◦ solved iteratively using a gradient descent algorithm. 4
  • 5. Density based clustering DBSCAN OPTICS is an extension of the original DBSCAN that uses heuristics to find good values for DBSCAN parameters. The main drawback of this methods comes from the cost of finding the nearest neighbors for an example. Indexing is a solution, but may be degraded with the number of dimensions to a linear search. 5
  • 6. Grid based clustering The basic idea: divide the space of instances in hyperrectangular cells by discretizing the attributes of the dataset. Clusters of arbitrary shapes. Each cell is summarized by the sufficient statistics of the examples it contains. Usually scale well, but it depends on the granularity of the discretization of the space of examples. The strategies used to prune the search space allow to largely reduce the computational cost 6
  • 7. Scalability strategies One-pass strategies Summarization strategies Sampling/batch strategies Approximation strategies Divide and conquer strategies 7
  • 8. One-pass strategies Reduce the number of scans of the data to only one. This constraint may be usually forced by the circumstance that the dataset can not fit in memory and it has to be obtained from disk. This is used to perform a preprocess of the dataset. This results in two stages algorithms, a first one that applies the one-pass strategy and a second one that process in memory a summary of the data obtained by the first stage. 8
  • 9. Summarization Strategies Purpose: obtain a coarse approximation of the data without losing the information that represent the different densities of examples. Sufficient statistics like mean and variance. The summarization can be performed single level, as a preprocess that is feed to a cluster algorithm. 9
  • 10. Sampling/batch strategies Purpose: Allow to perform the processing in main memory for a part of the dataset. In case of more than one sample of the data: The algorithm should be able to process raw data and cluster summaries. They scale on the size of the sampling and not on the size of the whole dataset. The use of batches assume that the data can be processed sequentially and that after applying a clustering algorithm to a batch, the result can be merged with the results from previous batches. Data stream! 10
  • 11. Approximation strategies These strategies assume that some computations can be saved or approximated with reduced or null impact on the final result. Algorithm dependent. Most costly part of clustering algorithms corresponds to distance computation among instances or among instances and prototypes. E.g, some of these algorithms are iterative and the decision about what partition is assigned to an example does not change after a few iterations. If this can be determined at an early stage, all these distance computations can be avoided in successive iterations. This strategy is usually combined with a summarization strategy where groups of examples are reduced to a point that is used to decide if the decision can be performed using only that point or the distances to all the examples have to be computed. 11
  • 12. Divide and conquer strategies Data can be divided in multiple independent datasets and that the clustering results can be then merged on a final model. 12
  • 14. PINK: A Scalable Algorithm for Single-Linkage Hierarchical Clustering on Distributed-Memory Architectures (2013) (northwestern) A scalable parallel algorithm for single-linkage hierarchical clustering based on decomposing a problem instance into two different types of subproblems. As PINK does not explicitly store a distance matrix, it can be applied to much larger problem sizes. Algorithm: ◦ Divide a large hierarchical clustering problem instance into a set of smaller sub-problems ◦ Calculate the hierarchical clustering dendrogram for each of these sub-problems ◦ Reconstruct the solution for the original dataset by combining the solutions to the sub-problems. 14
  • 15. Leader-single-link (l-SL): A distance based clustering method for arbitrary shaped clusters in large datasets (2011) Divides the clustering process in two steps: ◦ One pass clustering algorithm: resulting in a set of cluster summaries that reduce the size of the dataset. ◦ This new dataset fits in memory and can be processed using a single link hierarchical clustering algorithm. Leaders clustering method: is a single data-scan distance based partitional clustering method. For a given threshold distance τ, it produces a set of leaders L incrementally. For each pattern 𝑥, if there is a leader 𝑙 ϵ L such that 𝑥 − 𝑙 ≤ 𝜏, then 𝑥 is assigned to a cluster represented by 𝑙. If there is no such leader, then 𝑥 becomes a new leader. 15 One-passSummarization
  • 16. Leader-single-link (l-SL) (cont.) The k-means also is a leader algorithm but it is applicable to numerical dataset only and scans dataset more than once before convergence. After producing the leaders, the leaders set is further clustered using SL method with cut-off distance ℎ which results in clustering of leaders. Finally, each leader is replaced by its followers to produce final clustering. 16
  • 18. PDBSCAN 1. Divide the input into several partitions, and distribute these partitions to the available computers 2. Cluster partitions concurrently using DBSCAN 3. Combine or merge the clustering's of the partitions into a clustering of the whole database. 4. In distributed environment we should care about data placement: ◦ Load balancing: the partitions should be almost of equal size if we assume that all computers have the same performance ◦ Minimized communication cost: should avoid accessing those data located on any of the other computers ◦ Distributed data access: This is not applicable for MR! 18 DivideandconquerdR*-tree
  • 19. PDBSCAN (Cont.) Algorithm is based on the R*-tree, provides not only a spatial data placement strategy for clustering, but also efficient access to spatial data in a shared nothing architecture through the replication of indices. Proposed data placement solution: grouping the MBRs of leaf nodes of the R*-tree into N partitions such that the nearby MBRs should be assigned to the same partition and the partitions should be almost of equal size with respect to the number of MBRs. How this solution can be achieved? use space filling Hilbert curves For a given R*-tree, this method works as follows: ◦ Every data page of the R*-tree is assigned to a Hilbert value according to its center of gravity. So, successive data pages will be close in space. ◦ Sort the list of pairs by ascending Hilbert values. ◦ If the R*-tree has d data pages and we have n slaves, every slave obtains d/n data pages of the sorted list 19
  • 20. PDBSCAN (Cont.) Proposed efficient access to the distributed data solution: replicate the directory of the R*-tree on all available computers (dR*-tree) Now the PDBSCAN algorithm: ◦ Starts with an arbitrary point p within S and retrieves all points which are density-reachable from p ◦ If p is not a core point, no points are density-reachable from p: visits the next point in partition S ◦ If all members of C are contained in S: C is also a cluster ◦ If there are members of C outside of S: C may need to be merged with another cluster found call C a merging candidate 20
  • 21. PDBSCAN (Cont.) The master PDBSCAN receives a list of merging candidates from every SLAVE. PDBSCAN collects all the lists L it receives and assigns them to a list LL. A merging function is noting else a nested loop that check for each pair of cluster if their intersection aren’t empty! 21
  • 22. MR-DBSCAN (2011) Implement it by a 4-stages MapReduce paradigm. Contributions: quick partitioning strategy for large scale non-indexed data. Challenges of designing DBSCAN in MapReduce: ◦ Data interchange mechanism is limited. Data transferring between map and reduce is not encouraged. ◦ MapReduce doesn’t provide any mechanism such as R-tree, KD-tree to improve multidimensional search. ◦ Maximum parallelism can be achieved when the data is well balanced. PDBSCAN was has been the basis of their work. However it aggregate intermediate results in a single node, and MR-DBSCAN optimize this issue. 22 GridMR
  • 23. MR-DBSCAN (2011) (Cont.) Stage 1: Preprocessing: ◦ Main challenges for a partitioning strategy are: ◦ Load balancing ◦ Minimize communication or shuffling cost (all related records, including the data within space Si and its halo replication from bordering spaces, should easily map to a same key and be shuffled to target reducer) ◦ What is the problem of spatial index? (disadvantages of indexing in MapReduce) ◦ Most of them are required to do iteration recursion to get a hierarchical structure that is not practical in MapReduce. (BUT WHAT ABOUT SPARK?!) ◦ For large scale data its hierarchical index could reach one tenth of its original data size, which in huge and hard to handle. Proposed solution: grid file (divide the data domain in dimension i into mi portions, each of which is considered as a mini bucket.) 23
  • 24. MR-DBSCAN (2011) (Cont.) Stage 2: Local DBSCAN : ◦ In PDBSCAN each thread could access not just its partition data but global data during the processing of local DBSCAN algorithm. !!BAD in MapReduce!! The local DBSCAN algorithm will only scan data and extend core points within space Si. 24 When the cluster scan extends outside Si, assumed that a record q outside Si is directly-density- reachable from a core point p in Si, we will not detect whether q is a core point anymore. q will be marked as ‘On-queue’ status and put into Merge Candidates set (MC set) with core point p as well.
  • 25. MR-DBSCAN (2011) (Cont.) Stage 3: Find Merging Mapping: this is where the single-node aggregation bottleneck of PDBSCAN is optimized. In PDBSCAN, merging the clusters from different subspaces works as follows: ◦ Collect all the MC sets into one big list LL. ◦ Among all the points in the list, run a nested loop to find out whether two items with the same point id belong to different clusters. ◦ If so, merge those clusters. 25
  • 26. MR-DBSCAN (2011) (Cont.) Stage 4: Merge. Stage 4.1: Build Global Mapping: for each pair of bordering spaces we obtain lists of cluster ids to be merged, e.g. (i, c1) <-> (i+1, c2). The output of this stage is the mapping ((gridID, localClusterID), globalClusterID) for every local cluster in every partition (a union-find sketch follows below). Stage 4.2: Merge and Relabel: the final stage streams all locally clustered records through a map-reduce pass and replaces their local cluster id with the new global cluster id (gid) based on the mapping profile from Stage 4.1. 26
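Stage 4.1 can be sketched with a union-find pass over the per-border merge lists; this is one straightforward way to produce the ((gridID, localClusterID), globalClusterID) mapping, not necessarily the exact procedure of the paper.

```python
def build_global_mapping(local_ids, merge_pairs):
    # local_ids: all (gridID, localClusterID) pairs seen in Stage 2.
    # merge_pairs: pairs of local ids that must end up in the same global cluster.
    parent = {lid: lid for lid in local_ids}

    def find(x):                           # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in merge_pairs:
        parent[find(a)] = find(b)          # union the two local clusters

    roots = sorted({find(lid) for lid in local_ids})
    gid = {root: g for g, root in enumerate(roots)}
    return {lid: gid[find(lid)] for lid in local_ids}   # local id -> global id
```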
  • 27. DBCURE (2014) DBCURE uses ellipsoidal τ-neighborhoods instead of spherical ε-neighborhoods and has the desirable property of being less sensitive to density parameters. DBCURE is more suitable than OPTICS for parallelization with MapReduce, since the ellipsoidal τ-neighborhood of each point can be computed in parallel. It uses an R*-tree to efficiently find the ellipsoidal τ-neighborhoods of a given point. 27 R*-tree Indexing MR Grid
  • 29. K-Means Algorithms Its popularity can be attributed to several reasons: 1. It is conceptually simple and easy to implement. 2. It is versatile, i.e., almost every aspect of the algorithm (initialization, distance function, termination criterion, etc.) can be modified. (This is evidenced by hundreds of publications over the last fifty years that extend k-means in a variety of ways.) 3. It has a time complexity that is linear in N, D, and K (in general, D ≪ N and K ≪ N) 4. It has a storage complexity that is linear in N, D, and K 5. It is guaranteed to converge at a quadratic rate 6. It is invariant to data ordering, i.e., random shuffling of the data points (MapReduce balance!) 29
  • 30. K-Means Algorithms (Cont.) k-means has several significant disadvantages: 1. It requires the number of clusters, K, to be specified in advance. ◦ K can be determined automatically by means of various internal/relative cluster validity measures. 2. It can only detect compact, hyperspherical clusters that are well separated. ◦ This can be alleviated by using a more general distance function, such as the Mahalanobis distance, which permits the detection of hyperellipsoidal clusters. 3. It is sensitive to noise and outlier points. ◦ This can be addressed by outlier pruning or by using a more robust distance function such as the city-block (ℓ1) distance. 4. It often converges to a local minimum of the criterion function. ◦ For the same reason, it is highly sensitive to the selection of the initial centers. 30
  • 31. K-Means Algorithms (Cont.) The obstacles of clustering very large datasets with K-Means: ◦ the computational complexity of the distance calculations; ◦ the number of iterations, which increases significantly as the number of samples grows. Proposed ideas to overcome these obstacles: ◦ the first is addressed by using the MapReduce model to distribute the computations; ◦ the second by using a two-stage K-Means algorithm or the K-Means++ algorithm. 31
  • 32. K-Medoids Both K-Means and K-Medoids attempt to minimize the distance between the points labeled as belonging to a cluster and a point designated as the center of that cluster. K-Medoids chooses actual data points as centers (medoids or exemplars) and works with an arbitrary matrix of distances between data points instead of the ℓ2 (Euclidean) distance. 32
  • 33. PAM: Partitioning Around Medoids 1. Initialize: randomly select k of the n data points as the medoids. 2. Associate each data point with the closest medoid ("closest" is defined using any valid distance metric, most commonly the Euclidean, Manhattan or Minkowski distance). 3. For each medoid m and each non-medoid data point o: swap m and o and compute the total cost of the configuration. 4. Select the configuration with the lowest cost. 5. Repeat steps 2 to 4 until there is no change in the medoids. 33
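A compact sketch of PAM on a precomputed distance matrix; it applies each improving swap immediately instead of scanning all swaps before picking the best one, which is a common simplification of steps 3 to 5.

```python
import numpy as np

def pam(D, k, seed=0):
    # D: (n, n) pairwise distance matrix; returns medoid indices and point labels.
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = list(rng.choice(n, size=k, replace=False))   # step 1: random medoids
    best_cost = D[:, medoids].min(axis=1).sum()            # cost of step 2 assignment
    improved = True
    while improved:                                         # step 5: loop until stable
        improved = False
        for i in range(k):                                  # step 3: try every swap
            for o in range(n):
                if o in medoids:
                    continue
                trial = medoids[:i] + [o] + medoids[i + 1:]
                cost = D[:, trial].min(axis=1).sum()
                if cost < best_cost:                        # step 4: keep cheaper config
                    best_cost, medoids, improved = cost, trial, True
    labels = D[:, medoids].argmin(axis=1)                   # final assignment
    return medoids, labels
```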
  • 34. CLARA/CLARANS Reduces the number of medoid calculations through sampling: a small portion of the data is first selected from the whole dataset, and then PAM is used to search for the cluster medoids. 34 Sampling
  • 35. Fast clustering using MapReduce (2011, KDD) K-center: the goal is to choose the centers such that the maximum distance between a center and a point assigned to it is minimized. K-median: a variation of k-means clustering where, instead of calculating the mean for each cluster to determine its centroid, one calculates the median (the 1-norm distance metric, as opposed to the square of the 2-norm). Assume that the input is a weighted complete graph G = (V, E) that has an edge xy between any two points in V, and the weight of the edge xy is d(x, y). First idea: adapt existing algorithms to MapReduce: ◦ Partition the input across machines. ◦ Each machine performs a computation to sparsify its data. ◦ The results are collected on a single machine, which computes the final solution. Unfortunately the total running time of this approach can be quite large: it runs a costly clustering algorithm on Ω(kn) points. 35 Parallel Sampling MR
  • 36. Fast clustering using MapReduce (2011, KDD) (Cont.) The algorithm uses Iterative-Sample as a subroutine, which performs the following computation in parallel across the machines: ◦ In each round it adds a small sample of points to the final sample, determines which points are already "well represented" by the sample, and recursively considers only the points that are not yet well represented. After this strong sampling step, the sampled points are placed on a single machine and a clustering algorithm is run on just the sampled points. The paper also devotes about three pages to the mathematical proof that this iterative sampling is good. 36
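A rough single-machine illustration of the sampling loop's structure; the real subroutine runs in parallel and uses carefully derived sample sizes and thresholds, so the fixed threshold test below is a stand-in, not the paper's criterion.

```python
import random

def iterative_sample(points, dist, sample_per_round, threshold):
    # Grow a sample over several rounds: after each round, drop every point
    # that is already within `threshold` of some sampled point ("well
    # represented") and keep iterating only on the remaining points.
    sample, remaining = [], list(points)
    while remaining:
        draw = random.sample(remaining, min(sample_per_round, len(remaining)))
        sample.extend(draw)
        remaining = [p for p in remaining
                     if min(dist(p, s) for s in sample) > threshold]
    return sample
```

After the loop, the (much smaller) sample would be shipped to a single machine and clustered there.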
  • 37. PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) Map function: assigns each sample to the closest center. Reduce function: performs the procedure of updating the new centers. Combiner function: deals with partial combination of the intermediate values with the same key within the same map task (see the sketch below). 37 MR
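A minimal sketch of one PK-Means iteration expressed as plain map/combine/reduce functions; the Hadoop job wiring, key/value serialization, and convergence loop are omitted, and the function names are illustrative.

```python
import numpy as np

def map_fn(point, centers):
    # Assign one sample to the closest current center; emit (center_id, (point, 1)).
    cid = int(np.argmin(np.linalg.norm(centers - point, axis=1)))
    return cid, (np.asarray(point), 1)

def combine_fn(cid, values):
    # Combiner: partially aggregate values with the same key inside one map task.
    total = sum(v[0] for v in values)        # element-wise sum of the points
    count = sum(v[1] for v in values)
    return cid, (total, count)

def reduce_fn(cid, partial_sums):
    # Reducer: update the center of cluster cid from the combined partial sums.
    total = sum(p[0] for p in partial_sums)
    count = sum(p[1] for p in partial_sums)
    return cid, total / count
```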
  • 38. PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.) 38
  • 39. PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.) 39
  • 40. PK-Means: Parallel K-Means Clustering Based on MapReduce (2009) (Cont.) 40
  • 41. FMR.K-Means: Fast K-Means Clustering for Very Large Datasets Based on MapReduce Combined with a New Cutting Method (2015) Presents a new approach for reducing the number of iterations of the K-Means algorithm, built on the parallel K-Means (PK-Means) MapReduce implementation. Proposes a new method, called cutting off the last iterations, based on the differences between the centers of each cluster in two adjacent iterations (see the sketch below). 41 MR Iteration Elimination
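A tiny sketch of such a cutting criterion, assuming the rule is simply to stop once every center has moved less than a tolerance between two adjacent iterations; the paper's exact rule may differ.

```python
import numpy as np

def cut_iterations(prev_centers, new_centers, tol=1e-3):
    # True when all centers shifted less than tol since the previous iteration,
    # i.e. the remaining iterations can be cut off.
    shifts = np.linalg.norm(np.asarray(new_centers) - np.asarray(prev_centers), axis=1)
    return bool((shifts < tol).all())
```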
  • 42. Canopy Clustering (KDD 2000) Canopy targets datasets with either: ◦ millions of data points, ◦ thousands of dimensions, or ◦ thousands of clusters. Key idea: use a cheap, approximate distance measure to efficiently divide the data into overlapping subsets (canopies); clustering is then performed by measuring exact distances only between points that occur in a common canopy. Domain-specific features are used to design the cheap distance metric and to create canopies efficiently with it. Fast distance metrics for text, as used by search engines, are based on the inverted index. 42 Approximation Two-stage
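A minimal sketch of canopy construction with the classic two thresholds T1 > T2 and an arbitrary cheap distance function; parameter names follow the usual description of the method rather than a specific implementation.

```python
import random

def build_canopies(points, cheap_dist, t1, t2):
    # Greedy canopy construction: pick a random point as a canopy center, add
    # every remaining point within t1 to that canopy, and remove from further
    # consideration every point within the tighter threshold t2.
    assert t1 > t2
    remaining = list(points)
    canopies = []
    while remaining:
        center = remaining.pop(random.randrange(len(remaining)))
        canopy, still_available = [center], []
        for p in remaining:
            d = cheap_dist(center, p)
            if d < t1:
                canopy.append(p)            # joins this (possibly overlapping) canopy
            if d >= t2:
                still_available.append(p)   # may still seed or join other canopies
        remaining = still_available
        canopies.append(canopy)
    return canopies
```

Exact-distance clustering (e.g. K-Means) is then run only within each canopy.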
  • 43. Fuzzy C-Means (FCM) Given a finite set of data, the algorithm returns a list of c cluster centers and a partition matrix, where element wij gives the degree to which point xi belongs to cluster cj. Like the k-means algorithm, FCM aims to minimize an objective function; it differs from the k-means objective by the addition of the membership values wij and the fuzzifier m. The fuzzifier m determines the level of cluster fuzziness. 43
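The objective function itself appeared as a figure on the original slide; the standard form it refers to, with N points, C clusters, memberships wij and fuzzifier m > 1, is:

```latex
J_m = \sum_{i=1}^{N}\sum_{j=1}^{C} w_{ij}^{\,m}\,\lVert x_i - c_j \rVert^{2},
\qquad
w_{ij} = \Biggl[\sum_{l=1}^{C}\biggl(\frac{\lVert x_i - c_j\rVert}{\lVert x_i - c_l\rVert}\biggr)^{\!\frac{2}{m-1}}\Biggr]^{-1}
```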
  • 44. K-Means + Canopy: An Integrated Clustering Framework Using Optimized K-means with Firefly and Canopies (2015) Proposed as an integration of two algorithms: the Firefly meta-heuristic and canopy clustering. 44 Approximation Two-stage
  • 45. K-medoids Clustering Based on MapReduce and Optimal Search of Medoids (2014) Proposes an improved k-medoids algorithm based on MapReduce and an optimal search of medoids. Using basic properties of the triangle inequality, the paper reduces the number of distance calculations among data elements, which helps to find medoids quickly and lowers the computational complexity of k-medoids (a generic pruning bound is sketched below). 45 MR Optimal Search
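One generic way the triangle inequality yields such pruning (a sketch of the idea, not necessarily the paper's exact rule): if x is at distance d(x, m) from its current medoid m and a candidate medoid c satisfies d(m, c) >= 2 * d(x, m), then d(x, c) >= d(m, c) - d(x, m) >= d(x, m), so d(x, c) never needs to be computed.

```python
def can_skip_candidate(d_x_m, d_m_c):
    # True when candidate medoid c provably cannot be closer to point x than
    # its current medoid m: d(x, c) >= d(m, c) - d(x, m) >= d(x, m)
    # whenever d(m, c) >= 2 * d(x, m), so the distance d(x, c) can be skipped.
    return d_m_c >= 2 * d_x_m
```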