Prof. Pier Luca Lanzi
Graph Mining
Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
Prof. Pier Luca Lanzi
References
• Jure Leskovec, Anand Rajaraman, Jeff Ullman. Mining of Massive
Datasets, Chapter 5 & Chapter 10
• Book and slides are available from http://www.mmds.org
2
Prof. Pier Luca Lanzi
Facebook social graph
4-degrees of separation [Backstrom-Boldi-Rosa-Ugander-Vigna, 2011]
Prof. Pier Luca Lanzi
Connections between political blogs
Polarization of the network [Adamic-Glance, 2005]
Prof. Pier Luca Lanzi
Citation networks and Maps of science
[Börner et al., 2012]
Prof. Pier Luca Lanzi
Web as a graph: pages are nodes, edges are links
Prof. Pier Luca Lanzi
How is the Web Organized?
• Initial approaches
§Human curated Web directories
§Yahoo, DMOZ, LookSmart
• Then, Web search
§Information Retrieval investigates finding
relevant docs in a small
and trusted set
§Newspaper articles, Patents, etc.
8
Web is huge, full of untrusted documents,
random things, web spam, etc.
Prof. Pier Luca Lanzi
Web Search Challenges
• Web contains many sources of information
§Who should we “trust”?
§Trick: Trustworthy pages may point to each other!
• What is the “best” answer to query “newspaper”?
§No single right answer
§Trick: Pages that actually know about newspapers
might all be pointing to many newspapers
9
Prof. Pier Luca Lanzi
Page Rank
10
Prof. Pier Luca Lanzi
https://www.youtube.com/watch?v=wPMZr9RDmVk
Prof. Pier Luca Lanzi
Page Rank Algorithm
• The underlying idea is to look at links as votes
• A page is more important if it has more links
§In-coming links? Out-going links?
• Intuition
§www.stanford.edu has 23,400 in-links
§www.joe-schmoe.com has one in-link
• Are all in-links equal?
§Links from important pages count more
§Recursive question!
12
Prof. Pier Luca Lanzi
(figure: PageRank scores on a small web graph — B: 38.4, C: 34.3, E: 8.1, F: 3.9, D: 3.9, A: 3.3; the remaining pages: 1.6 each)
Prof. Pier Luca Lanzi
Simple Recursive Formulation
• Each link’s vote is proportional
to the importance of its source page
• If page j with importance rj has n
out-links, each link gets rj/n votes
• Page j’s own importance is
the sum of the votes on its in-links
14
(figure: page j has 3 out-links, so each out-link carries rj/3; with i having 3 out-links and k having 4, rj = ri/3 + rk/4)
Prof. Pier Luca Lanzi
The “Flow” Model
• A “vote” from an important page is worth more
• A page is important if it is pointed to by other important pages
• Define a “rank” rj for page j
where di is the out-degree of node i
• “Flow” equations
ry = ry /2 + ra /2
ra = ry /2 + rm
rm = ra /2
15
rj = Σi→j ri / di
(example graph: y → {y, a}, a → {y, m}, m → {a})
Prof. Pier Luca Lanzi
Solving the Flow Equations
• Three equations, three unknown variables, no constant term
§No unique solution
§All solutions are equivalent up to a scale factor
• An additional constraint (ry+ra+rm=1) forces uniqueness
• Gaussian elimination method works for small examples, but we
need a better method for large web-size graphs
16
We need a different formulation that scales up!
Prof. Pier Luca Lanzi
The Matrix Formulation
• Represent the graph as a transition matrix M
§Suppose page i has di out-links
§If page i links to page j, then Mji = 1/di, otherwise Mji = 0
§M is a “column stochastic matrix” since
the columns sum up to 1
• Given the rank vector r with an entry per page, where ri is the
importance of page i and the ri sum up to one
17
rj = Σi→j ri / di
The flow equation can be written as r = Mr
Prof. Pier Luca Lanzi
The Eigenvector Formulation
• Since the flow equation can be written as r = Mr, the rank vector
r is also an eigenvector of M
• Thus, we can solve for r using a simple iterative scheme
(“power iteration”)
• Power iteration: a simple iterative scheme
§Suppose there are N web pages
§Initialize: r(0) = [1/N,….,1/N]T
§Iterate: r(t+1) = M · r(t)
§Stop when ||r(t+1) – r(t)||1 < ε
18
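The power iteration above can be sketched in a few lines of plain Python on the y/a/m example graph from the flow equations (a minimal sketch, not the lecture's notebook code):

```python
# Power-iteration sketch of PageRank (flow formulation) on the slides'
# three-page example graph: y -> {y, a}, a -> {y, m}, m -> {a}.
def pagerank(out_links, eps=1e-10):
    nodes = sorted(out_links)
    r = {v: 1.0 / len(nodes) for v in nodes}   # r(0) = [1/N, ..., 1/N]
    while True:
        nxt = {v: 0.0 for v in nodes}
        for i in nodes:                        # page i spreads r_i over its out-links
            share = r[i] / len(out_links[i])
            for j in out_links[i]:
                nxt[j] += share                # r_j = sum over i->j of r_i / d_i
        if sum(abs(nxt[v] - r[v]) for v in nodes) < eps:
            return nxt
        r = nxt

graph = {"y": ["y", "a"], "a": ["y", "m"], "m": ["a"]}
ranks = pagerank(graph)
# converges to r_y = 0.4, r_a = 0.4, r_m = 0.2, the unique solution of the
# flow equations under the constraint ry + ra + rm = 1
```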
Prof. Pier Luca Lanzi
The Random Walk Formulation
• Suppose a random surfer at time t is on page i and continues its
navigation by following one of the out-links at random
• At time t+1, the surfer ends up on some page j and from there it
continues the random walk indefinitely
• Let p(t) be the vector whose entry pi(t) is the probability that the surfer is on
page i at time t (p(t) is a probability distribution over pages)
• Then, p(t+1) = Mp(t), so when the walk reaches a state with p(t+1) = p(t) = Mp(t),
19
p(t) is the stationary distribution for the random walk
Prof. Pier Luca Lanzi
Existence and Uniqueness
For graphs that satisfy certain conditions,
the stationary distribution is unique and
eventually will be reached no matter
what the initial probability distribution is
Prof. Pier Luca Lanzi
Hubs and Authorities (HITS)
Prof. Pier Luca Lanzi
Hubs and Authorities
• HITS (Hypertext-Induced Topic Selection)
§Is a measure of importance of pages and
documents, similar to PageRank
§Proposed at around the same time as PageRank (1998)
• Goal: Say we want to find good newspapers
§Don’t just find newspapers. Find “experts”, that is,
people who link in a coordinated way to good newspapers
• The idea is similar, links are viewed as votes
§Page is more important if it has more links
§In-coming links? Out-going links?
22
Prof. Pier Luca Lanzi
Hubs and Authorities
• Each page has 2 scores
• Quality as an expert (hub)
§Total sum of votes of authorities pointed to
• Quality as a content (authority)
§Total sum of votes coming from experts
• Principle of repeated improvement
23
Prof. Pier Luca Lanzi
Hubs and Authorities
• Authorities are pages containing
useful information
§Newspaper home pages
§Course home pages
§Home pages of auto manufacturers
• Hubs are pages that link to authorities
§List of newspapers
§Course bulletin
§List of US auto manufacturers
24
Prof. Pier Luca Lanzi
Counting in-links: Authority
25
(Note: this is an idealized example. In reality the graph is not bipartite and each page
has both a hub and an authority score.)
Each page starts with hub score 1. Authorities collect their votes.
Prof. Pier Luca Lanzi
Counting in-links: Authority
26
Sum of hub
scores of nodes
pointing to NYT.
Each page starts with hub score 1. Authorities collect their votes.
Prof. Pier Luca Lanzi
Expert Quality: Hub
27
Hubs collect authority scores
Sum of authority
scores of nodes that
the node points to.
Prof. Pier Luca Lanzi
Reweighting
28
Authorities again collect
the hub scores
Prof. Pier Luca Lanzi
Mutually Recursive Definition
• A good hub links to many good authorities
• A good authority is linked from many good hubs
• Model using two scores for each node:
§Hub score and Authority score
§Represented as vectors and
29
Prof. Pier Luca Lanzi
The HITS Algorithm
• Initialize scores
• Iterate until convergence:
§Update authority scores
§Update hub scores
§Normalize
• Two vectors a = (a1, …, an) and h = (h1, …, hn), and
the adjacency matrix A, where Aij = 1 if i links to j,
and 0 otherwise
30
Prof. Pier Luca Lanzi
The HITS Algorithm (vector notation)
• Set ai = hi = 1/√n
• Repeat until convergence
§h = Aa
§a = ATh
• Convergence criterion: stop when a and h change by less than a threshold ε
• Under reasonable assumptions about A, HITS converges to
vectors h* and a* where
§h* is the principal eigenvector of matrix A AT
§a* is the principal eigenvector of matrix AT A
31
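The repeated-improvement loop above can be sketched in plain Python (a minimal sketch; the page names "u1", "nyt", "wsj" are made up for illustration, and the update order within an iteration does not affect the fixed point):

```python
# HITS sketch: alternate a = A^T h and h = A a, normalizing each time.
from math import sqrt

def hits(edges, nodes, iters=100):
    h = {v: 1.0 / sqrt(len(nodes)) for v in nodes}   # a_i = h_i = 1/sqrt(n)
    a = dict(h)
    for _ in range(iters):
        a = {v: 0.0 for v in nodes}                  # authority update: a = A^T h
        for i, j in edges:
            a[j] += h[i]
        norm = sqrt(sum(x * x for x in a.values()))
        a = {v: x / norm for v, x in a.items()}
        h = {v: 0.0 for v in nodes}                  # hub update: h = A a
        for i, j in edges:
            h[i] += a[j]
        norm = sqrt(sum(x * x for x in h.values()))
        h = {v: x / norm for v, x in h.items()}
    return h, a

nodes = ["u1", "u2", "nyt", "wsj"]
edges = [("u1", "nyt"), ("u1", "wsj"), ("u2", "nyt")]  # hub -> authority links
h, a = hits(edges, nodes)
# "nyt" ends up with a higher authority score than "wsj",
# since both hubs point to it
```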
Prof. Pier Luca Lanzi
PageRank vs HITS
• PageRank and HITS are two solutions to the same problem
§What is the value of an in-link from u to v?
§In the PageRank model, the value of the link
depends on the links into u
§In the HITS model, it depends on
the value of the other links out of u
• The destinies of PageRank and HITS
after 1998 were very different
32
Prof. Pier Luca Lanzi
Community Detection
33
Prof. Pier Luca Lanzi
We often think of networks as being organized into modules, clusters, communities:
Prof. Pier Luca Lanzi
The goal is typically to find densely linked clusters
Prof. Pier Luca Lanzi
advertiser
query
Micro-Markets in Sponsored Search: find micro-markets by partitioning the
query-to-advertiser graph (Andersen, Lang: Communities from seed sets, 2006)
Prof. Pier Luca Lanzi
Clusters in Movies-to-Actors graph
(Andersen, Lang: Communities from seed sets, 2006)
Prof. Pier Luca Lanzi
Discovering social circles, circles of trust
(McAuley, Leskovec: Discovering social circles in ego networks, 2012)
Prof. Pier Luca Lanzi
how can we identify communities?
Prof. Pier Luca Lanzi
Girvan-Newman Method
• Define edge betweenness as the number of shortest paths
passing over the edge
• Divisive hierarchical clustering based on the notion of edge
betweenness
• The Algorithm
§Start with an undirected graph
§Repeat until no edges are left
Calculate betweenness of edges
Remove edges with highest betweenness
• Connected components are communities
• Gives a hierarchical decomposition of the network
40
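The betweenness computation at the heart of the algorithm can be sketched in pure Python using a Brandes-style BFS accumulation (a sketch for small, connected, unweighted, undirected graphs; not the course's own implementation):

```python
# Edge betweenness = number of shortest paths passing over each edge,
# computed with one BFS per source and back-propagated path dependencies.
from collections import deque

def edge_betweenness(adj):
    bw = {}
    for s in adj:
        dist = {s: 0}; sigma = {s: 1.0}        # sigma = #shortest paths from s
        preds = {v: [] for v in adj}
        order = []; q = deque([s])
        while q:                               # BFS from source s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1; sigma[w] = 0.0; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}          # accumulate dependencies back-to-front
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                e = tuple(sorted((v, w)))
                bw[e] = bw.get(e, 0.0) + c
                delta[v] += c
    return {e: b / 2.0 for e, b in bw.items()} # each pair counted from both ends

# two triangles joined by a bridge: all 3*3 cross pairs use the bridge
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
bw = edge_betweenness(adj)
# bw[(3, 4)] == 9.0, the maximum: Girvan-Newman removes this edge first
```

Girvan-Newman then removes the highest-betweenness edge and recomputes, which is why the method is expensive on large graphs.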
Prof. Pier Luca Lanzi
Need to re-compute
betweenness at every step
Prof. Pier Luca Lanzi
(figure: Steps 1–3 of edge removal and the resulting hierarchical network
decomposition)
Prof. Pier Luca Lanzi
Communities in physics collaborations
Prof. Pier Luca Lanzi
how to select the number of clusters?
Prof. Pier Luca Lanzi
Network Communities
• Communities are viewed as sets of tightly connected nodes
• We define modularity as a measure
of how well a network is partitioned
into communities
• Given a partitioning of the network
into a set of disjoint groups S we define the
modularity Q as
Q ∝ Σs∈S [ (# edges within group s) – (expected # edges within group s) ]
45
Need a null model to define the expected number of edges!
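With the standard configuration-model null model (expected edges between i and j: ki·kj/2m), Q can be computed directly; a minimal Python sketch under that assumption:

```python
# Modularity Q with the configuration-model null model:
# Q = sum over communities c of [ (intra edges of c)/m - (degree sum of c / 2m)^2 ]
def modularity(edges, membership):
    m = len(edges)
    deg, intra, dsum = {}, {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if membership[u] == membership[v]:
            c = membership[u]
            intra[c] = intra.get(c, 0) + 1     # edges inside community c
    for v, d in deg.items():
        c = membership[v]
        dsum[c] = dsum.get(c, 0) + d           # total degree of community c
    return sum(intra.get(c, 0) / m - (dsum[c] / (2 * m)) ** 2 for c in dsum)

# two triangles joined by one edge, partitioned into the two triangles
edges = [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (3, 4)]
part = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
q = modularity(edges, part)
# q == 5/14, about 0.357: a fairly strong community structure
```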
Prof. Pier Luca Lanzi
Modularity is useful for selecting the number of clusters
(plot: modularity Q vs. number of clusters)
Prof. Pier Luca Lanzi
Example
Prof. Pier Luca Lanzi
Example #1 – Compute betweenness
# First we load the igraph package
library(igraph)
# let's generate two networks and merge them into one graph.
g2 <- barabasi.game(50, p=2, directed=F)
g1 <- watts.strogatz.game(1, size=100, nei=5, p=0.05)
g <- graph.union(g1,g2)
# let's remove multi-edges and loops
g <- simplify(g)
plot(g)
# compute betweenness
ebc <- edge.betweenness.community(g, directed=F)
48
Prof. Pier Luca Lanzi
Prof. Pier Luca Lanzi
Example #1 – Build Dendrogram and
Compute Modularity
mods <- sapply(0:ecount(g), function(i){
g2 <- delete.edges(g, ebc$removed.edges[seq(length=i)])
cl <- clusters(g2)$membership
# compute modularity on the original graph g, not on the induced one g2
# (March 13, 2014 - thank you to Augustin Luna for detecting this typo)
modularity(g, cl)
})
# we can now plot all modularities
plot(mods, pch=20)
50
Prof. Pier Luca Lanzi
51
Prof. Pier Luca Lanzi
Example #1 – Select the cut
# Now, let's color the nodes according to their membership
g2 <- delete.edges(g, ebc$removed.edges[seq(length=which.max(mods)-1)])
V(g)$color <- clusters(g2)$membership
# Let's choose a layout for the graph
g$layout <- layout.fruchterman.reingold
# plot it
plot(g, vertex.label=NA)
52
Prof. Pier Luca Lanzi
Prof. Pier Luca Lanzi
Spectral Clustering
Prof. Pier Luca Lanzi
What Makes a Good Cluster?
• Undirected graph G(V,E)
• Partitioning task
§Divide the vertices into two disjoint
groups A and B = V \ A
• Questions
§How can we define a “good partition” of G?
§How can we efficiently identify such a partition?
55
Prof. Pier Luca Lanzi
What makes a good partition?
Maximize the number of within-group connections
Minimize the number of between-group connections
Prof. Pier Luca Lanzi
Graph Cuts
• Express partitioning objectives as a function of
the “edge cut” of the partition
• Cut is defined as the set of edges with only one vertex in a group
• In the example, cut(A,B) = 2; more generally, cut(A,B) = Σi∈A, j∈B wij
57
Prof. Pier Luca Lanzi
Graph Cut Criterion
• Partition quality
§Minimize weight of connections between groups,
i.e., arg minA,B cut(A,B)
• Degenerate case: the minimum cut may simply split off a single node
• Problems
§Only considers external cluster connections
§Does not consider internal cluster connectivity
58
“Optimal cut”
Minimum cut
Prof. Pier Luca Lanzi
Graph Partitioning Criteria:
Normalized cut (Conductance)
• Connectivity of the group to the rest of the network should be
relative to the density of the group: ncut(A,B) = cut(A,B)/vol(A) + cut(A,B)/vol(B)
• where vol(A) is the total weight of the edges that have at least
one endpoint in A
59
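The cut and conductance can be computed directly on the 6-node example graph; a short Python sketch, using the common convention vol(A) = sum of the degrees of the nodes in A:

```python
# Conductance phi(A) = cut(A,B) / min(vol(A), vol(B)).
def conductance(edges, A):
    A = set(A)
    cut = sum(1 for u, v in edges if (u in A) != (v in A))  # edges across A|B
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    vol_A = sum(d for v, d in deg.items() if v in A)        # degree sum in A
    vol_B = sum(d for v, d in deg.items() if v not in A)
    return cut / min(vol_A, vol_B)

# the slide graph with A = {1,2,3}, B = {4,5,6}, cut(A,B) = 2
edges = [(1, 2), (1, 3), (1, 5), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
phi = conductance(edges, {1, 2, 3})
# phi == 2 / 8 == 0.25, since vol(A) = vol(B) = 8
```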
Prof. Pier Luca Lanzi
Prof. Pier Luca Lanzi
Prof. Pier Luca Lanzi
Spectral Graph Partitioning
• Let A be the adjacency matrix of the graph G with n nodes
§Aij is 1 if there is an edge between i and j, 0 otherwise
§x a vector of n components (x1, …, xn) that represents
labels/values assigned to each node of G
§Ax returns a vector in which each component j is the sum of
the labels of the neighbors of node j
• Spectral Graph Theory
§Analyze the spectrum of G, that is, the eigenvectors xi of the
graph corresponding to the eigenvalues Λ of G sorted in
increasing order
§Λ = { λ1, …, λn} such that λ1≤λ2 ≤… ≤λn
62
Prof. Pier Luca Lanzi
Example: d-regular Graph
• Suppose that all the nodes in G have degree d and G is
connected
• What are the eigenvalues/eigenvectors of G? Ax=λx
§Ax returns the sum of the labels of each node’s neighbors and
since each node has exactly d neighbors, x = (1, …, 1) is an
eigenvector and d is an eigenvalue
• What if G is not connected but still d-regular?
• A vector with all ones in component A and all zeros in component B (or
vice versa) is still an eigenvector of A with eigenvalue d
63
Prof. Pier Luca Lanzi
Example: d-regular Graph (not
connected)
• What if G has two separate components A and B
but is still d-regular?
• A vector with all ones in A and all zeros in B (or vice versa)
is still an eigenvector with eigenvalue d, so λ1 = λ2 = d
• Underlying intuition: if G is disconnected, λ1 = λ2;
if G is connected but has two weakly linked components, λ1 ≈ λ2
64
Prof. Pier Luca Lanzi
Spectral Graph Partitioning
• Adjacency matrix A (nxn)
§Symmetric
§Real and orthogonal
eigenvectors
• Degree Matrix
§nxn diagonal matrix
§Dii = degree of node i
65
Adjacency matrix A:
  1 2 3 4 5 6
1 0 1 1 0 1 0
2 1 0 1 0 0 0
3 1 1 0 1 0 0
4 0 0 1 0 1 1
5 1 0 0 1 0 1
6 0 0 0 1 1 0
Degree matrix D:
  1 2 3 4 5 6
1 3 0 0 0 0 0
2 0 2 0 0 0 0
3 0 0 3 0 0 0
4 0 0 0 3 0 0
5 0 0 0 0 3 0
6 0 0 0 0 0 2
Prof. Pier Luca Lanzi
Graph Laplacian Matrix
• Computed as L = D-A
§nxn symmetric matrix
§x=(1,…,1) is a trivial eigenvector since Lx=0 so λ1=0
• Important properties of L
§Eigenvalues are non-negative
real numbers
§Eigenvectors are real
and orthogonal
66
1 2 3 4 5 6
1 3 -1 -1 0 -1 0
2 -1 2 -1 0 0 0
3 -1 -1 3 -1 0 0
4 0 0 -1 3 -1 -1
5 -1 0 0 -1 3 -1
6 0 0 0 -1 -1 2
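These Laplacian properties are easy to verify numerically; a quick pure-Python check on the 6-node example graph:

```python
# Build L = D - A for the 6-node example graph and check that the all-ones
# vector is an eigenvector with eigenvalue 0 (each row of L sums to zero).
A = [[0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [1, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
n = len(A)
D = [[sum(A[i]) if i == j else 0 for j in range(n)] for i in range(n)]
L = [[D[i][j] - A[i][j] for j in range(n)] for i in range(n)]
ones = [1] * n
Lx = [sum(L[i][j] * ones[j] for j in range(n)) for i in range(n)]
# Lx == [0, 0, 0, 0, 0, 0], i.e. L(1,...,1) = 0, so lambda_1 = 0
```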
Prof. Pier Luca Lanzi
λ2 as optimization problem
• For a symmetric matrix M: λ2 = min xTMx over unit vectors x
orthogonal to the first eigenvector w1
• What is the meaning of xTLx on G? We can show that
xTLx = Σ(i,j)∈E (xi – xj)2
• For L the first eigenvector is (1, …, 1), so the minimizing x must be a
unit vector orthogonal to (1, …, 1)
67
Prof. Pier Luca Lanzi
λ2 as optimization problem
• Since x must be a unit vector orthogonal to the all-ones
vector (1, …, 1), we get
λ2 = min Σ(i,j)∈E (xi – xj)2 such that Σi xi = 0 and Σi xi2 = 1
68
Prof. Pier Luca Lanzi
(figure: the entries xi, xj of the eigenvector of λ2 balance around 0 to minimize Σ(i,j)∈E (xi – xj)2)
Prof. Pier Luca Lanzi
Finding the Optimal Cut
• Express the partition (A,B) as a vector y where,
§yi = +1 if node i belongs to A
§yi = -1 if node i belongs to B
• We can minimize the cut of the partition by finding a non-trivial
vector y that minimizes f(y) = Σ(i,j)∈E (yi – yj)2
70
Can’t solve exactly! Let’s relax y and
allow it to take any real value.
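On a graph this small, the discrete objective can even be brute-forced, which makes the connection between f(y) and the cut concrete (a sketch; feasible only because the graph is tiny):

```python
# Assign y_i = +1 on A and y_i = -1 on B; then f(y) = sum over edges of
# (y_i - y_j)^2 = 4 * cut(A,B), since each cut edge contributes (1-(-1))^2 = 4.
from itertools import combinations

edges = [(1, 2), (1, 3), (1, 5), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
nodes = range(1, 7)

def f(A):
    # within-group edges contribute 0; cut edges contribute 4
    return sum(4 for u, v in edges if (u in A) != (v in A))

# search balanced partitions (|A| = 3) only
best = min(combinations(nodes, 3), key=lambda A: f(set(A)))
# best == (1, 2, 3) with f = 8, i.e. cut(A,B) = 2, the partition from the slides
```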
Prof. Pier Luca Lanzi
Rayleigh Theorem
• We know that λ2 = min f(y) over unit vectors y orthogonal to (1, …, 1)
• The minimum value of f(y) is given by the second smallest
eigenvalue λ2 of the Laplacian matrix L
• Thus, the optimal solution for y is given by the corresponding
eigenvector x, referred to as the Fiedler vector
71
Prof. Pier Luca Lanzi
Spectral Clustering Algorithms
1. Pre-processing
§Construct a matrix representation of the graph
2. Decomposition
§Compute eigenvalues and eigenvectors of the matrix
§Map each point to a lower-dimensional representation based
on one or more eigenvectors
3. Grouping
§Assign points to two or more clusters, based on the new
representation
72
Prof. Pier Luca Lanzi
Spectral Partitioning Algorithm
• Pre-processing:
§Build Laplacian
matrix L of the
graph
• Decomposition:
§Find the eigenvalues λ
and eigenvectors x
of the matrix L
§Map each vertex to the
corresponding component
of the eigenvector of λ2
73
λ = (5.0, 4.0, 3.0, 3.0, 1.0, 0.0)
The eigenvector of λ2 = 1.0 is x2 ≈ (–0.66, –0.35, –0.34, 0.33, 0.62, 0.31)
Laplacian matrix L:
  1 2 3 4 5 6
1 3 -1 -1 0 -1 0
2 -1 2 -1 0 0 0
3 -1 -1 3 -1 0 0
4 0 0 -1 3 -1 -1
5 -1 0 0 -1 3 -1
6 0 0 0 -1 -1 2
Prof. Pier Luca Lanzi
Spectral Partitioning Algorithm
• Grouping:
§Sort components of reduced 1-dimensional vector
§Identify clusters by splitting the sorted vector in two
• How to choose a splitting point?
§Naïve approaches: split at 0 or median value
§More expensive approaches: Attempt to minimize normalized
cut in 1-dimension (sweep over ordering of nodes induced by
the eigenvector)
74
x2 ≈ (–0.66, –0.35, –0.34, 0.33, 0.62, 0.31)
Split at 0:
Cluster A (positive entries): 0.33, 0.62, 0.31
Cluster B (negative entries): –0.66, –0.35, –0.34
Prof. Pier Luca Lanzi
Example: Spectral Partitioning
75
(plot: value of x2 vs. rank in x2)
Prof. Pier Luca Lanzi
Example: Spectral Partitioning
76
(plots: value of x2 vs. rank in x2; components of x2)
Prof. Pier Luca Lanzi
Example: Spectral partitioning
77
(plots: components of x1; components of x3)
Prof. Pier Luca Lanzi
Example using R
Prof. Pier Luca Lanzi
Example #1
ComputeDegreeMatrix <- function(v) {
n <- max(dim(v))
m <- diag(n)
d = v %*% rep(1,n)
for(i in 1:n ) {
m[i,i] = d[i,1]
}
return(m)
}
ComputeLaplacianMatrix <- function(v) {
D = ComputeDegreeMatrix(v)
L = D-v
return(L)
}
79
Prof. Pier Luca Lanzi
Example #1
####   2 ----6
####  / \    |
#### 1   4   |
####  \ / \  |
####   3   5
G = rbind(
c(0, 1, 1, 0, 0, 0),
c(1, 0, 0, 1, 0, 1),
c(1, 0, 0, 1, 0, 0),
c(0, 1, 1, 0, 1, 0),
c(0, 0, 0, 1, 0, 1),
c(0, 1, 0, 0, 1, 0)
)
L = ComputeLaplacianMatrix(G)
E = eigen(L)
second_eigen_value = E$values[max(dim(L))-1]
second_eigen_vector = E$vectors[, max(dim(L))-1]
# second_eigen_vector:
# 5.000000e-01 1.165734e-15 5.000000e-01 4.718448e-16 -5.000000e-01 -5.000000e-01
80
Prof. Pier Luca Lanzi
Example #2
G = rbind(
c(0, 0, 1, 0, 0, 1, 0, 0),
c(0, 0, 0, 0, 1, 0, 0, 1),
c(1, 0, 0, 1, 0, 1, 0, 0),
c(0, 0, 1, 0, 1, 0, 1, 0),
c(0, 1, 0, 1, 0, 0, 0, 1),
c(1, 0, 1, 0, 0, 0, 1, 0),
c(0, 0, 0, 1, 0, 1, 0, 1),
c(0, 1, 0, 0, 1, 0, 1, 0)
)
L = ComputeLaplacianMatrix(G)
E = eigen(L)
second_eigen_value = E$values[max(dim(L))-1]
second_eigen_vector = E$vectors[, max(dim(L))-1]
# second_eigen_vector:
# 5.000000e-01 -5.000000e-01 3.535534e-01 -3.885781e-16 -3.535534e-01
# 3.535534e-01 -4.996004e-16 -3.535534e-01
81
Prof. Pier Luca Lanzi
How Do We Partition a Graph into k
Clusters?
• Two basic approaches:
• Recursive bi-partitioning [Hagen et al., ’92]
§Recursively apply bi-partitioning algorithm
in a hierarchical divisive manner
§Disadvantages: inefficient, unstable
• Cluster multiple eigenvectors [Shi-Malik, ’00]
§Build a reduced space from multiple eigenvectors
§Commonly used in recent papers
§A preferable approach…
82
82
Prof. Pier Luca Lanzi
Why Use Multiple Eigenvectors?
• Approximates the optimal cut [Shi-Malik, ’00]
§Can be used to approximate optimal k-way normalized cut
• Emphasizes cohesive clusters
§Increases the unevenness in the distribution of the data
§Associations between similar points are amplified, associations
between dissimilar points are attenuated
§The data begins to “approximate a clustering”
• Well-separated space
§Transforms data to a new “embedded space”,
consisting of k orthogonal basis vectors
• Multiple eigenvectors prevent instability due to information loss
83
Prof. Pier Luca Lanzi
Searching for Small Communities
(Trawling)
Prof. Pier Luca Lanzi
Searching for small communities in
the Web graph (Trawling)
• Trawling
§ What is the signature of a community in a Web graph?
§ The underlying intuition is that small communities involve
many people talking about the same things
§ Use this to define “topics”: what do the same people on
the left talk about on the right?
• More formally
§ Enumerate complete bipartite subgraphs Ks,t
§ Ks,t has s nodes on the “left” and t nodes on the “right”
§ Each of the s left nodes links to all t right nodes,
forming a complete bipartite subgraph
85
[Kumar et al. ‘99]
(figure: a dense 2-layer graph K3,4 with left set X and right set Y)
Prof. Pier Luca Lanzi
Mining Bipartite Ks,t using Frequent
Itemsets
• Searching for such complete bipartite graphs can be viewed as a
frequent itemset mining problem
• View each node i as the set Si of nodes that i points to
• A Ks,t is a set Y of size t that occurs in s of the sets Si
• Looking for Ks,t is thus equivalent to setting the frequency
threshold to s and looking at layer t
(i.e., all frequent itemsets of size t)
86
[Kumar et al. ‘99]
(figure: node i points to a, b, c, d, so Si = {a,b,c,d}; nodes i, j, k form the left set X with |X| = s, and a, b, c, d the right set Y with |Y| = t; s = minimum support, t = itemset size)
Prof. Pier Luca Lanzi
View each node i as the set Si of nodes that i points to (e.g., Si = {a,b,c,d}).
Find frequent itemsets with minimum support s and itemset size t.
A Ks,t is a set Y of size t that occurs in s of the sets Si.
Say we find a frequent itemset Y = {a,b,c} with support s: then there are
s nodes (here x, y, z) that all link to all of {a,b,c}, so we found a Ks,t!
Prof. Pier Luca Lanzi
Example
• Itemsets
§ a = {b,c,d}
§ b = {d}
§ c = {b,d,e,f}
§ d = {e,f}
§ e = {b,d}
§ f = {}
• Support threshold s=2
§ {b,d}: support 3 (in a, c, e)
§ {e,f}: support 2 (in c, d)
• And we just found 2 bipartite subgraphs: {a,c,e} × {b,d} and {c,d} × {e,f}
88
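The reduction to frequent-itemset mining can be sketched directly on this example (a brute-force sketch that enumerates all size-t candidates, so it only scales to small item sets):

```python
# Trawling as frequent-itemset mining: view each node i as the set Si of nodes
# it points to, and find every set Y of size t contained in at least s of the
# Si. Each such Y (plus its supporting nodes X) is a K_{s,t}.
from itertools import combinations

def find_kst(sets, s, t):
    items = sorted({x for S in sets.values() for x in S})
    result = {}
    for Y in combinations(items, t):          # candidate right-hand sides
        X = [i for i, S in sets.items() if set(Y) <= S]
        if len(X) >= s:                       # support >= s: found a K_{s,t}
            result[Y] = X
    return result

# the itemsets from the example slide
Si = {"a": {"b", "c", "d"}, "b": {"d"}, "c": {"b", "d", "e", "f"},
      "d": {"e", "f"}, "e": {"b", "d"}, "f": set()}
ks = find_kst(Si, s=2, t=2)
# ("b","d") has support 3 (nodes a, c, e); ("e","f") has support 2 (c, d)
```

In practice one would use a proper frequent-itemset algorithm (e.g. Apriori-style pruning) instead of enumerating all candidates.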
Prof. Pier Luca Lanzi
Run the Python notebooks
for this lecture
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaPier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Pier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesPier Luca Lanzi
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringPier Luca Lanzi
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesPier Luca Lanzi
 
VDP2016 - Lecture 16 Rendering pipeline
VDP2016 - Lecture 16 Rendering pipelineVDP2016 - Lecture 16 Rendering pipeline
VDP2016 - Lecture 16 Rendering pipelinePier Luca Lanzi
 
VDP2016 - Lecture 15 PCG with Unity
VDP2016 - Lecture 15 PCG with UnityVDP2016 - Lecture 15 PCG with Unity
VDP2016 - Lecture 15 PCG with UnityPier Luca Lanzi
 
VDP2016 - Lecture 14 Procedural content generation
VDP2016 - Lecture 14 Procedural content generationVDP2016 - Lecture 14 Procedural content generation
VDP2016 - Lecture 14 Procedural content generationPier Luca Lanzi
 
VDP2016 - Lecture 13 Data driven game design
VDP2016 - Lecture 13 Data driven game designVDP2016 - Lecture 13 Data driven game design
VDP2016 - Lecture 13 Data driven game designPier Luca Lanzi
 

More from Pier Luca Lanzi (14)

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision trees
 
VDP2016 - Lecture 16 Rendering pipeline
VDP2016 - Lecture 16 Rendering pipelineVDP2016 - Lecture 16 Rendering pipeline
VDP2016 - Lecture 16 Rendering pipeline
 
VDP2016 - Lecture 15 PCG with Unity
VDP2016 - Lecture 15 PCG with UnityVDP2016 - Lecture 15 PCG with Unity
VDP2016 - Lecture 15 PCG with Unity
 
VDP2016 - Lecture 14 Procedural content generation
VDP2016 - Lecture 14 Procedural content generationVDP2016 - Lecture 14 Procedural content generation
VDP2016 - Lecture 14 Procedural content generation
 
VDP2016 - Lecture 13 Data driven game design
VDP2016 - Lecture 13 Data driven game designVDP2016 - Lecture 13 Data driven game design
VDP2016 - Lecture 13 Data driven game design
 

Recently uploaded

Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfchloefrazer622
 
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...PsychoTech Services
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfAyushMahapatra5
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 

Recently uploaded (20)

Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdf
 
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 

DMTM Lecture 18 Graph mining

  • 1. Prof. Pier Luca Lanzi Graph Mining Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
  • 2. Prof. Pier Luca Lanzi References • Jure Leskovec, Anand Rajaraman, Jeff Ullman. Mining of Massive Datasets, Chapter 5 & Chapter 10 • Book and slides are available from http://www.mmds.org 2
  • 3. Prof. Pier Luca Lanzi Facebook social graph 4-degrees of separation [Backstrom-Boldi-Rosa-Ugander-Vigna, 2011]
  • 4. Prof. Pier Luca Lanzi Connections between political blogs Polarization of the network [Adamic-Glance, 2005]
  • 5. Prof. Pier Luca Lanzi Citation networks and Maps of science [Börner et al., 2012]
  • 6. Prof. Pier Luca Lanzi Web as a graph: pages are nodes, edges are links
  • 7. Prof. Pier Luca Lanzi How is the Web Organized? • Initial approaches §Human curated Web directories §Yahoo, DMOZ, LookSmart • Then, Web search §Information Retrieval investigates: Find relevant docs in a small and trusted set §Newspaper articles, Patents, etc. 8 Web is huge, full of untrusted documents, random things, web spam, etc.
  • 8. Prof. Pier Luca Lanzi Web Search Challenges • Web contains many sources of information §Who should we “trust”? §Trick: Trustworthy pages may point to each other! • What is the “best” answer to query “newspaper”? §No single right answer §Trick: Pages that actually know about newspapers might all be pointing to many newspapers 9
  • 9. Prof. Pier Luca Lanzi Page Rank 10
  • 10. Prof. Pier Luca Lanzi https://www.youtube.com/watch?v=wPMZr9RDmVk
  • 11. Prof. Pier Luca Lanzi Page Rank Algorithm • The underlying idea is to look at links as votes • A page is more important if it has more links §In-coming links? Out-going links? • Intuition §www.stanford.edu has 23,400 in-links §www.joe-schmoe.com has one in-link • Are all in-links equal? §Links from important pages count more §Recursive question! 12
  • 12. Prof. Pier Luca Lanzi Example PageRank scores: B 38.4, C 34.3, E 8.1, F 3.9, D 3.9, A 3.3; each remaining page scores 1.6
  • 13. Prof. Pier Luca Lanzi Simple Recursive Formulation • Each link’s vote is proportional to the importance of its source page • If page j with importance rj has n out-links, each link gets rj/n votes • Page j’s own importance is the sum of the votes on its in-links §Example: if page j is pointed to by page i (3 out-links) and page k (4 out-links), then rj = ri/3 + rk/4 14
  • 14. Prof. Pier Luca Lanzi The “Flow” Model • A “vote” from an important page is worth more • A page is important if it is pointed to by other important pages • Define a “rank” rj for page j as rj = Σi→j ri/di, where di is the out-degree of node i • “Flow” equations for the example graph on pages y, a, m: ry = ry/2 + ra/2, ra = ry/2 + rm, rm = ra/2 15
  • 15. Prof. Pier Luca Lanzi Solving the Flow Equations • Three equations, three unknowns, no constant term §No unique solution §All solutions are equivalent modulo a scale factor • An additional constraint (ry+ra+rm=1) forces uniqueness • Gaussian elimination works for small examples, but we need a better method for large web-size graphs 16 We need a different formulation that scales up!
  • 16. Prof. Pier Luca Lanzi The Matrix Formulation • Represent the graph as a transition matrix M §Suppose page i has di out-links §If page i links to page j, Mji is set to 1/di, else Mji = 0 §M is a “column stochastic matrix” since the columns sum up to 1 • Given the rank vector r with an entry per page, where ri is the importance of page i and the ri sum up to one, the flow equation rj = Σi→j ri/di can be written as r = Mr 17
  • 17. Prof. Pier Luca Lanzi The Eigenvector Formulation • Since the flow equation can be written as r = Mr, the rank vector r is also an eigenvector of M • Thus, we can solve for r using a simple iterative scheme (“power iteration”) • Power iteration: a simple iterative scheme §Suppose there are N web pages §Initialize: r(0) = [1/N,…,1/N]T §Iterate: r(t+1) = M · r(t) §Stop when |r(t+1) – r(t)|1 < ε 18
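The power-iteration scheme above can be sketched in a few lines of plain Python (an illustration, not code from the slides), using the y/a/m example from the flow model, where y links to {y, a}, a links to {y, m}, and m links to {a}:

```python
# Column-stochastic transition matrix M: M[j][i] = 1/d_i if page i links to page j.
M = [
    [0.5, 0.5, 0.0],   # r_y = r_y/2 + r_a/2
    [0.5, 0.0, 1.0],   # r_a = r_y/2 + r_m
    [0.0, 0.5, 0.0],   # r_m = r_a/2
]

n = 3
r = [1.0 / n] * n                        # r(0) = [1/N, ..., 1/N]
for _ in range(100):
    r_new = [sum(M[j][i] * r[i] for i in range(n)) for j in range(n)]
    # Stop when the L1 distance between successive iterates is tiny
    if sum(abs(a - b) for a, b in zip(r_new, r)) < 1e-12:
        r = r_new
        break
    r = r_new

print(r)  # converges to r_y = 0.4, r_a = 0.4, r_m = 0.2
```

Since M is column stochastic, each iteration preserves the total rank, so the result still sums to 1 and satisfies the three flow equations.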
  • 18. Prof. Pier Luca Lanzi The Random Walk Formulation • Suppose that a random surfer at time t is on page i and continues its navigation by following one of the out-links at random • At time t+1, it ends up on page j, and from there it continues the random surfing indefinitely • Let p(t) be the vector of probabilities pi(t) that the surfer is on page i at time t (p(t) is the probability distribution over pages) • Then p(t+1) = Mp(t); when p(t+1) = p(t), p(t) is the stationary distribution for the random walk 19
  • 19. Prof. Pier Luca Lanzi Existence and Uniqueness For graphs that satisfy certain conditions, the stationary distribution is unique and eventually will be reached no matter what the initial probability distribution is
  • 20. Prof. Pier Luca Lanzi Hubs and Authorities (HITS)
  • 21. Prof. Pier Luca Lanzi Hubs and Authorities • HITS (Hypertext-Induced Topic Selection) §Is a measure of importance of pages and documents, similar to PageRank §Proposed at around the same time as PageRank (1998) • Goal: Say we want to find good newspapers §Don’t just find newspapers. Find “experts”, that is, people who link in a coordinated way to good newspapers • The idea is similar, links are viewed as votes §A page is more important if it has more links §In-coming links? Out-going links? 22
  • 22. Prof. Pier Luca Lanzi Hubs and Authorities • Each page has 2 scores • Quality as an expert (hub) §Total sum of votes of authorities pointed to • Quality as a content (authority) §Total sum of votes coming from experts • Principle of repeated improvement 23
  • 23. Prof. Pier Luca Lanzi Hubs and Authorities • Authorities are pages containing useful information §Newspaper home pages §Course home pages §Home pages of auto manufacturers • Hubs are pages that link to authorities §List of newspapers §Course bulletin §List of US auto manufacturers 24
  • 24. Prof. Pier Luca Lanzi Counting in-links: Authority 25 (Note this is an idealized example. In reality the graph is not bipartite and each page has both a hub and an authority score) Each page starts with hub score 1. Authorities collect their votes
  • 25. Prof. Pier Luca Lanzi Counting in-links: Authority 26 Sum of hub scores of nodes pointing to NYT. Each page starts with hub score 1. Authorities collect their votes 26
  • 26. Prof. Pier Luca Lanzi Expert Quality: Hub 27 Hubs collect authority scores Sum of authority scores of nodes that the node points to. 27
  • 27. Prof. Pier Luca Lanzi Reweighting 28 Authorities again collect the hub scores 28
  • 28. Prof. Pier Luca Lanzi Mutually Recursive Definition • A good hub links to many good authorities • A good authority is linked from many good hubs • Model using two scores for each node: §Hub score and Authority score §Represented as vectors h and a 29
  • 29. Prof. Pier Luca Lanzi The HITS Algorithm • Initialize scores • Iterate until convergence: §Update authority scores §Update hub scores §Normalize • Two vectors a = (a1, …, an) and h = (h1, …, hn) and the adjacency matrix A, with Aij = 1 if i links to j, 0 otherwise 30
  • 30. Prof. Pier Luca Lanzi The HITS Algorithm (vector notation) • Set ai = hi = 1/√n • Repeat until convergence §h = Aa §a = ATh • Convergence criteria • Under reasonable assumptions about A, HITS converges to vectors h* and a* where §h* is the principal eigenvector of matrix A AT §a* is the principal eigenvector of matrix AT A 31
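The repeated-improvement loop in vector notation can be sketched in plain Python (an illustration, not code from the slides). The tiny graph below is made up for the example: pages 0 and 1 act as hubs, both linking to authorities 2 and 3.

```python
from math import sqrt

# A[i][j] = 1 if page i links to page j
A = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
n = 4
h = [1.0 / sqrt(n)] * n    # hub scores, initialized to 1/sqrt(n)
a = [1.0 / sqrt(n)] * n    # authority scores

for _ in range(50):
    # authority update: a = A^T h, then normalize to unit length
    a = [sum(A[i][j] * h[i] for i in range(n)) for j in range(n)]
    norm = sqrt(sum(x * x for x in a)) or 1.0
    a = [x / norm for x in a]
    # hub update: h = A a, then normalize to unit length
    h = [sum(A[i][j] * a[j] for j in range(n)) for i in range(n)]
    norm = sqrt(sum(x * x for x in h)) or 1.0
    h = [x / norm for x in h]
```

On this graph the scores converge immediately: pages 2 and 3 share all the authority mass, pages 0 and 1 share all the hub mass, matching the principal eigenvectors of A·Aᵀ and Aᵀ·A.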
  • 31. Prof. Pier Luca Lanzi PageRank vs HITS • PageRank and HITS are two solutions to the same problem §What is the value of an in-link from u to v? §In the PageRank model, the value of the link depends on the links into u §In the HITS model, it depends on the value of the other links out of u • The destinies of PageRank and HITS after 1998 were very different 32
  • 32. Prof. Pier Luca Lanzi Community Detection 33
  • 33. Prof. Pier Luca Lanzi We often think of networks being organized into modules, clusters, communities:
  • 34. Prof. Pier Luca Lanzi The goal is typically to find densely linked clusters
  • 35. Prof. Pier Luca Lanzi advertiser query Micro-Markets in Sponsored Search: find micro-markets by partitioning the query-to-advertiser graph (Andersen, Lang: Communities from seed sets, 2006)
  • 36. Prof. Pier Luca Lanzi Clusters in Movies-to-Actors graph (Andersen, Lang: Communities from seed sets, 2006)
  • 37. Prof. Pier Luca Lanzi Discovering social circles, circles of trust (McAuley, Leskovec: Discovering social circles in ego networks, 2012)
  • 38. Prof. Pier Luca Lanzi how can we identify communities?
  • 39. Prof. Pier Luca Lanzi Girvan-Newman Method • Define edge betweenness as the number of shortest paths passing over the edge • Divisive hierarchical clustering based on the notion of edge betweenness • The Algorithm §Start with an undirected graph §Repeat until no edges are left Calculate betweenness of edges Remove edges with highest betweenness • Connected components are communities • Gives a hierarchical decomposition of the network 40
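The edge-betweenness computation at the heart of Girvan-Newman can be sketched in plain Python (an illustration, not code from the slides). This brute-force version enumerates all shortest paths between every pair of nodes and splits one unit of credit per pair across the paths' edges; it is only viable for tiny graphs, while real implementations use Brandes' algorithm.

```python
from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    # BFS distances from s, then enumerate all shortest s-t paths via the BFS DAG
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return []
    paths = []
    def walk(path):
        u = path[-1]
        if u == t:
            paths.append(list(path))
            return
        for v in adj[u]:
            if dist.get(v) == dist[u] + 1:
                walk(path + [v])
    walk([s])
    return paths

def edge_betweenness(adj):
    # Each node pair contributes 1 unit of credit, split evenly over its shortest paths
    bw = {}
    for s, t in combinations(adj, 2):
        paths = shortest_paths(adj, s, t)
        if not paths:
            continue
        credit = 1.0 / len(paths)
        for p in paths:
            for u, v in zip(p, p[1:]):
                e = tuple(sorted((u, v)))
                bw[e] = bw.get(e, 0.0) + credit
    return bw

# Two triangles {0,1,2} and {3,4,5} joined by the bridge (2,3):
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
bw = edge_betweenness(adj)
print(max(bw, key=bw.get))  # the bridge (2, 3) has the highest betweenness: 9.0
```

Removing the highest-betweenness edge (the bridge) disconnects the graph into the two triangles, which become the first two communities of the hierarchical decomposition.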
  • 40. Prof. Pier Luca Lanzi Need to re-compute betweenness at every step
  • 41. Prof. Pier Luca Lanzi Step 1: Step 2: Step 3: Hierarchical network decomposition
  • 42. Prof. Pier Luca Lanzi Communities in physics collaborations
  • 43. Prof. Pier Luca Lanzi how to select the number of clusters?
  • 44. Prof. Pier Luca Lanzi Network Communities • Communities are viewed as sets of tightly connected nodes • We define modularity as a measure of how well a network is partitioned into communities • Given a partitioning of the network into a set of groups S, we define the modularity as Q = (1/2m) Σs∈S Σi,j∈s (Aij − kikj/2m), where m is the number of edges, A the adjacency matrix, and ki the degree of node i 45 Need a null model! (kikj/2m is the expected number of edges between i and j under the null model)
  • 45. Prof. Pier Luca Lanzi Modularity is useful for selecting the number of clusters (plot of Q against the number of clusters)
  • 46. Prof. Pier Luca Lanzi Example
  • 47. Prof. Pier Luca Lanzi Example #1 – Compute betweenness
# First we load the igraph package
library(igraph)
# let's generate two networks and merge them into one graph.
g2 <- barabasi.game(50, p=2, directed=F)
g1 <- watts.strogatz.game(1, size=100, nei=5, p=0.05)
g <- graph.union(g1, g2)
# let's remove multi-edges and loops
g <- simplify(g)
plot(g)
# compute betweenness
ebc <- edge.betweenness.community(g, directed=F)
  • 49. Prof. Pier Luca Lanzi Example #1 – Build Dendrogram and Compute Modularity
mods <- sapply(0:ecount(g), function(i){
  g2 <- delete.edges(g, ebc$removed.edges[seq(length=i)])
  cl <- clusters(g2)$membership
  # compute modularity on the original graph g, not on the induced one g2
  # (erratum of March 13, 2014; thanks to Augustin Luna for detecting this typo)
  modularity(g, cl)
})
# we can now plot all modularities
plot(mods, pch=20)
  • 50. Prof. Pier Luca Lanzi (plot of the modularity values) 51
  • 51. Prof. Pier Luca Lanzi Example #1 – Select the cut
# Now, let's color the nodes according to their membership
g2 <- delete.edges(g, ebc$removed.edges[seq(length=which.max(mods)-1)])
V(g)$color <- clusters(g2)$membership
# Let's choose a layout for the graph
g$layout <- layout.fruchterman.reingold
# plot it
plot(g, vertex.label=NA)
  • 53. Prof. Pier Luca Lanzi Spectral Clustering
  • 54. Prof. Pier Luca Lanzi What Makes a Good Cluster? • Undirected graph G(V,E) • Partitioning task §Divide the vertices into two disjoint groups A and B = V \ A • Questions §How can we define a “good partition” of G? §How can we efficiently identify such a partition? 55
  • 55. Prof. Pier Luca Lanzi What makes a good partition? Maximize the number of within-group connections Minimize the number of between-group connections
  • 56. Prof. Pier Luca Lanzi Graph Cuts • Express partitioning objectives as a function of the “edge cut” of the partition • The cut is defined as the set of edges with only one vertex in a group • In the example, cut(A,B) = 2; more generally, cut(A,B) = Σi∈A, j∈B wij 57
  • 57. Prof. Pier Luca Lanzi Graph Cut Criterion • Partition quality §Minimize weight of connections between groups, i.e., arg minA,B cut(A,B) • Degenerate case: • Problems §Only considers external cluster connections §Does not consider internal cluster connectivity 58 “Optimal cut” Minimum cut
  • 58. Prof. Pier Luca Lanzi Graph Partitioning Criteria: Normalized cut (Conductance) • Connectivity of the group to the rest of the network should be relative to the density of the group: ncut(A,B) = cut(A,B)/vol(A) + cut(A,B)/vol(B) • Where vol(A) is the total weight of the edges that have at least one endpoint in A 59
  • 61. Prof. Pier Luca Lanzi Spectral Graph Partitioning • Let A be the adjacent matrix of the graph G with n nodes §Aij is 1 if there is an edge between i and j, 0 otherwise §x a vector of n components (x1, …, xn) that represents labels/values assigned to each node of G §Ax returns a vector in which each component j is the sum of the labels of the neighbors of node j • Spectral Graph Theory §Analyze the spectrum of G, that is, the eigenvectors xi of the graph corresponding to the eigenvalues Λ of G sorted in increasing order §Λ = { λ1, …, λn} such that λ1≤λ2 ≤… ≤λn 62
  • 62. Prof. Pier Luca Lanzi Example: d-regular Graph • Suppose that all the nodes in G have degree d and G is connected • What are the eigenvalues/eigenvectors of G? Ax=λx §Ax returns the sum of the labels of each node’s neighbors and since each node has exactly d neighbors, x = (1, …, 1) is an eigenvector and d is an eigenvalue • What if G is not connected but still d-regular? §A vector with all ones on A and all zeros on B (or vice versa) is still an eigenvector of A with eigenvalue d 63
  • 63. Prof. Pier Luca Lanzi Example: d-regular Graph (not connected) • What if G has two separate components but is still d-regular? • A vector with all ones on A and all zeros on B (or vice versa) is still an eigenvector of A with eigenvalue d • Underlying intuition: if G is disconnected, λ1 = λ2; if G is nearly disconnected, λ1 ≈ λ2 64
  • 64. Prof. Pier Luca Lanzi Spectral Graph Partitioning • Adjacency matrix A (n×n) §Symmetric §Real and orthogonal eigenvectors • Degree Matrix D §n×n diagonal matrix §Dii = degree of node i • For the example graph on nodes 1–6:
A =
0 1 1 0 1 0
1 0 1 0 0 0
1 1 0 1 0 0
0 0 1 0 1 1
1 0 0 1 0 1
0 0 0 1 1 0
D = diag(3, 2, 3, 3, 3, 2)
  • 65. Prof. Pier Luca Lanzi Graph Laplacian Matrix • Computed as L = D-A §n×n symmetric matrix §x=(1,…,1) is a trivial eigenvector since Lx=0, so λ1=0 • Important properties of L §Eigenvalues are non-negative real numbers §Eigenvectors are real and orthogonal • For the example graph:
L =
 3 -1 -1  0 -1  0
-1  2 -1  0  0  0
-1 -1  3 -1  0  0
 0  0 -1  3 -1 -1
-1  0  0 -1  3 -1
 0  0  0 -1 -1  2
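The two key Laplacian properties can be checked directly in plain Python (an illustration, not code from the slides) on the 6-node example graph, here 0-indexed: every row of L = D − A sums to zero, so the all-ones vector has eigenvalue 0, and xᵀLx equals the sum of (xi − xj)² over the edges for any vector x.

```python
# Adjacency matrix of the 6-node example graph
A = [
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
n = len(A)
deg = [sum(row) for row in A]

# Laplacian L = D - A
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

# Property 1: every row sums to 0, so L(1,...,1) = 0 (trivial eigenvector, eigenvalue 0)
row_sums = [sum(row) for row in L]

# Property 2: x^T L x = sum over edges (i,j) of (x_i - x_j)^2, for any labeling x
x = [0.3, -1.2, 0.5, 2.0, -0.7, 0.1]   # arbitrary labels, chosen for the check
quad = sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))
edge_sum = sum((x[i] - x[j]) ** 2 for i in range(n) for j in range(i + 1, n) if A[i][j])
```

Property 2 is what turns λ2 into the optimization problem on the next slides: minimizing xᵀLx means making adjacent nodes take similar values.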
  • 66. Prof. Pier Luca Lanzi λ2 as optimization problem • For a symmetric matrix M, λ2 = min x: xTw1=0 (xTMx / xTx), where w1 is the eigenvector of the smallest eigenvalue • What is the meaning of xTLx on G? We can show that xTLx = Σ(i,j)∈E (xi − xj)2 • So, since the second eigenvector x is a unit vector (Σi xi2 = 1) orthogonal to the first eigenvector (1, …, 1) (so Σi xi = 0), λ2 = min Σ(i,j)∈E (xi − xj)2 67
  • 67. Prof. Pier Luca Lanzi λ2 as optimization problem • So, since the second eigenvector x is a unit vector orthogonal to (1, …, 1), • λ2 = min x Σ(i,j)∈E (xi − xj)2 subject to Σi xi = 0 and Σi xi2 = 1 68
  • 68. Prof. Pier Luca Lanzi λ2 and its eigenvector x balance the components xi around 0 to minimize Σ(i,j)∈E (xi − xj)2
  • 69. Prof. Pier Luca Lanzi Finding the Optimal Cut • Express the partition (A,B) as a vector y where, §yi = +1 if node i belongs to A §yi = -1 if node i belongs to B • We can minimize the cut of the partition by finding a non-trivial vector y that minimizes f(y) = Σ(i,j)∈E (yi − yj)2 70 Can’t solve exactly! Let’s relax y and allow it to take any real value.
  • 70. Prof. Pier Luca Lanzi Rayleigh Theorem • We know that f(y) = Σ(i,j)∈E (yi − yj)2 = yTLy • The minimum value of f(y) over unit vectors y orthogonal to (1, …, 1) is given by the second smallest eigenvalue λ2 of the Laplacian matrix L • Thus, the optimal solution for y is given by the corresponding eigenvector x, referred to as the Fiedler vector 71
  • 71. Prof. Pier Luca Lanzi Spectral Clustering Algorithms 1. Pre-processing §Construct a matrix representation of the graph 2. Decomposition §Compute eigenvalues and eigenvectors of the matrix §Map each point to a lower-dimensional representation based on one or more eigenvectors 3. Grouping §Assign points to two or more clusters, based on the new representation 72
  • 72. Prof. Pier Luca Lanzi Spectral Partitioning Algorithm • Pre-processing: §Build Laplacian matrix L of the graph • Decomposition: §Find eigenvalues λ and eigenvectors x of the matrix L §Map vertices to corresponding components of the second eigenvector x2 • For the example graph, the slide reports λ = (0.0, 1.0, 3.0, 3.0, 4.0, 5.0) and x2 components (-0.66, -0.35, -0.34, 0.33, 0.62, 0.31) 73
  • 73. Prof. Pier Luca Lanzi Spectral Partitioning Algorithm • Grouping: §Sort components of reduced 1-dimensional vector §Identify clusters by splitting the sorted vector in two • How to choose a splitting point? §Naïve approaches: split at 0 or at the median value §More expensive approaches: attempt to minimize normalized cut in 1-dimension (sweep over ordering of nodes induced by the eigenvector) • Split at 0: Cluster A takes the positive components (0.33, 0.62, 0.31), Cluster B the negative ones (-0.66, -0.35, -0.34) 74
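The whole pipeline (build L, take the second eigenvector, split at 0) can be sketched with NumPy (an illustration, not code from the slides; NumPy is assumed available) on the 6-node example graph, here 0-indexed. The sign of an eigenvector is arbitrary, so which cluster comes out "positive" may differ from the slide, but the grouping is the same:

```python
import numpy as np

# Adjacency matrix of the 6-node example graph
A = np.array([
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))        # degree matrix
L = D - A                          # graph Laplacian

# eigh returns eigenvalues in ascending order for a symmetric matrix
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]               # eigenvector of the second-smallest eigenvalue

# Grouping: split the Fiedler vector components at 0
cluster_a = [i for i in range(6) if fiedler[i] >= 0]
cluster_b = [i for i in range(6) if fiedler[i] < 0]
print(cluster_a, cluster_b)        # the two clusters {0,1,2} and {3,4,5}
```

The smallest eigenvalue is 0 (trivial eigenvector), λ2 = 1 for this graph, and the sign pattern of the Fiedler vector separates nodes {1,2,3} from {4,5,6} (1-indexed), cutting only two edges.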
  • 74. Prof. Pier Luca Lanzi Example: Spectral Partitioning (plot: value of x2 against rank in x2) 75
  • 75. Prof. Pier Luca Lanzi Example: Spectral Partitioning (plot of the components of x2) 76
  • 76. Prof. Pier Luca Lanzi Example: Spectral partitioning (plots of the components of x1 and x3) 77
  • 77. Prof. Pier Luca Lanzi Example using R
  • 78. Prof. Pier Luca Lanzi Example #1
ComputeDegreeMatrix <- function(v) {
  n <- max(dim(v))
  m <- diag(n)
  d <- v %*% rep(1, n)
  for(i in 1:n) {
    m[i,i] <- d[i,1]
  }
  return(m)
}
ComputeLaplacianMatrix <- function(v) {
  D <- ComputeDegreeMatrix(v)
  L <- D - v
  return(L)
}
• 79. Prof. Pier Luca Lanzi Example #1
# Graph: edges 1-2, 1-3, 2-4, 2-6, 3-4, 4-5, 5-6
G = rbind(
  c(0, 1, 1, 0, 0, 0),
  c(1, 0, 0, 1, 0, 1),
  c(1, 0, 0, 1, 0, 0),
  c(0, 1, 1, 0, 1, 0),
  c(0, 0, 0, 1, 0, 1),
  c(0, 1, 0, 0, 1, 0)
)
L = ComputeLaplacianMatrix(G)
E = eigen(L)
second_eigen_value  = (E$values)[max(dim(L)) - 1]
second_eigen_vector = (E$vectors)[, max(dim(L)) - 1]

# second_eigen_vector:
# 5.000000e-01 1.165734e-15 5.000000e-01 4.718448e-16 -5.000000e-01 -5.000000e-01
80
• 80. Prof. Pier Luca Lanzi Example #2
G = rbind(
  c(0, 0, 1, 0, 0, 1, 0, 0),
  c(0, 0, 0, 0, 1, 0, 0, 1),
  c(1, 0, 0, 1, 0, 1, 0, 0),
  c(0, 0, 1, 0, 1, 0, 1, 0),
  c(0, 1, 0, 1, 0, 0, 0, 1),
  c(1, 0, 1, 0, 0, 0, 1, 0),
  c(0, 0, 0, 1, 0, 1, 0, 1),
  c(0, 1, 0, 0, 1, 0, 1, 0)
)
L = ComputeLaplacianMatrix(G)
E = eigen(L)
second_eigen_value  = (E$values)[max(dim(L)) - 1]
second_eigen_vector = (E$vectors)[, max(dim(L)) - 1]

# second_eigen_vector:
# 5.000000e-01 -5.000000e-01 3.535534e-01 -3.885781e-16 -3.535534e-01 3.535534e-01 -4.996004e-16 -3.535534e-01
81
• 81. Prof. Pier Luca Lanzi How Do We Partition a Graph into k Clusters? • Two basic approaches: • Recursive bi-partitioning [Hagen et al., ’92] §Recursively apply bi-partitioning algorithm in a hierarchical divisive manner §Disadvantages: inefficient, unstable • Cluster multiple eigenvectors [Shi-Malik, ’00] §Build a reduced space from multiple eigenvectors §Commonly used in recent papers §A preferable approach… 82
  • 82. Prof. Pier Luca Lanzi Why Use Multiple Eigenvectors? • Approximates the optimal cut [Shi-Malik, ’00] §Can be used to approximate optimal k-way normalized cut • Emphasizes cohesive clusters §Increases the unevenness in the distribution of the data §Associations between similar points are amplified, associations between dissimilar points are attenuated §The data begins to “approximate a clustering” • Well-separated space §Transforms data to a new “embedded space”, consisting of k orthogonal basis vectors • Multiple eigenvectors prevent instability due to information loss 83
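A sketch of the multiple-eigenvector approach: embed the nodes with the first k eigenvectors of L, then cluster in the embedded space. The tiny farthest-point-seeded k-means below is a stand-in for any off-the-shelf k-means, and the three-clique graph is a hypothetical example:

```python
import numpy as np

def spectral_kway(A, k, iters=20):
    """k-way spectral clustering sketch: k-dimensional spectral
    embedding followed by a minimal Lloyd-style k-means."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, :k]                  # embedded space: k basis vectors

    # Farthest-point seeding, then Lloyd iterations
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy graph: three cliques (sizes 3, 4, 5) joined by single bridge edges
n, cliques = 12, [range(0, 3), range(3, 7), range(7, 12)]
A = np.zeros((n, n), dtype=int)
for c in cliques:
    for i in c:
        for j in c:
            if i != j:
                A[i, j] = 1
for i, j in [(2, 3), (6, 7), (11, 0)]:
    A[i, j] = A[j, i] = 1

labels = spectral_kway(A, k=3)
```

Because the eigenvectors are nearly constant on each clique, the embedded points form three tight, well-separated groups, which is exactly the "approximate a clustering" effect described above.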
  • 83. Prof. Pier Luca Lanzi Searching for Small Communities (Trawling)
• 84. Prof. Pier Luca Lanzi Searching for small communities in the Web graph (Trawling) • Trawling § What is the signature of a community in a Web graph? § The underlying intuition is that small communities involve many people talking about the same things § Use this to define “topics”: what do the same people on the left talk about on the right? • More formally § Enumerate complete bipartite subgraphs Ks,t § Ks,t has s nodes on the “left” and t nodes on the “right” § Every node on the left links to all t nodes on the right, forming a complete bipartite subgraph 85 [Kumar et al. ‘99] (figure: dense 2-layer graph K3,4 with layers X and Y)
• 85. Prof. Pier Luca Lanzi Mining Bipartite Ks,t using Frequent Itemsets • Searching for such complete bipartite subgraphs can be viewed as a frequent itemset mining problem • View each node i as the set Si of nodes i points to • Ks,t = a set Y of size t that occurs in s sets Si • Looking for Ks,t is equivalent to setting the frequency threshold to s and looking at layer t (i.e., all frequent itemsets of size t) 86 [Kumar et al. ‘99] (figure: node i points to a, b, c, d, so Si = {a,b,c,d}; s = minimum support (|X| = s), t = itemset size (|Y| = t))
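A brute-force sketch of this reduction on hypothetical link data (the node names and link sets below are invented; a real trawler would use an a-priori-style frequent-itemset algorithm instead of enumerating every candidate set Y):

```python
from itertools import combinations

# Toy directed graph: each left node mapped to the set of right
# nodes it points to (hypothetical data)
out_links = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "c", "d"},
    "u3": {"a", "b", "c"},
    "u4": {"b", "d"},
}

def find_kst(out_links, s, t):
    """Enumerate K_{s,t}: every set Y of t right nodes contained in at
    least s of the left nodes' link sets. Brute force, fine for tiny graphs."""
    right = set().union(*out_links.values())
    results = []
    for Y in combinations(sorted(right), t):
        X = [i for i, Si in out_links.items() if set(Y) <= Si]
        if len(X) >= s:
            results.append((X, Y))
    return results

kst = find_kst(out_links, s=3, t=3)
print(kst)   # → [(['u1', 'u2', 'u3'], ('a', 'b', 'c'))]
```

Here Y = {a,b,c} occurs in the link sets of u1, u2, and u3, so those four-plus-three nodes form a K3,3.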
  • 86. Prof. Pier Luca Lanzi i b c d a Si={a,b,c,d} x y z b c a X Y Find frequent itemsets: s … minimum support t … itemset size We found Ks,t! Ks,t = a set Y of size t that occurs in s sets Si View each node i as a set Si of nodes i points to Say we find a frequent itemset Y={a,b,c} of supp s; so, there are s nodes that link to all of {a,b,c}: x b c a z a b c y b c a
• 87. Prof. Pier Luca Lanzi Example • Itemsets § a = {b,c,d} § b = {d} § c = {b,d,e,f} § d = {e,f} § e = {b,d} § f = {} • Support threshold s=2 §{b,d}: support 3 §{e,f}: support 2 • And we just found 2 bipartite subgraphs: X={a,c,e}, Y={b,d} and X={c,d}, Y={e,f} 88
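The counts in this example can be verified directly by tallying the support of every size-2 itemset over the link sets from the slide (a minimal sketch of the counting step only):

```python
from itertools import combinations
from collections import Counter

# Link sets from the slide: node -> set of nodes it points to
S = {
    "a": {"b", "c", "d"},
    "b": {"d"},
    "c": {"b", "d", "e", "f"},
    "d": {"e", "f"},
    "e": {"b", "d"},
    "f": set(),
}

s, t = 2, 2   # minimum support and itemset size
counts = Counter()
for Si in S.values():
    for Y in combinations(sorted(Si), t):
        counts[Y] += 1

frequent = {Y: c for Y, c in counts.items() if c >= s}
print(frequent)   # → {('b', 'd'): 3, ('e', 'f'): 2}
```

{b,d} is contained in the sets of a, c, and e, and {e,f} in the sets of c and d, matching the two bipartite subgraphs found above.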
  • 88. Prof. Pier Luca Lanzi Run the Python notebooks for this lecture