
Hierarchical Clustering

Introduction to Hierarchical Clustering

Hierarchical clustering groups data over a variety of scales by creating a cluster tree, or dendrogram. The tree is not a single set of clusters, but rather a multilevel hierarchy, where clusters at one level are joined as clusters at the next level. This allows you to decide the level or scale of clustering that is most appropriate for your application. The clusterdata function supports agglomerative clustering and performs all of the necessary steps for you. It incorporates the pdist, linkage, and cluster functions, which you can use separately for more detailed analysis. The dendrogram function plots the cluster tree.

Algorithm Description

To perform agglomerative hierarchical cluster analysis on a data set using Statistics and Machine Learning Toolbox™ functions, follow this procedure:

  1. Find the similarity or dissimilarity between every pair of objects in the data set. In this step, you calculate the distance between objects using the pdist function. The pdist function supports many different ways to compute this measurement. See Similarity Measures for more information.

  2. Group the objects into a binary, hierarchical cluster tree. In this step, you link pairs of objects that are in close proximity using the linkage function. The linkage function uses the distance information generated in step 1 to determine the proximity of objects to each other. As objects are paired into binary clusters, the newly formed clusters are grouped into larger clusters until a hierarchical tree is formed. See Linkages for more information.

  3. Determine where to cut the hierarchical tree into clusters. In this step, you use the cluster function to prune branches off the bottom of the hierarchical tree, and assign all the objects below each cut to a single cluster. This creates a partition of the data. The cluster function can create these clusters by detecting natural groupings in the hierarchical tree or by cutting off the hierarchical tree at an arbitrary point.

The following sections provide more information about each of these steps.

Note

The clusterdata function performs all of the necessary steps for you. You do not need to execute the pdist, linkage, or cluster functions separately.
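For example, a minimal sketch of the one-call workflow, using the sample data set defined later in this section (the maxclust value of 3 is an arbitrary choice for illustration):

% Sketch: clusterdata chains pdist, linkage, and cluster in one call,
% using their default settings unless you specify otherwise.
x = [1 2; 2.5 4.5; 2 2; 4 1.5; 4 2.5];
t = clusterdata(x,"maxclust",3)   % cluster index for each row of x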

Similarity Measures

You use the pdist function to calculate the distance between every pair of objects in a data set. For a data set made up of m objects, there are m*(m-1)/2 pairs in the data set. The result of this computation is commonly known as a distance or dissimilarity matrix.

There are many ways to calculate this distance information. By default, the pdist function calculates the Euclidean distance between objects; however, you can specify one of several other options. See the pdist reference page for more information.

Note

You can optionally normalize the values in the data set before calculating the distance information. In a real world data set, variables can be measured against different scales. For example, one variable can measure intelligence quotient (IQ) test scores and another variable can measure head circumference. These discrepancies can distort the proximity calculations. Using the zscore function, you can convert all the values in the data set to use the same proportional scale. See zscore for more information.
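As a sketch of that normalization step (assuming x is a numeric data matrix, such as the one defined below; xNorm and yNorm are hypothetical names):

% Sketch: standardize each column to zero mean and unit variance so that
% no single variable dominates the distance calculation.
xNorm = zscore(x);      % column-wise (value - mean)/(standard deviation)
yNorm = pdist(xNorm);   % distances computed on the standardized data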

For example, consider a data set, x, made up of five objects where each object is a set of x,y coordinates.

  • Object 1: 1, 2

  • Object 2: 2.5, 4.5

  • Object 3: 2, 2

  • Object 4: 4, 1.5

  • Object 5: 4, 2.5

You can define this data set as a matrix

rng("default") % For reproducibility
x = [1 2; 2.5 4.5; 2 2; 4 1.5; ...
    4 2.5];

and pass it to the pdist function. The pdist function calculates the distance between object 1 and object 2, object 1 and object 3, and so on until the distances between all the pairs have been calculated. The following figure plots these objects in a graph. The Euclidean distance between object 2 and object 3 is shown to illustrate one interpretation of distance.

Diagram showing the Euclidean distance between two objects

Distance Information

The pdist function returns this distance information in a vector, y, where each element contains the distance between a pair of objects.

y = pdist(x)
y =
  Columns 1 through 6
    2.9155    1.0000    3.0414    3.0414    2.5495    3.3541
  Columns 7 through 10
    2.5000    2.0616    2.0616    1.0000
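As a quick check, the number of elements in y matches the m*(m-1)/2 pair count:

% Quick check: five objects yield 5*4/2 = 10 pairwise distances.
m = size(x,1);
numel(y) == m*(m-1)/2   % returns logical 1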

To make it easier to see the relationship between the distance information generated by pdist and the objects in the original data set, you can reformat the distance vector into a matrix using the squareform function. In this matrix, element i,j corresponds to the distance between object i and object j in the original data set. In the following example, element 1,1 represents the distance between object 1 and itself (which is zero). Element 1,2 represents the distance between object 1 and object 2, and so on.

squareform(y)
ans =
         0    2.9155    1.0000    3.0414    3.0414
    2.9155         0    2.5495    3.3541    2.5000
    1.0000    2.5495         0    2.0616    2.0616
    3.0414    3.3541    2.0616         0    1.0000
    3.0414    2.5000    2.0616    1.0000         0
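squareform also converts in the other direction; applying it to the square matrix recovers the distance vector, as in this sketch (d and yBack are hypothetical names):

% Sketch: squareform converts between the two representations.
d = squareform(y);       % vector -> m-by-m symmetric matrix
yBack = squareform(d);   % matrix -> vector; isequal(yBack,y) is true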

Linkages

Once the proximity between objects in the data set has been computed, you can determine how objects in the data set should be grouped into clusters, using the linkage function. The linkage function takes the distance information generated by pdist and links pairs of objects that are close together into binary clusters (clusters made up of two objects). The linkage function then links these newly formed clusters to each other and to other objects to create bigger clusters until all the objects in the original data set are linked together in a hierarchical tree.

For example, given the distance vector y generated by pdist from the sample data set of x- and y-coordinates, the linkage function generates a hierarchical cluster tree, returning the linkage information in a matrix, z.

z = linkage(y)
z =
    4.0000    5.0000    1.0000
    1.0000    3.0000    1.0000
    6.0000    7.0000    2.0616
    2.0000    8.0000    2.5000

In this output, each row identifies a link between objects or clusters. The first two columns identify the objects that have been linked. The third column contains the distance between these objects. For the sample data set of x- and y-coordinates, the linkage function begins by grouping objects 4 and 5, which have the closest proximity (distance value = 1.0000). The linkage function continues by grouping objects 1 and 3, which also have a distance value of 1.0000.

The third row indicates that the linkage function grouped objects 6 and 7. If the original sample data set contained only five objects, what are objects 6 and 7? Object 6 is the newly formed binary cluster created by the grouping of objects 4 and 5. When the linkage function groups two objects into a new cluster, it must assign the cluster a unique index value, starting with the value m + 1, where m is the number of objects in the original data set. (Values 1 through m are already used by the original data set.) Similarly, object 7 is the cluster formed by grouping objects 1 and 3.
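This indexing can be reproduced directly, as in this small sketch (newIds is a hypothetical name):

% Sketch: row r of z defines new cluster m + r. For m = 5 objects,
% the four rows of z create clusters 6 through 9.
m = size(x,1);
newIds = m + (1:size(z,1))   % [6 7 8 9]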

linkage uses distances to determine the order in which it clusters objects. The distance vector y contains the distances between the original objects 1 through 5. But linkage must also be able to determine distances involving clusters that it creates, such as objects 6 and 7. By default, linkage uses a method known as single linkage. However, there are a number of different methods available. See the linkage reference page for more information.
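For example, a sketch of selecting other methods through the second input (these method names are a subset of those the linkage function accepts):

% Sketch: the second input selects how linkage measures the distance
% between clusters when it merges them.
zSingle   = linkage(y);              % default, equivalent to linkage(y,"single")
zComplete = linkage(y,"complete");   % furthest-neighbor distance
zAverage  = linkage(y,"average");    % mean pairwise distance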

As the final cluster, the linkage function grouped object 8, the newly formed cluster made up of objects 6 and 7, with object 2 from the original data set. The following figure graphically illustrates the way linkage groups the objects into a hierarchy of clusters.

Diagram of a hierarchy of clusters

Dendrograms

The hierarchical, binary cluster tree created by the linkage function is most easily understood when viewed graphically. The dendrogram function plots the tree as follows.

dendrogram(z)

Plot of a hierarchical, binary cluster tree

In the figure, the numbers along the horizontal axis represent the indices of the objects in the original data set. The links between objects are represented as upside-down U-shaped lines. The height of the U indicates the distance between the objects. For example, the link representing the cluster containing objects 1 and 3 has a height of 1. The link representing the cluster that groups object 2 together with objects 1, 3, 4, and 5 (which are already clustered as object 8) has a height of 2.5. The height represents the distance linkage computes between objects 2 and 8. For more information about creating a dendrogram diagram, see the dendrogram reference page.

Verify the Cluster Tree

After linking the objects in a data set into a hierarchical cluster tree, you might want to verify that the distances (that is, heights) in the tree reflect the original distances accurately. In addition, you might want to investigate natural divisions that exist among links between objects. Statistics and Machine Learning Toolbox functions are available for both of these tasks, as described in the following sections.

Verify Dissimilarity

In a hierarchical cluster tree, any two objects in the original data set are eventually linked together at some level. The height of the link represents the distance between the two clusters that contain those two objects. This height is known as the cophenetic distance between the two objects. One way to measure how well the cluster tree generated by the linkage function reflects your data is to compare the cophenetic distances with the original distance data generated by the pdist function. If the clustering is valid, the linking of objects in the cluster tree should have a strong correlation with the distances between objects in the distance vector. The cophenet function compares these two sets of values and computes their correlation, returning a value called the cophenetic correlation coefficient. The closer the value of the cophenetic correlation coefficient is to 1, the more accurately the clustering solution reflects your data.

You can use the cophenetic correlation coefficient to compare the results of clustering the same data set using different distance calculation methods or clustering algorithms. For example, you can use the cophenet function to evaluate the clusters created for the sample data set.

c = cophenet(z,y)
c =
    0.8615

Here, z is the matrix output by the linkage function, and y is the distance vector output by the pdist function.

Execute pdist again on the same data set, this time specifying the city block metric. After running the linkage function on this new pdist output using the average linkage method, call cophenet to evaluate the clustering solution.

y = pdist(x,"cityblock");
z = linkage(y,"average");
c = cophenet(z,y)
c =
    0.9047

The cophenetic correlation coefficient shows that using a different distance and linkage method creates a tree that represents the original distances slightly better.
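To scan several methods at once, a sketch like the following loops over a few linkage methods (an illustrative subset) and reports the cophenetic correlation coefficient for each; the variable methods is a hypothetical name:

% Sketch: compare linkage methods on the same Euclidean distance vector.
y = pdist(x);   % default Euclidean distances
methods = ["single" "complete" "average" "ward"];
for k = 1:numel(methods)
    z = linkage(y,methods(k));
    fprintf("%-10s c = %.4f\n",methods(k),cophenet(z,y));
end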

Verify Consistency

One way to determine the natural cluster divisions in a data set is to compare the height of each link in a cluster tree with the heights of neighboring links below it in the tree.

A link that is approximately the same height as the links below it indicates that there are no distinct divisions between the objects joined at this level of the hierarchy. These links are said to exhibit a high level of consistency, because the distance between the objects being joined is approximately the same as the distances between the objects they contain.

On the other hand, a link whose height differs noticeably from the height of the links below it indicates that the objects joined at this level in the cluster tree are much farther apart from each other than their components were when they were joined. This link is said to be inconsistent with the links below it.

In cluster analysis, inconsistent links can indicate the border of a natural division in a data set. The cluster function uses a quantitative measure of inconsistency to determine where to partition your data set into clusters.

The following dendrogram illustrates inconsistent links. Note how the objects in the dendrogram fall into two groups that are connected by links at a much higher level in the tree. These links are inconsistent when compared with the links below them in the hierarchy.

Hierarchical cluster tree displaying the difference between links that show consistency and links that show inconsistency

The relative consistency of each link in a hierarchical cluster tree can be quantified and expressed as the inconsistency coefficient. This value compares the height of a link in a cluster hierarchy with the average height of links below it. Links that join distinct clusters have a high inconsistency coefficient; links that join indistinct clusters have a low inconsistency coefficient.

To generate a listing of the inconsistency coefficient for each link in the cluster tree, use the inconsistent function. By default, the inconsistent function compares each link in the cluster hierarchy with adjacent links that are less than two levels below it in the cluster hierarchy. This is called the depth of the comparison. You can also specify other depths. The objects at the bottom of the cluster tree, called leaf nodes, that have no further objects below them, have an inconsistency coefficient of zero. Clusters that join two leaves also have a zero inconsistency coefficient.
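For example, a sketch of specifying a different depth (assuming z is a linkage matrix; i2 and i3 are hypothetical names):

% Sketch: the optional second input sets the comparison depth.
i2 = inconsistent(z);     % default depth of 2
i3 = inconsistent(z,3);   % include links up to three levels below each link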

For example, you can use the inconsistent function to calculate the inconsistency values for the links created by the linkage function in Linkages.

First, recompute the distance and linkage values using the default settings.

y = pdist(x);
z = linkage(y);

Next, use inconsistent to calculate the inconsistency values.

i = inconsistent(z)
i =
    1.0000         0    1.0000         0
    1.0000         0    1.0000         0
    1.3539    0.6129    3.0000    1.1547
    2.2808    0.3100    2.0000    0.7071

The inconsistent function returns data about the links in an (m-1)-by-4 matrix, whose columns are described in the following table.

Column  Description
1       Mean of the heights of all the links included in the calculation
2       Standard deviation of the heights of all the links included in the calculation
3       Number of links included in the calculation
4       Inconsistency coefficient

In the sample output, the first row represents the link between objects 4 and 5. This cluster is assigned the index 6 by the linkage function. Because both 4 and 5 are leaf nodes, the inconsistency coefficient for the cluster is zero. The second row represents the link between objects 1 and 3, both of which are also leaf nodes. This cluster is assigned the index 7 by the linkage function.

The third row evaluates the link that connects these two clusters, objects 6 and 7. (This new cluster is assigned index 8 in the output.) Column 3 indicates that three links are considered in the calculation: the link itself and the two links directly below it in the hierarchy. Column 1 represents the mean of the heights of these links. The inconsistent function uses the height information output by the linkage function to calculate the mean. Column 2 represents the standard deviation of the heights of these links. The last column contains the inconsistency value for these links, 1.1547. It is the difference between the current link height and the mean, normalized by the standard deviation.

(2.0616 - 1.3539) / 0.6129
ans =
    1.1547

The following figure illustrates the links and heights included in this calculation.

Hierarchical cluster tree showing the three links used to compute the third inconsistency coefficient

Note

In the preceding figure, the lower limit on the y-axis is set to 0 to show the heights of the links. To set the lower limit to 0, select Axes Properties from the Edit menu, click the Y Axis tab, and enter 0 in the field immediately to the right of Y Limits.
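You can also make this adjustment programmatically; a sketch (the upper bound shown is an arbitrary value that fits this tree):

% Sketch: set the y-axis limits after plotting the dendrogram.
dendrogram(z)
ylim([0 2.6])   % lower limit 0; upper bound chosen to accommodate the 2.5 link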

Row 4 in the output matrix describes the link between object 8 and object 2. Column 3 indicates that two links are included in this calculation: the link itself and the link directly below it in the hierarchy. The inconsistency coefficient for this link is 0.7071.

The following figure illustrates the links and heights included in this calculation.

Hierarchical cluster tree showing the two links used to compute the fourth inconsistency coefficient

Create Clusters

After you create the hierarchical tree of binary clusters, you can prune the tree to partition your data into clusters using the cluster function. The cluster function lets you create clusters in two ways, as discussed in the following sections: by finding natural divisions in the data, or by specifying an arbitrary number of clusters.

Find Natural Divisions in Data

The hierarchical cluster tree may naturally divide the data into distinct, well-separated clusters. This can be particularly evident in a dendrogram diagram created from data where groups of objects are densely packed in certain areas and not in others. The inconsistency coefficient of the links in the cluster tree can identify these divisions where the similarities between objects change abruptly. (See Verify the Cluster Tree for more information about the inconsistency coefficient.) You can use this value to determine where the cluster function creates cluster boundaries.

For example, if you use the cluster function to group the sample data set into clusters, specifying an inconsistency coefficient threshold of 1.2 as the value of the cutoff argument, the cluster function groups all the objects in the sample data set into one cluster. In this case, none of the links in the cluster hierarchy had an inconsistency coefficient greater than 1.2.

t = cluster(z,"cutoff",1.2)
t =
     1
     1
     1
     1
     1

The cluster function outputs a vector, t, with one element for each object in the original data set. Each element in this vector contains the number of the cluster into which the corresponding object from the original data set was placed.

If you lower the inconsistency coefficient threshold to 0.8, the cluster function divides the sample data set into three separate clusters.

t = cluster(z,"cutoff",0.8)
t =
     1
     2
     1
     3
     3

This output indicates that objects 1 and 3 are in one cluster, objects 4 and 5 are in another cluster, and object 2 is in its own cluster.
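To see this grouping on the original coordinates, a quick sketch using the gscatter function:

% Sketch: plot the five objects colored by their cluster assignment in t.
gscatter(x(:,1),x(:,2),t)
xlabel("x coordinate")
ylabel("y coordinate")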

When clusters are formed in this way, the cutoff value is applied to the inconsistency coefficient. These clusters may, but do not necessarily, correspond to a horizontal slice across the dendrogram at a certain height. If you want clusters corresponding to a horizontal slice of the dendrogram, you can either use the criterion option to specify that the cutoff should be based on distance rather than inconsistency, or you can specify the number of clusters directly as described in the following section.
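For example, a sketch of the distance-based cutoff: the link heights in this tree are 1, 1, 2.0616, and 2.5, so cutting at a height of 1.5 should yield the same three groups of objects as the inconsistency cutoff of 0.8 above.

% Sketch: cut the dendrogram at a fixed height instead of by inconsistency.
t = cluster(z,"cutoff",1.5,"criterion","distance")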

Specify Arbitrary Clusters

Instead of letting the cluster function create clusters determined by the natural divisions in the data set, you can specify the number of clusters you want created.

For example, you can specify that you want the cluster function to partition the sample data set into two clusters. In this case, the cluster function creates one cluster containing objects 1, 3, 4, and 5 and another cluster containing object 2.

t = cluster(z,"maxclust",2)
t =
     2
     1
     2
     2
     2

To help you visualize how the cluster function determines these clusters, the following figure shows the dendrogram of the hierarchical cluster tree. The horizontal dashed line intersects two lines of the dendrogram, corresponding to setting maxclust to 2. These two lines partition the objects into two clusters: the objects below the left-hand line, namely 1, 3, 4, and 5, belong to one cluster, while the object below the right-hand line, namely 2, belongs to the other cluster.

Hierarchical cluster tree showing a cutoff value for creating two clusters

On the other hand, if you set maxclust to 3, the cluster function groups objects 4 and 5 in one cluster, objects 1 and 3 in a second cluster, and object 2 in a third cluster. The following command illustrates this.

t = cluster(z,"maxclust",3)
t =
     1
     3
     1
     2
     2

This time, the cluster function cuts off the hierarchy at a lower point, corresponding to the horizontal line that intersects three lines of the dendrogram in the following figure.

Hierarchical cluster tree showing a cutoff value for creating three clusters
