Choose Cluster Analysis Method

This topic provides a brief overview of the available clustering methods in Statistics and Machine Learning Toolbox™.

Clustering Methods

Cluster analysis, also called segmentation analysis or taxonomy analysis, is a common unsupervised learning method. Unsupervised learning is used to draw inferences from data sets consisting of input data without labeled responses. For example, you can use cluster analysis for exploratory data analysis to find hidden patterns or groupings in unlabeled data.

Cluster analysis creates groups, or clusters, of data. Objects that belong to the same cluster are similar to one another and distinct from objects that belong to different clusters. To quantify "similar" and "distinct," you can use a dissimilarity measure (or distance metric) that is specific to the domain of your application and your data set. Also, depending on your application, you might consider scaling (or standardizing) the variables in your data to give them equal importance during clustering.
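As a minimal sketch of the scaling step, assuming a numeric data matrix X with one observation per row (the variable names here are illustrative):

```matlab
% Assumed numeric data matrix X (rows = observations, columns = variables).
% Standardize each variable to zero mean and unit variance so that no
% single variable dominates the distance computations during clustering.
Xs = normalize(X);   % z-scores each column by default
% Equivalently: Xs = zscore(X);
```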

Statistics and Machine Learning Toolbox provides functionality for these clustering methods:

Hierarchical Clustering

Hierarchical clustering groups data over a variety of scales by creating a cluster tree, or dendrogram. The tree is not a single set of clusters, but rather a multilevel hierarchy, where clusters at one level combine to form clusters at the next level. This multilevel hierarchy allows you to choose the level, or scale, of clustering that is most appropriate for your application. Hierarchical clustering assigns every point in your data to a cluster.

Use clusterdata to perform hierarchical clustering on input data. clusterdata incorporates the pdist, linkage, and cluster functions, which you can use separately for more detailed analysis. The dendrogram function plots the cluster tree. For more information, see Introduction to Hierarchical Clustering.
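For illustration, here is a minimal sketch of both workflows; the made-up data and parameter choices are assumptions, not part of this topic:

```matlab
rng default
X = [randn(20,2); randn(20,2) + 4];   % made-up data: two separated groups

% One-step hierarchical clustering into at most two clusters
T = clusterdata(X,'Maxclust',2);

% Equivalent step-by-step workflow for more detailed analysis
D = pdist(X);                  % pairwise distances between observations
Z = linkage(D,'average');      % build the cluster tree
dendrogram(Z)                  % plot the multilevel hierarchy
T2 = cluster(Z,'Maxclust',2);  % cut the tree into two clusters
```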

k-Means and k-Medoids Clustering

k-Means clustering and k-medoids clustering partition data into k mutually exclusive clusters. These clustering methods require that you specify the number of clusters k. Both k-means and k-medoids clustering assign every point in your data to a cluster; however, unlike hierarchical clustering, these methods operate on actual observations (rather than dissimilarity measures) and create a single level of clusters. Therefore, k-means or k-medoids clustering is often more suitable than hierarchical clustering for large amounts of data.

Use kmeans and kmedoids to implement k-means clustering and k-medoids clustering, respectively. For more information, see k-Means Clustering and k-Medoids Clustering.
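A minimal sketch on made-up data (the data and the choice of k = 2 are assumptions for illustration):

```matlab
rng default
X = [randn(50,2); randn(50,2) + 3];   % made-up data: two groups

% k-means: idx contains cluster indices, C the cluster centroids
[idx,C] = kmeans(X,2);

% k-medoids: cluster centers are actual observations (the medoids)
[idx2,medoids] = kmedoids(X,2);
```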

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN is a density-based algorithm that identifies arbitrarily shaped clusters and outliers (noise) in data. During clustering, DBSCAN identifies points that do not belong to any cluster, which makes this method useful for density-based outlier detection. Unlike k-means and k-medoids clustering, DBSCAN does not require prior knowledge of the number of clusters.

Use dbscan to perform clustering on an input data matrix or on pairwise distances between observations. For more information, see Introduction to DBSCAN.
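A minimal sketch on made-up data; the neighborhood radius (epsilon) and minimum-neighbor count (minpts) below are illustrative values that depend on your data:

```matlab
rng default
X = [randn(50,2); randn(50,2) + 5; 10*rand(5,2) - 2];  % two groups plus scattered noise

epsilon = 1.0;   % neighborhood radius (problem-dependent)
minpts  = 5;     % minimum neighbors for a core point (problem-dependent)
idx = dbscan(X,epsilon,minpts);

% dbscan labels noise points with -1, which supports outlier detection
outliers = X(idx == -1,:);
```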

Gaussian Mixture Model

A Gaussian mixture model (GMM) forms clusters as a mixture of multivariate normal density components. For a given observation, the GMM assigns posterior probabilities to each component density (or cluster). The posterior probabilities indicate that the observation has some probability of belonging to each cluster. A GMM can perform hard clustering by selecting the component that maximizes the posterior probability as the assigned cluster for the observation. You can also use a GMM to perform soft, or fuzzy, clustering by assigning the observation to multiple clusters based on the scores or posterior probabilities of the observation for the clusters. A GMM can be a more appropriate method than k-means clustering when clusters have different sizes and different correlation structures within them.

Use fitgmdist to fit a gmdistribution object to your data. You can also use gmdistribution to create a GMM object by specifying the distribution parameters. When you have a fitted GMM, you can cluster query data by using the cluster function. For more information, see Cluster Using Gaussian Mixture Model.
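A minimal sketch of hard and soft clustering with a fitted GMM, using made-up two-component data:

```matlab
rng default
X = [mvnrnd([0 0],eye(2),100); mvnrnd([3 3],[1 0.5; 0.5 1],100)];  % made-up data

gm = fitgmdist(X,2);   % fit a two-component gmdistribution object

idx = cluster(gm,X);   % hard clustering: one component per observation
P = posterior(gm,X);   % soft clustering: posterior probability of each component
```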

k-Nearest Neighbor Search and Radius Search

k-Nearest neighbor search finds the k closest points in your data to a query point or set of query points. In contrast, radius search finds all points in your data that are within a specified distance from a query point or set of query points. The results of these methods depend on the distance metric that you specify.

Use the knnsearch function to find k-nearest neighbors or the rangesearch function to find all neighbors within a specified distance of your input data. You can also create a searcher object using a training data set, and pass the object and query data sets to the object functions (knnsearch and rangesearch). For more information, see Classification Using Nearest Neighbors.
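A minimal sketch of both search types, using made-up training data and a single query point:

```matlab
rng default
X = randn(100,2);   % made-up training data
Y = [0.5 0.5];      % query point

[idx,d] = knnsearch(X,Y,'K',5);   % 5 nearest neighbors and their distances
[idxR,dR] = rangesearch(X,Y,1);   % all neighbors within a radius of 1

% Reusable searcher object for repeated queries against the same data
Mdl = createns(X);                % Kd-tree searcher by default for this data
idx2 = knnsearch(Mdl,Y,'K',5);
```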

Spectral Clustering

Spectral clustering is a graph-based algorithm for finding k arbitrarily shaped clusters in data. The technique involves representing the data in a low dimension. In the low dimension, clusters in the data are more widely separated, enabling you to use algorithms such as k-means or k-medoids clustering. This low dimension is based on the eigenvectors of a Laplacian matrix. A Laplacian matrix is one way of representing a similarity graph that models the local neighborhood relationships between data points as an undirected graph.

Use spectralcluster to perform spectral clustering on an input data matrix or on a similarity matrix of a similarity graph. spectralcluster requires that you specify the number of clusters. However, the algorithm for spectral clustering also provides a way to estimate the number of clusters in your data. For more information, see Partition Data Using Spectral Clustering.
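A minimal sketch on made-up ring-shaped data, where the clusters are not spheroidal (k = 2 is an illustrative choice):

```matlab
rng default
theta = 2*pi*rand(100,1);
inner = 0.3*[cos(theta) sin(theta)] + 0.05*randn(100,2);  % made-up inner ring
outer =     [cos(theta) sin(theta)] + 0.05*randn(100,2);  % made-up outer ring
X = [inner; outer];

idx = spectralcluster(X,2);   % find k = 2 arbitrarily shaped clusters
```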

Comparison of Clustering Methods

This table compares the features of the available clustering methods in Statistics and Machine Learning Toolbox.

| Method | Basis of Algorithm | Input to Algorithm | Requires Specified Number of Clusters | Cluster Shapes Identified | Useful for Outlier Detection |
| --- | --- | --- | --- | --- | --- |
| Hierarchical clustering | Distance between objects | Pairwise distances between observations | No | Arbitrarily shaped clusters, depending on the specified linkage algorithm | No |
| k-Means clustering and k-medoids clustering | Distance between objects and centroids | Actual observations | Yes | Spheroidal clusters with equal diagonal covariance | No |
| Density-based spatial clustering of applications with noise (DBSCAN) | Density of regions in the data | Actual observations or pairwise distances between observations | No | Arbitrarily shaped clusters | Yes |
| Gaussian mixture models | Mixture of Gaussian distributions | Actual observations | Yes | Spheroidal clusters with different covariance structures | Yes |
| Nearest neighbors | Distance between objects | Actual observations | No | Arbitrarily shaped clusters | Yes, depending on the specified number of neighbors |
| Spectral clustering | Graph representing connections between data points | Actual observations or similarity matrix | Yes, but the algorithm also provides a way to estimate the number of clusters | Arbitrarily shaped clusters | No |
