
Analyze and Model Data on GPU

This example shows how to improve code performance by executing on a graphical processing unit (GPU). Execution on a GPU can improve performance if:

  • Your code is computationally expensive, where computing time significantly exceeds the time spent transferring data to and from GPU memory.

  • Your workflow uses functions with gpuArray (Parallel Computing Toolbox) support and large array inputs.

When writing code for a GPU, start with code that already performs well on a CPU. Vectorization is usually critical for achieving high performance on a GPU. Convert the code to use functions that support GPU array arguments, and transfer the input data to the GPU. For more information about MATLAB functions with GPU array inputs, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).

Many functions in Statistics and Machine Learning Toolbox™ automatically execute on a GPU when you use GPU array input data. For example, you can create a probability distribution object on a GPU, where the output is a GPU array.

pd = fitdist(gpuArray(x),"Normal")

Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). For the complete list of Statistics and Machine Learning Toolbox functions that accept GPU arrays, see Functions and then, in the left navigation bar, scroll to the Extended Capability section and select GPU Arrays.
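
If your code must also run on machines without a supported device, you can guard GPU-specific code by using the canUseGPU function. A minimal sketch (the array size here is arbitrary):

if canUseGPU
    x = randn(1e6,1,"gpuArray"); % Create the data directly on the GPU
else
    x = randn(1e6,1);            % Fall back to host memory
end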

Examine Properties of GPU

You can query and select your GPU device by using the gpuDevice function. If you have multiple GPUs, you can examine the properties of all GPUs detected in your system by using the gpuDeviceTable function. You can then select a specific GPU for single-GPU execution by using its index (gpuDevice(index)); see the sketch after the following property display.

d = gpuDevice
d = 
  CUDADevice with properties:

                      Name: 'TITAN V'
                     Index: 1
         ComputeCapability: '7.0'
            SupportsDouble: 1
             DriverVersion: 11.2000
            ToolkitVersion: 11.2000
        MaxThreadsPerBlock: 1024
          MaxShmemPerBlock: 49152 (49.15 KB)
        MaxThreadBlockSize: [1024 1024 64]
               MaxGridSize: [2.1475e+09 65535 65535]
                 SIMDWidth: 32
               TotalMemory: 12652838912 (12.65 GB)
           AvailableMemory: 12096045056 (12.10 GB)
       MultiprocessorCount: 80
              ClockRateKHz: 1455000
               ComputeMode: 'Default'
      GPUOverlapsTransfers: 1
    KernelExecutionTimeout: 0
          CanMapHostMemory: 1
           DeviceSupported: 1
           DeviceAvailable: 1
            DeviceSelected: 1
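
On a multi-GPU system, a brief sketch of listing the detected devices and then selecting one by index (the index value 2 here is only an example):

gpuDeviceTable          % Display a table of all GPUs detected in your system
d = gpuDevice(2);       % Select the GPU with index 2 for subsequent computations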

Execute Function on GPU

Explore a data distribution on a GPU using descriptive statistics.

Generate a data set of normally distributed random numbers on a GPU.

dist = randn(6e4,6e3,"gpuArray");

Determine whether dist is a GPU array.

tf = isgpuarray(dist)
tf = logical
   1

Execute a function with a GPU array input argument. For example, calculate the sample skewness for each column in dist. Because dist is a GPU array, the skewness function executes on the GPU and returns the result as a GPU array.

skew = skewness(dist);

Verify that the output skew is a GPU array.

tf = isgpuarray(skew)
tf = logical
   1

Evaluate Speedup of GPU Execution

Evaluate function execution time on the GPU and compare performance with execution on a CPU.

Comparing the time taken to execute code on a CPU and a GPU can help you determine the appropriate execution environment. For example, if you want to compute descriptive statistics from sample data, you must consider both the execution time and the data transfer time when evaluating overall performance. If a function has gpuArray support, then as the number of observations increases, the performance advantage of the GPU over the CPU generally grows.

Measure the function run time in seconds by using the gputimeit (Parallel Computing Toolbox) function. gputimeit is preferable to timeit for functions that use a GPU, because it ensures operation completion and compensates for overhead.

skew = @() skewness(dist);
t = gputimeit(skew)
t = 0.2458

Evaluate the performance difference between the GPU and CPU by independently measuring the CPU execution time. In this case, execution of the code is faster on the GPU than on the CPU.
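
For example, one way to measure the CPU execution time is to gather the data into host memory and time the same computation by using the timeit function. (The exact times depend on your hardware.)

distCPU = gather(dist);            % Copy the data from the GPU to host memory
skewCPU = @() skewness(distCPU);
tCPU = timeit(skewCPU)             % CPU run time in seconds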

The performance of code on a GPU is heavily dependent on the GPU used. For additional information about measuring and improving GPU performance, see Measure and Improve GPU Performance (Parallel Computing Toolbox).

Single Precision on GPU

You can improve the performance of your code by calculating in single precision instead of double precision.

Determine the execution time of the skewness function when the input argument is the dist data set converted to single precision.

dist_single = single(dist);
skew_single = @() skewness(dist_single);
t_single = gputimeit(skew_single)
t_single = 0.0503

In this case, execution of the code with single-precision data is faster than execution with double-precision data.
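
To quantify the improvement, you can compute the ratio of the two measured run times. With the values shown above, single precision is roughly five times faster on this GPU.

speedup = t/t_single               % Approximately 4.9 with the times measured above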

The performance improvement depends on the GPU card and the total number of cores. For more information about using single precision with a GPU, see Improve Performance Using Single Precision Calculations (Parallel Computing Toolbox).

Dimensionality Reduction and Model Fitting on GPU

Implement dimensionality reduction and classification workflows on a GPU.

You can use functions such as pca and fitcensemble together to train a machine learning model.

  • The pca (principal component analysis) function reduces data dimensionality by replacing several correlated variables with a new set of variables that are linear combinations of the original variables.

  • The fitcensemble function fits many classification learners to form an ensemble model that can make better predictions than a single learner.

Both functions are computationally intensive and can be significantly accelerated by using a GPU.

For example, consider the humanactivity data set. The data set contains 24,075 observations of five physical human activities: sitting, standing, walking, running, and dancing. Each observation has 60 features extracted from acceleration data measured by smartphone accelerometer sensors. The data set contains the following variables:

  • actid — Response vector containing the activity IDs as integers: 1, 2, 3, 4, and 5, representing sitting, standing, walking, running, and dancing, respectively

  • actnames — Activity names corresponding to the integer activity IDs

  • feat — Feature matrix of 60 features for 24,075 observations

  • featlabels — Labels of the 60 features

load humanactivity

Use 90% of the observations to train a model that classifies the five types of human activities, and use 10% of the observations to validate the trained model. Specify a 10% holdout for the test set by using cvpartition.

partition = cvpartition(actid,"Holdout",0.10);
trainingInds = training(partition); % Indices for the training set
testInds = test(partition); % Indices for the test set

Transfer the training and test data to the GPU.

XTrain = gpuArray(feat(trainingInds,:));
YTrain = gpuArray(actid(trainingInds));
XTest = gpuArray(feat(testInds,:));
YTest = gpuArray(actid(testInds));

Find the principal components for the training data set XTrain.

[coeff,score,~,~,explained,mu] = pca(XTrain);

Find the number of components required to explain at least 99% of the variability.

idx = find(cumsum(explained)>99,1);

Determine the principal component scores that represent the training data in the principal component space.

XTrainPCA = score(:,1:idx);

Fit an ensemble of learners for classification.

template = templateTree("MaxNumSplits",20,"Reproducible",true);
classificationEnsemble = fitcensemble(XTrainPCA,YTrain, ...
    "Method","AdaBoostM2", ...
    "NumLearningCycles",30, ...
    "Learners",template, ...
    "LearnRate",0.1, ...
    "ClassNames",[1; 2; 3; 4; 5]);

To use the trained model for the test set, you need to transform the test data set by using the PCA obtained from the training data set.

XTestPCA = (XTest-mu)*coeff(:,1:idx);

Evaluate the accuracy of the trained classifier with the test data.

classificationError = loss(classificationEnsemble,XTestPCA,YTest);
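
Because loss returns the misclassification error, the corresponding accuracy follows directly.

accuracy = 1 - classificationError % Fraction of test observations classified correctly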

Transfer to Local Workspace

Transfer data or model properties from a GPU to the local workspace for use with a function that does not support GPU arrays.

Transferring GPU arrays can be costly and is generally not necessary unless you need to use the results with functions that do not support GPU arrays, or use the results in another workspace where a GPU is unavailable.

The gather (Parallel Computing Toolbox) function transfers data from the GPU into the local workspace. Gather the dist data, and then confirm that the data is no longer a GPU array.

dist = gather(dist);
tf = isgpuarray(dist)
tf = logical
   0

The gather function also transfers the properties of a machine learning model from a GPU into the local workspace. Gather the classificationEnsemble model, and then confirm that the model properties that were previously GPU arrays, such as X, are no longer GPU arrays.

classificationEnsemble = gather(classificationEnsemble);
tf = isgpuarray(classificationEnsemble.X)
tf = logical
   0
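
After gathering, the model works with CPU data like any other classification ensemble. For example, a sketch that predicts labels for the transformed test set, assuming you also gather XTestPCA:

XTestPCA = gather(XTestPCA);                                % Bring the test scores into host memory
predictedLabels = predict(classificationEnsemble,XTestPCA); % Predict activity IDs on the CPU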
