
Benchmarking A\b

This example shows how to benchmark solving a linear system on a cluster. The MATLAB® code to solve for x in A*x = b is very simple. Most frequently, one uses matrix left division, also known as mldivide or the backslash operator (\), to calculate x (that is, x = A\b). Benchmarking the performance of matrix left division on a cluster, however, is not as straightforward.
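
For reference, here is a minimal serial version of the operation being benchmarked. The 1000-by-1000 size, the tic/toc timing, and the residual check are illustrative additions and not part of the benchmark itself:

n = 1000;                        % Illustrative size only
A = rand(n);                     % Random square coefficient matrix
b = rand(n, 1);                  % Right-hand side
tic
x = A\b;                         % Matrix left division (mldivide)
toc
relres = norm(A*x - b)/norm(b)   % Sanity check: relative residual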

One of the most challenging aspects of benchmarking is to avoid falling into the trap of looking for a single number that represents the overall performance of the system. Instead, we look at performance curves that can help you identify performance bottlenecks on your cluster, and that may even help you see how to benchmark your own code and draw meaningful conclusions from the results.

The code shown in this example can be found in this function:

function results = paralleldemo_backslash_bench(memoryperworker)

It is very important to choose the appropriate matrix size for the cluster. We can do this by specifying the amount of system memory in GB available to each worker as an input to this example function. The default value is very conservative; you should specify a value that is appropriate for your system.

if nargin == 0
    memoryperworker = 8.00; % In GB
%    warning('pctexample:backslashbench:BackslashBenchUsingDefaultMemory', ...
%            ['Amount of system memory available to each worker is ', ...
%             'not specified.  Using the conservative default value ', ...
%             'of %.2f gigabytes per worker.'], memoryperworker);
end
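
For example, if each worker on your cluster has access to roughly 16 GB of memory (a value chosen here purely for illustration), you would call the example function as follows:

results = paralleldemo_backslash_bench(16);   % 16 GB of system memory per worker (illustrative value)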

Avoiding Overhead

To get an accurate measure of our capability to solve linear systems, we need to remove any possible source of overhead. This includes getting the current parallel pool and temporarily disabling the deadlock detection capabilities.

p = gcp;
if isempty(p)
    error('pctexample:backslashbench:poolClosed', ...
        ['This example requires a parallel pool. ' ...
         'Manually start a pool using the parpool command or set ' ...
         'your parallel preferences to automatically start a pool.']);
end
poolsize = p.NumWorkers;
pctRunOnAll 'mpiSettings(''DeadlockDetection'', ''off'');'
Starting parallel pool (parpool) using the 'bigMJS' profile ... connected to 12 workers.

The Benchmarking Function

We want to benchmark matrix left division (\), and not the cost of entering an spmd block, the time it takes to create a matrix, or other parameters. We therefore separate the data generation from the solving of the linear system, and measure only the time it takes to do the latter. We generate the input data using the 2-D block-cyclic codistributor, as that is the most effective distribution scheme for solving a linear system. Our benchmarking then consists of measuring the time it takes all the workers to complete solving the linear system A*x = b. Again, we try to remove any possible source of overhead.

function [A, b] = getdata(n)
    fprintf('Creating a matrix of size %d-by-%d.\n', n, n);
    spmd
        % Use the codistributor that usually gives the best performance
        % for solving linear systems.
        codistr = codistributor2dbc(codistributor2dbc.defaultLabGrid, ...
                                    codistributor2dbc.defaultBlockSize, ...
                                    'col');
        A = codistributed.rand(n, n, codistr);
        b = codistributed.rand(n, 1, codistr);
    end
end
function time = timesolve(A, b)
    spmd
        tic;
        x = A\b; %#ok We don't need the value of x.
        time = gop(@max, toc); % Time for all workers to complete.
    end
    time = time{1};
end

Choosing Problem Size

Just like with a great number of other parallel algorithms, the performance of solving a linear system in parallel depends greatly on the matrix size. Our a priori expectations are therefore that the computations will be:

  • Somewhat inefficient for small matrices

  • Quite efficient for large matrices

  • Inefficient if the matrices are too large to fit into system memory and the operating system starts swapping memory to disk

It is therefore important to time the computations for a number of different matrix sizes to gain an understanding of what "small," "large," and "too large" mean in this context. Based on previous experiments, we expect:

  • "too small" matrices to be of size 1000-by-1000

  • "large" matrices to occupy slightly less than 45% of the memory available to each worker

  • "too large" matrices occupy 50% or more of system memory available to each worker

These are heuristics, and the precise values may change between releases. It is therefore important that we use matrix sizes that span this entire range and verify the expected performance.

Notice that by changing the problem size according to the number of workers, we employ weak scaling. Other benchmarking examples in the toolbox also employ weak scaling. As those examples benchmark task parallel computations, their weak scaling consists of making the number of iterations proportional to the number of workers. This example, however, is benchmarking data parallel computations, so we relate the upper size limit of the matrices to the number of workers.

% Declare the matrix sizes ranging from 1000-by-1000 up to 45% of system
% memory available to each worker.
maxmemusageperworker = 0.45*memoryperworker*1024^3; % In bytes.
maxmatsize = round(sqrt(maxmemusageperworker*poolsize/8));
matsize = round(linspace(1000, maxmatsize, 5));
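
As a quick sanity check of this sizing formula (not part of the original code), consider the run shown below, which uses the default 8 GB per worker on a pool of 12 workers. Since each double-precision element occupies 8 bytes, the largest matrix works out to 76146-by-76146, matching the sizes reported in the output further down:

memoryperworker = 8;                                  % GB available per worker (default)
poolsize = 12;                                        % Workers in the pool used below
maxmemusageperworker = 0.45*memoryperworker*1024^3;   % ~3.87e9 bytes per worker
maxmatsize = round(sqrt(maxmemusageperworker*poolsize/8))   % 8 bytes per double => 76146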

Comparing Performance: Gigaflops

We use the number of floating point operations per second as our measure of performance because that allows us to compare the performance of the algorithm for different matrix sizes and different numbers of workers. If we are successful in testing the performance of matrix left division for a sufficiently wide range of matrix sizes, we expect the performance graph to look similar to the following:

By generating graphs such as these, we can answer questions such as:

  • Are the smallest matrices so small that we get poor performance?

  • Do we see a performance decrease when the matrix is so large that it occupies 45% of total system memory?

  • What is the best performance we can possibly achieve for a given number of workers?

  • For which matrix sizes do 16 workers perform better than 8 workers?

  • Is the system memory limiting the peak performance?

Given a matrix size, the benchmarking function creates the matrix A and the right-hand side b once, and then solves A\b multiple times to get an accurate measure of the time it takes. We use the floating point operations count of the HPC Challenge, so that for an n-by-n matrix, we count the floating point operations as 2/3*n^3 + 3/2*n^2.

function gflops = benchfcn(n)
    numreps = 3;
    [A, b] = getdata(n);
    time = inf;
    % We solve the linear system a few times and calculate the gigaflops
    % based on the best time.
    for itr = 1:numreps
        tcurr = timesolve(A, b);
        if itr == 1
            fprintf('Execution times: %f', tcurr);
        else
            fprintf(', %f', tcurr);
        end
        time = min(tcurr, time);
    end
    fprintf('\n');
    flop = 2/3*n^3 + 3/2*n^2;
    gflops = flop/time/1e9;
end

Executing the Benchmarks

Having done all the setup, it is straightforward to execute the benchmarks. However, the computations may take a long time to complete, so we print some intermediate status information as we complete the benchmarking for each matrix size.

fprintf(['Starting benchmarks with %d different matrix sizes ranging\n' ...
         'from %d-by-%d to %d-by-%d.\n'], ...
        length(matsize), matsize(1), matsize(1), matsize(end), ...
        matsize(end));
gflops = zeros(size(matsize));
for i = 1:length(matsize)
    gflops(i) = benchfcn(matsize(i));
    fprintf('Gigaflops: %f\n\n', gflops(i));
end
results.matsize = matsize;
results.gflops = gflops;
Starting benchmarks with 5 different matrix sizes ranging
from 1000-by-1000 to 76146-by-76146.
Creating a matrix of size 1000-by-1000.
Analyzing and transferring files to the workers ...done.
Execution times: 1.038931, 0.592114, 0.575135
Gigaflops: 1.161756
Creating a matrix of size 19787-by-19787.
Execution times: 119.402579, 118.087116, 119.323904
Gigaflops: 43.741681
Creating a matrix of size 38573-by-38573.
Execution times: 552.256063, 549.088060, 555.753578
Gigaflops: 69.685485
Creating a matrix of size 57360-by-57360.
Execution times: 3580.232186, 3726.588242, 3113.261810
Gigaflops: 40.414533
Creating a matrix of size 76146-by-76146.
Execution times: 9261.720799, 9099.777287, 7968.750495
Gigaflops: 36.937936

Plotting the Performance

We can now plot the results, and compare to the expected graph shown above.

fig = figure;
ax = axes('Parent', fig);
plot(ax, matsize/1000, gflops);
lines = ax.Children;
lines.Marker = '+';
ylabel(ax, 'Gigaflops')
xlabel(ax, 'Matrix size in thousands')
titlestr = sprintf(['Solving A\\b for different matrix sizes on ' ...
                    '%d workers'], poolsize);
title(ax, titlestr, 'Interpreter', 'none');

If the benchmark results are not as good as you might expect, here are some things to consider:

  • The underlying implementation is using ScaLAPACK, which has a proven reputation of high performance. It is therefore very unlikely that the algorithm or the library is causing inefficiencies, but rather the way in which it is used, as described in the items below.

  • If the matrices are too small or too large for your cluster, the resulting performance will be poor.

  • If the network communications are slow, performance will be severely impacted.

  • If the CPUs and the network communications are both very fast, but the amount of memory is limited, it is possible you are not able to benchmark with sufficiently large matrices to fully utilize the available CPUs and network bandwidth.

  • For ultimate performance, it is important to use a version of MPI that is tailored for your networking setup, and have the workers running in such a manner that as much of the communication happens through shared memory as possible. It is, however, beyond the scope of this example to explain how to identify and solve those types of problems.

Comparing Different Numbers of Workers

We now look at how to compare different numbers of workers by viewing data obtained by running this example using different numbers of workers. This data is obtained on a different cluster from the one above.

Other examples have explained that when benchmarking parallel algorithms for different numbers of workers, one usually employs weak scaling. That is, as we increase the number of workers, we increase the problem size proportionally. In the case of matrix left division, we have to take additional care because the performance of the division depends greatly on the size of the matrix. The following code creates a graph of the performance in gigaflops for all of the matrix sizes that we tested with and all the different numbers of workers, as that gives us the most detailed picture of the performance characteristics of matrix left division on this particular cluster.

s = load('pctdemo_data_backslash.mat', 'workers4', 'workers8', ...
         'workers16', 'workers32', 'workers64');
fig = figure;
ax = axes('Parent', fig);
plot(ax, s.workers4.matsize./1000, s.workers4.gflops, ...
     s.workers8.matsize./1000, s.workers8.gflops, ...
     s.workers16.matsize./1000, s.workers16.gflops, ...
     s.workers32.matsize./1000, s.workers32.gflops, ...
     s.workers64.matsize./1000, s.workers64.gflops);
lines = ax.Children;
set(lines, {'Marker'}, {'+'; 'o'; 'v'; '.'; '*'});
ylabel(ax, 'Gigaflops')
xlabel(ax, 'Matrix size in thousands')
title(ax, ...
      'Comparison data for solving A\\b on different numbers of workers');
legend('4 workers', '8 workers', '16 workers', '32 workers',  ...
       '64 workers', 'Location', 'NorthWest');

The first thing we notice when looking at the graph above is that 64 workers allow us to solve much larger linear systems of equations than is possible with only 4 workers. Additionally, we can see that even if one could work with a matrix of size 60,000-by-60,000 on 4 workers, we would get a performance of approximately only 10 gigaflops. Thus, even if the 4 workers had sufficient memory to solve such a large problem, 64 workers would nevertheless greatly outperform them.

Looking at the slope of the curve for 4 workers, we can see that there is only a modest performance increase between the three largest matrix sizes. Comparing this with the earlier graph of the expected performance of A\b for different matrix sizes, we conclude that we are quite close to achieving peak performance for 4 workers with the matrix size of 7772-by-7772.

Looking at the curves for 8 and 16 workers, we can see that the performance drops for the largest matrix size, indicating that we are near or have already exhausted available system memory. However, we see that the performance increase between the second and third largest matrix sizes is very modest, indicating stability of some sort. We therefore conjecture that when working with 8 or 16 workers, we would most likely not see a significant increase in the gigaflops if we increased the system memory and tested with larger matrix sizes.

Looking at the curves for 32 and 64 workers, we see that there is a significant performance increase between the second and third largest matrix sizes. For 64 workers, there is also a significant performance increase between the two largest matrix sizes. We therefore conjecture that we run out of system memory for 32 and 64 workers before we have reached peak performance. If that is correct, then adding more memory to the computers would both allow us to solve larger problems and perform better at those larger matrix sizes.

Speedup

The traditional way of measuring speedup obtained with linear algebra algorithms such as backslash is to compare the peak performance. We therefore calculate the maximum number of gigaflops achieved for each number of workers.

peakperf = [max(s.workers4.gflops), max(s.workers8.gflops), ...
            max(s.workers16.gflops), max(s.workers32.gflops), ...
            max(s.workers64.gflops)];
disp('Peak performance in gigaflops for 4-64 workers:')
disp(peakperf)
disp('Speedup when going from 4 workers to 8, 16, 32 and 64 workers:')
disp(peakperf(2:end)/peakperf(1))
Peak performance in gigaflops for 4-64 workers:
   10.9319   23.2508   40.7157   73.5109  147.0693
Speedup when going from 4 workers to 8, 16, 32 and 64 workers:
    2.1269    3.7245    6.7244   13.4532

We therefore conclude that we get a speedup of approximately 13.5 when increasing the number of workers 16-fold, going from 4 workers to 64. As we noted above, the performance graph indicates that we might be able to increase the performance on 64 workers (and thereby improve the speedup even further) by increasing the system memory on the cluster computers.
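
Speedup alone does not tell us how close we are to ideal scaling, so it can also be useful to look at parallel efficiency relative to the 4-worker baseline. This is not part of the original benchmark, but it follows directly from the peak performance values computed above:

% Parallel efficiency relative to the 4-worker baseline (illustrative check).
% With ideal scaling, doubling the workers would double the peak gigaflops.
workers = [4 8 16 32 64];
speedup = peakperf/peakperf(1);               % 1.00  2.13  3.72  6.72  13.45
efficiency = speedup./(workers/workers(1))    % Approx. 1.00  1.06  0.93  0.84  0.84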

The Cluster Used

This data was generated using 16 dual-processor, octa-core computers, each with 64 GB of memory, connected with Gigabit Ethernet. When using 4 workers, they were all on a single computer. We used 2 computers for 8 workers, 4 computers for 16 workers, etc.

Re-enabling the Deadlock Detection

Now that we have concluded our benchmarking, we can safely re-enable the deadlock detection in the current parallel pool.

pctRunOnAll 'mpiSettings(''DeadlockDetection'', ''on'');'
end
ans = 
  struct with fields:
    matsize: [1000 19787 38573 57360 76146]
     gflops: [1.1618 43.7417 69.6855 40.4145 36.9379]