
Comparison of Six Solvers

Function to Optimize

This example shows how to minimize Rastrigin's function with six solvers. Each solver has its own characteristics, which lead to different solutions and run times. The results, examined in Compare Syntax and Solutions, can help you choose an appropriate solver for your own problems.

Rastrigin's function has many local minima, with a global minimum at (0,0). The function is defined as Ras(x):

Ras(x) = 20 + x1^2 + x2^2 - 10(cos 2πx1 + cos 2πx2).
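A minimal MATLAB sketch of this formula, assuming two-variable points supplied as rows of x, is shown below (the shipped rastriginsfcn.m handles the general case and may differ in its details):

ras = @(x) 20 + sum(x.^2,2) - 10*sum(cos(2*pi*x),2); % sketch of Ras(x) above
ras([0 0]) % returns 0, the global minimum value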

The rastriginsfcn.m file, which computes the values of Rastrigin's function, is available when you run this example. This example employs a scaled version of Rastrigin's function with larger basins of attraction. Create a surface plot of the scaled function.

rf2 = @(x)rastriginsfcn(x/10);
rf3 = @(x,y)reshape(rastriginsfcn([x(:)/10,y(:)/10]),size(x));
fsurf(rf3,[-30 30],'ShowContours','on')
title('rastriginsfcn([x/10,y/10])')
xlabel('x')
ylabel('y')

Figure: surface plot with contours of rastriginsfcn([x/10,y/10]).

Usually, you don't know the location of the global minimum of your objective function. To show how the solvers look for a global solution, this example starts all the solvers around the point [20,30], which is far from the global minimum.

This example minimizes rf2 using the default settings of fminunc (an Optimization Toolbox™ solver), patternsearch, and GlobalSearch. The example also uses ga and particleswarm with nondefault options to start with an initial population around the point [20,30]. Because surrogateopt requires finite bounds, the example uses surrogateopt with lower bounds of –70 and upper bounds of 130 in each variable.

Six Solution Methods

fminunc

To solve the optimization problem using the fminunc Optimization Toolbox solver, enter:

rf2 = @(x)rastriginsfcn(x/10); % objective
x0 = [20,30]; % start point away from the minimum
[xf,ff,flf,of] = fminunc(rf2,x0)
Local minimum found.
Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
xf = 1×2
   19.8991   29.8486
ff = 12.9344
flf = 1
of = struct with fields:
       iterations: 3
        funccount: 15
         stepsize: 1.7776e-06
     lssteplength: 1
    firstorderopt: 5.9907e-09
        algorithm: 'quasi-newton'
          message: 'Local minimum found....'
  • xf is the minimizing point.

  • ff is the value of the objective, rf2, at xf.

  • flf is the exit flag. An exit flag of 1 indicates xf is a local minimum.

  • of is the output structure, which describes the fminunc calculations leading to the solution.
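
If you want to watch the quasi-newton steps that lead to this solution, you can request an iterative display. This sketch uses the Display option of optimoptions and is not part of the original example:

rf2 = @(x)rastriginsfcn(x/10); % objective
x0 = [20,30]; % start point away from the minimum
opts = optimoptions('fminunc','Display','iter'); % print one line per iteration
[xf,ff] = fminunc(rf2,x0,opts);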

patternsearch

To solve the optimization problem using the patternsearch Global Optimization Toolbox solver, enter:

rf2 = @(x)rastriginsfcn(x/10); % objective
x0 = [20,30]; % start point away from the minimum
[xp,fp,flp,op] = patternsearch(rf2,x0)
Optimization terminated: mesh size less than options.MeshTolerance.
xp = 1×2
   19.8991   -9.9496
fp = 4.9748
flp = 1
op = struct with fields:
         function: @(x)rastriginsfcn(x/10)
      problemtype: 'unconstrained'
       pollmethod: 'GPSPositiveBasis2N'
    maxconstraint: []
     searchmethod: []
       iterations: 48
        funccount: 174
         meshsize: 9.5367e-07
         rngstate: [1x1 struct]
          message: 'Optimization terminated: mesh size less than options.MeshTolerance.'
  • xp is the minimizing point.

  • fp is the value of the objective, rf2, at xp.

  • flp is the exit flag. An exit flag of 1 indicates xp is a local minimum.

  • op is the output structure, which describes the patternsearch calculations leading to the solution.

ga

To solve the optimization problem using the ga Global Optimization Toolbox solver, enter:

rng default % for reproducibility
rf2 = @(x)rastriginsfcn(x/10); % objective
x0 = [20,30]; % start point away from the minimum
initpop = 10*randn(20,2) + repmat(x0,20,1);
opts = optimoptions('ga','InitialPopulationMatrix',initpop);
  • initpop is a 20-by-2 matrix. Each row of initpop has mean [20,30], and each element is normally distributed with standard deviation 10. The rows of initpop form an initial population matrix for the ga solver.

  • opts is the options object that sets initpop as the initial population.

  • ga uses random numbers, and produces a random result.

  • The next line calls ga, using the options.

[xga,fga,flga,oga] = ga(rf2,2,[],[],[],[],[],[],[],opts)
Optimization terminated: maximum number of generations exceeded.
xga = 1×2
   -0.0042   -0.0024
fga = 4.7054e-05
flga = 0
oga = struct with fields:
      problemtype: 'unconstrained'
         rngstate: [1x1 struct]
      generations: 200
        funccount: 9453
          message: 'Optimization terminated: maximum number of generations exceeded.'
    maxconstraint: []
       hybridflag: []
  • xga is the minimizing point.

  • fga is the value of the objective, rf2, at xga.

  • flga is the exit flag. An exit flag of 0 indicates that ga reaches a function evaluation limit or an iteration limit. In this case, ga reaches an iteration limit.

  • oga is the output structure, which describes the ga calculations leading to the solution.

particleswarm

Like ga, particleswarm is a population-based algorithm. So for a fair comparison of solvers, initialize the particle swarm to the same population as ga.

rng default % for reproducibility
rf2 = @(x)rastriginsfcn(x/10); % objective
opts = optimoptions('particleswarm','InitialSwarmMatrix',initpop);
[xpso,fpso,flgpso,opso] = particleswarm(rf2,2,[],[],opts)
Optimization ended: relative change in the objective value 
over the last options.MaxStallIterations iterations is less than options.FunctionTolerance.
xpso = 1×2
    9.9496    0.0000
fpso = 0.9950
flgpso = 1
opso = struct with fields:
      rngstate: [1x1 struct]
    iterations: 56
     funccount: 1140
       message: 'Optimization ended: relative change in the objective value ...'
    hybridflag: []

surrogateopt

surrogateopt does not require a start point, but does require finite bounds. Set bounds of –70 to 130 in each component. To have the same sort of output as the other solvers, disable the default plot function.

rng default % for reproducibility
lb = [-70,-70];
ub = [130,130];
rf2 = @(x)rastriginsfcn(x/10); % objective
opts = optimoptions('surrogateopt','PlotFcn',[]);
[xsur,fsur,flgsur,osur] = surrogateopt(rf2,lb,ub,opts)
surrogateopt stopped because it exceeded the function evaluation limit set by 
'options.MaxFunctionEvaluations'.
xsur = 1×2
   -1.3383   -0.3022
fsur = 3.5305
flgsur = 0
osur = struct with fields:
        elapsedtime: 8.1641
          funccount: 200
    constrviolation: 0
               ineq: [1x0 double]
           rngstate: [1x1 struct]
            message: 'surrogateopt stopped because it exceeded the function evaluation limit set by ...'
  • xsur is the minimizing point.

  • fsur is the value of the objective, rf2, at xsur.

  • flgsur is the exit flag. An exit flag of 0 indicates that surrogateopt halted because it ran out of function evaluations or time.

  • osur is the output structure, which describes the surrogateopt calculations leading to the solution.

GlobalSearch

To solve the optimization problem using the GlobalSearch solver, enter:

rf2 = @(x)rastriginsfcn(x/10); % objective
x0 = [20,30]; % start point away from the minimum
problem = createOptimProblem('fmincon','objective',rf2,...
    'x0',x0);
gs = GlobalSearch;

problem is an optimization problem structure. problem specifies the fmincon solver, the rf2 objective function, and x0 = [20,30]. For more information on using createOptimProblem, see the createOptimProblem reference page.

Note: You must specify fmincon as the solver for GlobalSearch, even for unconstrained problems.

gs is a default GlobalSearch object. The object contains options for solving the problem. Calling run(gs,problem) runs problem from multiple start points. The start points are random, so the following result is also random.

[xg,fg,flg,og] = run(gs,problem)
GlobalSearch stopped because it analyzed all the trial points.
All 6 local solver runs converged with a positive local solver exit flag.
xg = 1×2
10^-7 ×
   -0.1405   -0.1405
fg = 0
flg = 1
og = struct with fields:
                funccount: 2157
         localsolvertotal: 6
       localsolversuccess: 6
    localsolverincomplete: 0
    localsolvernosolution: 0
                  message: 'GlobalSearch stopped because it analyzed all the trial points....'
  • xg is the minimizing point.

  • fg is the value of the objective, rf2, at xg.

  • flg is the exit flag. An exit flag of 1 indicates all fmincon runs converged properly.

  • og is the output structure, which describes the globalsearch calculations leading to the solution.
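
Because the GlobalSearch start points are random, the result above changes from run to run. If you want a repeatable run, you can seed the random number generator before calling run, just as the earlier sections do; this addition is a suggestion and not part of the original example:

rng default % for reproducibility, as in the earlier sections
[xg,fg,flg,og] = run(gs,problem);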

Compare Syntax and Solutions

One solution is better than another if its objective function value is smaller than the other's. The following table summarizes the results.

sols = [xf;
    xp;
    xga;
    xpso;
    xsur;
    xg];
fvals = [ff;
    fp;
    fga;
    fpso;
    fsur;
    fg];
fevals = [of.funccount;
    op.funccount;
    oga.funccount;
    opso.funccount;
    osur.funccount;
    og.funccount];
stats = table(sols,fvals,fevals);
stats.Properties.RowNames = ["fminunc" "patternsearch" "ga" "particleswarm" "surrogateopt" "GlobalSearch"];
stats.Properties.VariableNames = ["Solution" "Objective" "# Fevals"];
disp(stats)
                              Solution             Objective     # Fevals
                     __________________________    __________    ________
    fminunc               19.899         29.849        12.934        15  
    patternsearch         19.899        -9.9496        4.9748       174  
    ga                -0.0042178     -0.0024347    4.7054e-05      9453  
    particleswarm         9.9496       6.75e-07       0.99496      1140  
    surrogateopt         -1.3383       -0.30217        3.5305       200  
    GlobalSearch     -1.4046e-08    -1.4046e-08             0      2157  

These results are typical:

  • fminunc quickly reaches the local solution within its starting basin, but does not explore outside this basin at all. fminunc has a simple calling syntax.

  • patternsearch takes more function evaluations than fminunc, and searches through several basins, arriving at a better solution than fminunc. The patternsearch calling syntax is the same as that of fminunc.

  • ga takes many more function evaluations than patternsearch. By chance it arrives at a better solution. In this case, ga finds a point near the global optimum. ga is stochastic, so its results change with every run. ga has a simple calling syntax, but there are extra steps to have an initial population near [20,30].

  • particleswarm takes fewer function evaluations than ga, but more than patternsearch. In this case, particleswarm finds a point with lower objective function value than patternsearch, but higher than ga. Because particleswarm is stochastic, its results change with every run. particleswarm has a simple calling syntax, but there are extra steps to have an initial population near [20,30].

  • surrogateopt stops when it reaches a function evaluation limit, which by default is 200 for a two-variable problem (see the sketch after this list for one way to raise this limit). surrogateopt has a simple calling syntax, but requires finite bounds. surrogateopt attempts to find a global solution, but in this case does not succeed. Each function evaluation in surrogateopt takes a longer time than in most other solvers, because surrogateopt performs many auxiliary computations as part of its algorithm.

  • The GlobalSearch run takes the same order of magnitude of function evaluations as ga and particleswarm, searches many basins, and arrives at a good solution. In this case, GlobalSearch finds the global optimum. Setting up GlobalSearch is more involved than setting up the other solvers. As the example shows, before calling GlobalSearch, you must create both a GlobalSearch object (gs in the example) and a problem structure (problem). Then, you call the run method with gs and problem. For more details on how to run GlobalSearch, see the GlobalSearch documentation.
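
For example, to give surrogateopt a larger budget than the 200-evaluation default mentioned above, you can raise its evaluation limit. This sketch uses the MaxFunctionEvaluations option of surrogateopt; the value 500 is an arbitrary illustration and is not part of the original example:

rf2 = @(x)rastriginsfcn(x/10); % objective
lb = [-70,-70]; % finite bounds required by surrogateopt
ub = [130,130];
opts = optimoptions('surrogateopt','MaxFunctionEvaluations',500,'PlotFcn',[]); % 500 is illustrative
[xsur2,fsur2] = surrogateopt(rf2,lb,ub,opts);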
