
Bayesian Optimization Algorithm

Algorithm Outline

The Bayesian optimization algorithm attempts to minimize a scalar objective function f(x) for x in a bounded domain. The function can be deterministic or stochastic, meaning it can return different results when evaluated at the same point x. The components of x can be continuous reals, integers, or categorical, meaning a discrete set of names.

Note

Throughout this discussion, D represents the number of components of x.
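For illustration, one variable of each type described above can be declared with optimizableVariable; the names, bounds, and category lists below are arbitrary examples rather than defaults (a minimal sketch):

    % Continuous real variable on a log-scaled range
    boxConstraint = optimizableVariable('box',[1e-3,1e3],'Transform','log');

    % Integer-valued variable
    numNeighbors = optimizableVariable('k',[1,30],'Type','integer');

    % Categorical variable: a discrete set of names
    distanceName = optimizableVariable('dist',{'euclidean','cityblock','chebychev'},'Type','categorical');

    vars = [boxConstraint, numNeighbors, distanceName];   % here D = 3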

The key elements in the minimization are:

  • A Gaussian process model of f(x).

  • A Bayesian update procedure for modifying the Gaussian process model at each new evaluation of f(x).

  • An acquisition function a(x) (based on the Gaussian process model of f) that you maximize to determine the next point x for evaluation. For details, see Acquisition Function Types and Acquisition Function Maximization.

Algorithm outline:

  • Evaluate yi = f(xi) for NumSeedPoints points xi, taken at random within the variable bounds. NumSeedPoints is a bayesopt setting. If there are evaluation errors, take more random points until there are NumSeedPoints successful evaluations. The probability distribution of each component is either uniform or log-scaled, depending on the Transform value in optimizableVariable.

Then repeat the following steps:

  1. Update the Gaussian process model of f(x) to obtain a posterior distribution over functions Q(f | xi, yi for i = 1,...,t). (Internally, bayesopt uses fitrgp to fit a Gaussian process model to the data.)

  2. Find the new point x that maximizes the acquisition function a(x).

The algorithm stops after reaching any of the following:

  • A fixed number of iterations (default 30).

  • A fixed time (default is no time limit).

  • A stopping criterion that you supply in an output function or plot function.

For the algorithmic differences when computing in parallel, see Parallel Bayesian Optimization.
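As a minimal sketch of the loop above, the following call minimizes a toy stochastic objective. The variable names, bounds, objective, and the NumSeedPoints value are illustrative choices; the iteration and time limits shown are the defaults noted in the list, and the acquisition function is the default named below:

    x1 = optimizableVariable('x1',[-5,5]);
    x2 = optimizableVariable('x2',[-5,5]);

    % Stochastic toy objective; bayesopt passes x as a one-row table
    fun = @(x) (x.x1 - 1).^2 + (x.x2 + 2).^2 + 0.1*randn;

    results = bayesopt(fun,[x1,x2], ...
        'NumSeedPoints',4, ...               % random seed evaluations before the loop
        'MaxObjectiveEvaluations',30, ...    % iteration limit (default 30)
        'MaxTime',Inf, ...                   % no time limit (default)
        'AcquisitionFunctionName','expected-improvement-per-second-plus');

    bestPoint = results.XAtMinObjective;     % observed minimizer
    bestValue = results.MinObjective;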

Gaussian Process Regression for Fitting the Model

The underlying probabilistic model for the objective function f is a Gaussian process prior with added Gaussian noise in the observations. So the prior distribution on f(x) is a Gaussian process with mean μ(x;θ) and covariance kernel function k(x,x′;θ). Here, θ is a vector of kernel parameters. For the particular kernel function bayesopt uses, see Kernel Function.

In a bit more detail, denote a set of points X = {xi} with associated objective function values F = {fi}. The prior's joint distribution of the function values F is multivariate normal, with mean μ(X) and covariance matrix K(X,X), where Kij = k(xi,xj).

Without loss of generality, the prior mean is taken to be 0.

Also, the observations are assumed to have added Gaussian noise with variance σ². So the prior distribution has covariance K(X,X;θ) + σ²I.

Fitting a Gaussian process regression model to observations consists of finding values for the noise variance σ² and kernel parameters θ. This fitting is a computationally intensive process performed by fitrgp.

For details on fitting a Gaussian process to observations, see Gaussian Process Regression.
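As an illustration of this step (not the internal bayesopt code), the following sketch fits a noisy Gaussian process to toy observations and queries its posterior; the data and variable names are invented for the example:

    % Observed evaluations (xi, yi) from the optimization history (toy data)
    X = rand(20,2)*10 - 5;                      % 20 points in a 2-D bounded domain
    y = sum((X - 1).^2,2) + 0.1*randn(20,1);    % noisy objective values

    % fitrgp estimates the kernel parameters theta and the noise variance
    % sigma^2 by maximizing the marginal likelihood of the data
    gpmdl = fitrgp(X,y,'KernelFunction','ardmatern52','BasisFunction','constant');

    % Posterior mean and standard deviation of the model at new points
    Xnew = rand(5,2)*10 - 5;
    [muQ,sigmaQ] = predict(gpmdl,Xnew);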

Kernel Function

The kernel function k(x,x′;θ) can significantly affect the quality of a Gaussian process regression. bayesopt uses the ARD Matérn 5/2 kernel; see Snoek, Larochelle, and Adams [3].
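For reference, the ARD Matérn 5/2 kernel has the standard form k(x,x′) = σf²(1 + √5 r + 5r²/3)exp(−√5 r), where r² = Σm (xm − x′m)²/σm², with a signal standard deviation σf and one length scale σm per component. The helper below is an illustrative sketch of that formula, not the internal implementation:

    function kval = ardMatern52(x,xp,lengthScales,sigmaF)
    % ARD Matern 5/2 kernel: one length scale per component of x
    r    = sqrt(sum(((x - xp)./lengthScales).^2));
    kval = sigmaF^2*(1 + sqrt(5)*r + 5*r^2/3)*exp(-sqrt(5)*r);
    end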

Acquisition Function Types

Six choices of acquisition functions are available for bayesopt. There are three basic types, with expected-improvement also modified by per-second or plus:

  • 'expected-improvement-per-second-plus' (default)

  • 'expected-improvement'

  • 'expected-improvement-plus'

  • 'expected-improvement-per-second'

  • 'lower-confidence-bound'

  • 'probability-of-improvement'

The acquisition functions evaluate the “goodness” of a point x based on the posterior distribution function Q. When there are coupled constraints, including the error constraint, all acquisition functions modify their estimate of “goodness” following a suggestion of Gelbart, Snoek, and Adams [2]: multiply the “goodness” by an estimate of the probability that the constraints are satisfied to arrive at the acquisition function.

Expected Improvement

The 'expected-improvement' family of acquisition functions evaluates the expected amount of improvement in the objective function, ignoring values that cause an increase in the objective. In other words, define

  • x_best as the location of the lowest posterior mean.

  • μ_Q(x_best) as the lowest value of the posterior mean.

Then the expected improvement is

EI(x,Q) = E_Q[max(0, μ_Q(x_best) − f(x))].
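Because the posterior at a point x is normal with mean μ_Q(x) and standard deviation σ_Q(x), this expectation has a standard closed form, sketched below; the helper function name is illustrative and is not part of the bayesopt interface:

    function ei = expectedImprovement(muQ,sigmaQ,muBest)
    % Closed-form expected improvement for minimization, given the posterior
    % mean and standard deviation at x and the lowest posterior mean
    % muBest = mu_Q(x_best)
    z  = (muBest - muQ)./sigmaQ;
    ei = sigmaQ.*(z.*normcdf(z) + normpdf(z));
    ei(sigmaQ == 0) = 0;    % no expected improvement where the model is certain
    end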

Probability of Improvement

The 'probability-of-improvement' acquisition function makes a similar, but simpler, calculation than 'expected-improvement'. In both cases, bayesopt first calculates x_best and μ_Q(x_best). Then for 'probability-of-improvement', bayesopt calculates the probability PI that a new point x leads to a better objective function value, modified by a “margin” parameter m:

PI(x,Q) = P_Q(f(x) < μ_Q(x_best) − m).

bayesopt takes m as the estimated noise standard deviation. bayesopt evaluates this probability as

PI = Φ(ν_Q(x)),

where

ν_Q(x) = (μ_Q(x_best) − m − μ_Q(x)) / σ_Q(x).

Here Φ(·) is the unit normal CDF, and σ_Q is the posterior standard deviation of the Gaussian process at x.
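A corresponding sketch, reusing the posterior quantities from the earlier examples (the function name is illustrative):

    function probImp = probabilityOfImprovement(muQ,sigmaQ,muBest,m)
    % Probability of improvement with margin m (bayesopt uses the estimated
    % noise standard deviation for m)
    nu      = (muBest - m - muQ)./sigmaQ;
    probImp = normcdf(nu);
    end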

Lower Confidence Bound

The 'lower-confidence-bound' acquisition function looks at the curve G two standard deviations below the posterior mean at each point:

G(x) = μ_Q(x) − 2σ_Q(x).

G(x) is the 2σ_Q lower confidence envelope of the objective function model. bayesopt then maximizes the negative of G:

LCB = 2σ_Q(x) − μ_Q(x).
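As a two-line sketch with the same illustrative variables as above:

    G   = muQ - 2*sigmaQ;    % 2*sigma_Q lower confidence envelope
    acq = -G;                % the quantity maximized: 2*sigmaQ - muQ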

Per Second

Sometimes, the time to evaluate the objective function can depend on the region. For example, many support vector machine calculations vary in timing a good deal over certain ranges of points. If so, bayesopt can obtain better improvement per second by using time weighting in its acquisition function. The cost-weighted acquisition functions have the phrase per-second in their names.

These acquisition functions work as follows. During the objective function evaluations, bayesopt maintains another Bayesian model of objective function evaluation time as a function of position x. The expected improvement per second that the acquisition function uses is

EIpS(x) = EI_Q(x) / μ_S(x),

where μ_S(x) is the posterior mean of the timing Gaussian process model.
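A sketch of this weighting, reusing the expectedImprovement helper and the fitted model from the earlier examples (the timing data and names are invented):

    % Timing model: a second GP fitted to the observed evaluation times
    evalTimes = 5 + 3*rand(size(X,1),1);     % seconds per evaluation (toy values)
    timeModel = fitrgp(X,evalTimes,'KernelFunction','ardmatern52');

    muS    = predict(timeModel,Xnew);        % posterior mean evaluation time mu_S(x)
    muBest = min(predict(gpmdl,X));          % crude stand-in for mu_Q(x_best)
    eips   = expectedImprovement(muQ,sigmaQ,muBest)./muS;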

Plus

To escape a local objective function minimum, the acquisition functions with plus in their names modify their behavior when they estimate that they are overexploiting an area. To understand overexploiting, let σ_F(x) be the standard deviation of the posterior objective function at x. Let σ be the posterior standard deviation of the additive noise, so that

σ_Q²(x) = σ_F²(x) + σ².

Define t_σ to be the value of the ExplorationRatio option, a positive number. The bayesopt plus acquisition functions, after each iteration, evaluate whether the next point x satisfies

σ_F(x) < t_σ σ.

If so, the algorithm declares that x is overexploiting. Then the acquisition function modifies its kernel function by multiplying θ by the number of iterations, as suggested by Bull [1]. This modification raises the variance σ_Q for points in between observations. It then generates a new point based on the new fitted kernel function. If the new point x is again overexploiting, the acquisition function multiplies θ by an additional factor of 10 and tries again. It continues in this way up to five times, trying to generate a point x that is not overexploiting. The algorithm accepts the new x as the next point.

ExplorationRatio therefore controls a tradeoff between exploring new points for a better global solution, versus concentrating near points that have already been examined.
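The overexploitation test itself can be sketched as follows; the variable names are illustrative, and the kernel inflation and retry logic are internal to bayesopt:

    % sigmaQ:     posterior standard deviation of the model at the candidate x
    % sigmaNoise: posterior standard deviation of the additive noise
    % tSigma:     value of the ExplorationRatio option
    sigmaF = sqrt(max(sigmaQ.^2 - sigmaNoise^2,0));   % model part of the posterior std
    isOverexploiting = sigmaF < tSigma*sigmaNoise;

    % When true, bayesopt inflates the kernel parameters theta and generates a
    % new candidate, retrying up to five times before accepting the point.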

Acquisition Function Maximization

Internally, bayesopt maximizes an acquisition function using the following general steps:

  1. For algorithms starting with 'expected-improvement' and for 'probability-of-improvement', bayesopt estimates the smallest feasible mean of the posterior distribution μ_Q(x_best) by sampling several thousand points within the variable bounds, taking several of the best (low mean value) feasible points, and improving them using local search, to find the ostensible best feasible point. Feasible means that the point satisfies constraints.

  2. For all algorithms, bayesopt samples several thousand points within the variable bounds, takes several of the best (high acquisition function value) feasible points, and improves them using local search, to find the ostensible best feasible point. The acquisition function value depends on the modeled posterior distribution, not on a sample of the objective function, and so it can be calculated quickly. A generic sketch of this sample-then-refine strategy follows the list.
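This strategy is random sampling followed by local refinement; the sketch below is not the bayesopt implementation, and acquisitionFcn, lb, and ub are placeholder names (fmincon, from Optimization Toolbox, stands in for the local search):

    % lb, ub: row vectors of variable lower and upper bounds
    % acquisitionFcn: accepts one point per row and returns acquisition values
    nSamples   = 5000;
    candidates = lb + rand(nSamples,numel(lb)).*(ub - lb);   % uniform samples in the bounds
    acqVals    = acquisitionFcn(candidates);                 % cheap: uses only the GP posterior

    [~,order] = sort(acqVals,'descend');
    starts    = candidates(order(1:10),:);                   % several of the best points

    best = starts(1,:);
    opts = optimoptions('fmincon','Display','none');
    for k = 1:size(starts,1)
        % Local search on the negative acquisition function within the bounds
        xk = fmincon(@(x) -acquisitionFcn(x),starts(k,:),[],[],[],[],lb,ub,[],opts);
        if acquisitionFcn(xk) > acquisitionFcn(best)
            best = xk;
        end
    end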

References

[1] Bull, A. D. "Convergence Rates of Efficient Global Optimization Algorithms." Journal of Machine Learning Research 12, 2011.

[2] Gelbart, M., J. Snoek, and R. P. Adams. "Bayesian Optimization with Unknown Constraints." Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), 2014.

[3] Snoek, J., H. Larochelle, and R. P. Adams. "Practical Bayesian Optimization of Machine Learning Algorithms." Advances in Neural Information Processing Systems 25, 2012.
