
fit a gaussian process regression (gpr) model

description

gprmdl = fitrgp(tbl,responsevarname) returns a gaussian process regression (gpr) model trained using the sample data in tbl, where responsevarname is the name of the response variable in tbl.

gprmdl = fitrgp(tbl,formula) returns a gaussian process regression (gpr) model, trained using the sample data in tbl, for the predictor variables and response variables identified by formula.

gprmdl = fitrgp(tbl,y) returns a gpr model for the predictors in table tbl and continuous response vector y.

gprmdl = fitrgp(x,y) returns a gpr model for predictors x and continuous response vector y.

gprmdl = fitrgp(___,name,value) returns a gpr model for any of the input arguments in the previous syntaxes, with additional options specified by one or more name,value pair arguments.

for example, you can specify the fitting method, the prediction method, the covariance function, or the active set selection method. you can also train a cross-validated model.

gprmdl is a regressiongp object. for object functions and properties of this object, see regressiongp.

if you train a cross-validated model, then gprmdl is a regressionpartitionedgp object. for further analysis on the cross-validated object, use the object functions of the object.
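
for instance, a minimal sketch (assuming a numeric predictor matrix x and a continuous response vector y) that trains a 10-fold cross-validated gpr model and queries it with its object functions:

cvgprmdl = fitrgp(x,y,'crossval','on'); % 10-fold cross-validated gpr model
l = kfoldLoss(cvgprmdl);                % average loss over the held-out folds
ypred = kfoldPredict(cvgprmdl);         % out-of-fold predictions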

examples

this example uses the abalone data [1], [2], from the uci machine learning repository [3]. download the data and save it in your current folder with the name abalone.data.

store the data into a table. display the first seven rows.

tbl = readtable('abalone.data','filetype','text',...
     'readvariablenames',false);
tbl.Properties.VariableNames = {'sex','length','diameter','height',...
     'wweight','sweight','vweight','shweight','noshellrings'};
tbl(1:7,:)
ans = 
    sex    length    diameter    height    wweight    sweight    vweight    shweight    noshellrings
    ___    ______    ________    ______    _______    _______    _______    ________    ____________
    'm'    0.455     0.365       0.095      0.514     0.2245      0.101      0.15       15          
    'm'     0.35     0.265        0.09     0.2255     0.0995     0.0485      0.07        7          
    'f'     0.53      0.42       0.135      0.677     0.2565     0.1415      0.21        9          
    'm'     0.44     0.365       0.125      0.516     0.2155      0.114     0.155       10          
    'i'     0.33     0.255        0.08      0.205     0.0895     0.0395     0.055        7          
    'i'    0.425       0.3       0.095     0.3515      0.141     0.0775      0.12        8          
    'f'     0.53     0.415        0.15     0.7775      0.237     0.1415      0.33       20

the dataset has 4177 observations. the goal is to predict the age of abalone from eight physical measurements. the last variable, the number of shell rings, shows the age of the abalone. the first predictor is a categorical variable. the last variable in the table is the response variable.

fit a gpr model using the subset of regressors method for parameter estimation and fully independent conditional method for prediction. standardize the predictors.

gprmdl = fitrgp(tbl,'noshellrings','kernelfunction','ardsquaredexponential',...
      'fitmethod','sr','predictmethod','fic','standardize',1)
gprmdl = 
  regressiongp
       predictornames: {1x8 cell}
         responsename: 'var9'
    responsetransform: 'none'
      numobservations: 4177
       kernelfunction: 'ardsquaredexponential'
    kernelinformation: [1x1 struct]
        basisfunction: 'constant'
                 beta: 10.9148
                sigma: 2.0243
    predictorlocation: [10x1 double]
       predictorscale: [10x1 double]
                alpha: [1000x1 double]
     activesetvectors: [1000x10 double]
        predictmethod: 'fic'
        activesetsize: 1000
            fitmethod: 'sr'
      activesetmethod: 'random'
    isactivesetvector: [4177x1 logical]
        loglikelihood: -9.0013e+03
     activesethistory: [1x1 struct]
       bcdinformation: []

predict the responses using the trained model.

ypred = resubPredict(gprmdl);

plot the true response and the predicted responses.

figure();
plot(tbl.noshellrings,'r.');
hold on
plot(ypred,'b');
xlabel('x');
ylabel('y');
legend({'data','predictions'},'location','best');
axis([0 4300 0 30]);
hold off;

compute the regression loss on the training data (resubstitution loss) for the trained model.

l = resubLoss(gprmdl)
l =
    4.0064

generate sample data.

rng(0,'twister'); % for reproducibility
n = 1000;
x = linspace(-10,10,n)';
y = 1 + x*5e-2 + sin(x)./x + 0.2*randn(n,1);

fit a gpr model using a linear basis function and the exact fitting method to estimate the parameters. also use the exact prediction method.

gprmdl = fitrgp(x,y,'basis','linear',...
      'fitmethod','exact','predictmethod','exact');

predict the response corresponding to the rows of x (resubstitution predictions) using the trained model.

ypred = resubPredict(gprmdl);

plot the true response with the predicted values.

plot(x,y,'b.');
hold on;
plot(x,ypred,'r','linewidth',1.5);
xlabel('x');
ylabel('y');
legend('data','gpr predictions');
hold off

figure: the data (markers) and the gpr predictions (line), plotted with x and y axis labels.

load the sample data.

load('gprdata2.mat')

the data has one predictor variable and continuous response. this is simulated data.

fit a gpr model using the squared exponential kernel function with default kernel parameters.

gprmdl1 = fitrgp(x,y,'kernelfunction','squaredexponential');

now, fit a second model, where you specify the initial values for the kernel parameters.

sigma0 = 0.2;
kparams0 = [3.5, 6.2];
gprmdl2 = fitrgp(x,y,'kernelfunction','squaredexponential',...
     'kernelparameters',kparams0,'sigma',sigma0);

compute the resubstitution predictions from both models.

ypred1 = resubPredict(gprmdl1);
ypred2 = resubPredict(gprmdl2);

plot the response predictions from both models and the responses in training data.

figure();
plot(x,y,'r.');
hold on
plot(x,ypred1,'b');
plot(x,ypred2,'g');
xlabel('x');
ylabel('y');
legend({'data','default kernel parameters',...
'kparams0 = [3.5,6.2], sigma0 = 0.2'},...
'location','best');
title('impact of initial kernel parameter values');
hold off

figure: 'impact of initial kernel parameter values', showing the data (markers) along with the fit for the default kernel parameters and the fit for kparams0 = [3.5,6.2], sigma0 = 0.2.

the marginal log likelihood that fitrgp maximizes to estimate gpr parameters has multiple local solutions; the solution that it converges to depends on the initial point. each local solution corresponds to a particular interpretation of the data. in this example, the solution with the default initial kernel parameters corresponds to a low frequency signal with high noise whereas the second solution with custom initial kernel parameters corresponds to a high frequency signal with low noise.

load the sample data.

load('gprdata.mat')

there are six continuous predictor variables. there are 500 observations in the training data set and 100 observations in the test data set. this is simulated data.

fit a gpr model using the squared exponential kernel function with a separate length scale for each predictor. this covariance function is defined as:

$k(x_i,x_j\mid\theta)=\sigma_f^2\exp\!\left[-\frac{1}{2}\sum_{m=1}^{d}\frac{(x_{im}-x_{jm})^2}{\sigma_m^2}\right],$

where $\sigma_m$ represents the length scale for predictor $m$, $m = 1, 2, \ldots, d$, and $\sigma_f$ is the signal standard deviation. the unconstrained parametrization $\theta$ is

$\theta_m=\log\sigma_m, \ \text{for } m=1,2,\ldots,d; \qquad \theta_{d+1}=\log\sigma_f.$

initialize length scales of the kernel function at 10 and signal and noise standard deviations at the standard deviation of the response.

sigma0 = std(ytrain);
sigmaf0 = sigma0;
d = size(xtrain,2);
sigmam0 = 10*ones(d,1);

fit the gpr model using the initial kernel parameter values. standardize the predictors in the training data. use the exact fitting and prediction methods.

gprmdl = fitrgp(xtrain,ytrain,'basis','constant','fitmethod','exact',...
'predictmethod','exact','kernelfunction','ardsquaredexponential',...
'kernelparameters',[sigmam0;sigmaf0],'sigma',sigma0,'standardize',1);

compute the regression loss on the test data.

l = loss(gprmdl,xtest,ytest)
l = 0.6919

access the kernel information.

gprmdl.KernelInformation
ans = struct with fields:
                    name: 'ardsquaredexponential'
        kernelparameters: [7x1 double]
    kernelparameternames: {7x1 cell}

display the kernel parameter names.

gprmdl.KernelInformation.KernelParameterNames
ans = 7x1 cell
    {'lengthscale1'}
    {'lengthscale2'}
    {'lengthscale3'}
    {'lengthscale4'}
    {'lengthscale5'}
    {'lengthscale6'}
    {'sigmaf'      }

display the kernel parameters.

sigmam = gprmdl.KernelInformation.KernelParameters(1:end-1,1)
sigmam = 6×1
   1.0e+04 *
    0.0004
    0.0007
    0.0004
    4.7665
    0.1018
    0.0056
sigmaf = gprmdl.KernelInformation.KernelParameters(end)
sigmaf = 28.1720
sigma  = gprmdl.Sigma
sigma = 0.8162

plot the log of learned length scales.

figure()
plot((1:d)',log(sigmam),'ro-');
xlabel('length scale number');
ylabel('log of length scale');

figure: log of length scale plotted against length scale number.

the logs of the length scales for the 4th and 5th predictor variables are high relative to the others. these predictor variables do not seem to be as influential on the response as the other predictor variables.

fit the gpr model without using the 4th and 5th variables as the predictor variables.

x = [xtrain(:,1:3) xtrain(:,6)];
sigma0 = std(ytrain);
sigmaf0 = sigma0;
d = size(x,2);
sigmam0 = 10*ones(d,1);
gprmdl = fitrgp(x,ytrain,'basis','constant','fitmethod','exact',...
'predictmethod','exact','kernelfunction','ardsquaredexponential',...
'kernelparameters',[sigmam0;sigmaf0],'sigma',sigma0,'standardize',1);

compute the regression error on the test data.

xtest = [xtest(:,1:3) xtest(:,6)];
l = loss(gprmdl,xtest,ytest)
l = 0.6928

the loss is similar to the one when all variables are used as predictor variables.

compute the predicted response for the test data.

 ypred = predict(gprmdl,xtest);

plot the original response along with the fitted values.

figure;
plot(ytest,'r');
hold on;
plot(ypred,'b');
legend('true response','gpr predicted values','location','best');
hold off

figure: the true response and the gpr predicted values, plotted as lines.

this example shows how to optimize hyperparameters automatically using fitrgp. the example uses the gprdata2 data that ships with your software.

load the data.

load('gprdata2.mat')

the data has one predictor variable and continuous response. this is simulated data.

fit a gpr model using the squared exponential kernel function with default kernel parameters.

gprmdl1 = fitrgp(x,y,'kernelfunction','squaredexponential');

find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization.

for reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function.

rng default
gprmdl2 = fitrgp(x,y,'kernelfunction','squaredexponential',...
    'optimizehyperparameters','auto','hyperparameteroptimizationoptions',...
    struct('acquisitionfunctionname','expected-improvement-plus'));
|======================================================================================|
| iter | eval   | objective:  | objective   | bestsofar   | bestsofar   |        sigma |
|      | result | log(1+loss) | runtime     | (observed)  | (estim.)    |              |
|======================================================================================|
|    1 | best   |     0.29417 |      3.0183 |     0.29417 |     0.29417 |    0.0015045 |
|    2 | best   |    0.037898 |      1.8402 |    0.037898 |    0.060792 |      0.14147 |
|    3 | accept |      1.5693 |      1.3938 |    0.037898 |    0.040633 |       25.279 |
|    4 | accept |     0.29417 |      4.5525 |    0.037898 |    0.037984 |    0.0001091 |
|    5 | accept |     0.29393 |      2.7292 |    0.037898 |    0.038029 |     0.029932 |
|    6 | accept |     0.13152 |      1.6443 |    0.037898 |    0.038127 |      0.37127 |
|    7 | best   |    0.037785 |      1.7693 |    0.037785 |    0.037728 |      0.18116 |
|    8 | accept |     0.03783 |      1.7648 |    0.037785 |    0.036524 |      0.16251 |
|    9 | accept |    0.037833 |      1.6799 |    0.037785 |    0.036854 |      0.16159 |
|   10 | accept |    0.037835 |       1.662 |    0.037785 |    0.037052 |      0.16072 |
|   11 | accept |     0.29417 |      2.5872 |    0.037785 |     0.03705 |   0.00038214 |
|   12 | accept |     0.42256 |      1.2553 |    0.037785 |     0.03696 |       3.2067 |
|   13 | accept |     0.03786 |       1.944 |    0.037785 |    0.037087 |      0.15245 |
|   14 | accept |     0.29417 |      2.7661 |    0.037785 |    0.037043 |    0.0063584 |
|   15 | accept |     0.42302 |      2.6525 |    0.037785 |     0.03725 |       1.2221 |
|   16 | accept |    0.039486 |      1.9644 |    0.037785 |    0.037672 |      0.10069 |
|   17 | accept |    0.038591 |      3.0923 |    0.037785 |    0.037687 |      0.12077 |
|   18 | accept |    0.038513 |      2.9921 |    0.037785 |    0.037696 |       0.1227 |
|   19 | best   |    0.037757 |      2.0477 |    0.037757 |    0.037572 |      0.19621 |
|   20 | accept |    0.037787 |      2.0042 |    0.037757 |    0.037601 |      0.18068 |
|======================================================================================|
| iter | eval   | objective:  | objective   | bestsofar   | bestsofar   |        sigma |
|      | result | log(1+loss) | runtime     | (observed)  | (estim.)    |              |
|======================================================================================|
|   21 | accept |     0.44917 |      1.1509 |    0.037757 |     0.03766 |       8.7818 |
|   22 | accept |    0.040201 |      1.5921 |    0.037757 |    0.037601 |     0.075414 |
|   23 | accept |    0.040142 |      1.5503 |    0.037757 |    0.037607 |     0.087198 |
|   24 | accept |     0.29417 |      2.5034 |    0.037757 |     0.03758 |    0.0031018 |
|   25 | accept |     0.29417 |      3.1518 |    0.037757 |    0.037555 |   0.00019545 |
|   26 | accept |     0.29417 |      2.9768 |    0.037757 |    0.037582 |     0.013608 |
|   27 | accept |     0.29417 |      3.3804 |    0.037757 |    0.037556 |   0.00076147 |
|   28 | accept |     0.42162 |      1.6438 |    0.037757 |    0.037854 |       0.6791 |
|   29 | best   |    0.037704 |       2.528 |    0.037704 |    0.037908 |       0.2367 |
|   30 | accept |    0.037725 |      1.9273 |    0.037704 |    0.037881 |      0.21743 |

figure: 'min objective vs. number of function evaluations', showing the min observed objective and the estimated min objective.

figure: 'objective function model', showing the estimated objective function value versus sigma, with observed points, model mean, model error bars, noise error bars, next point, and model minimum feasible.

__________________________________________________________
optimization completed.
maxobjectiveevaluations of 30 reached.
total function evaluations: 30
total elapsed time: 92.2108 seconds
total objective function evaluation time: 67.7649
best observed feasible point:
    sigma 
    ______
    0.2367
observed objective function value = 0.037704
estimated objective function value = 0.038223
function evaluation time = 2.528
best estimated feasible point (according to models):
     sigma 
    _______
    0.16159
estimated objective function value = 0.037881
estimated function evaluation time = 1.9918

compare the pre- and post-optimization fits.

ypred1 = resubPredict(gprmdl1);
ypred2 = resubPredict(gprmdl2);
figure();
plot(x,y,'r.');
hold on
plot(x,ypred1,'b');
plot(x,ypred2,'k','linewidth',2);
xlabel('x');
ylabel('y');
legend({'data','initial fit','optimized fit'},'location','best');
title('impact of optimization');
hold off

figure: 'impact of optimization', showing the data (markers) along with the initial fit and the optimized fit.

this example uses the abalone data [1], [2], from the uci machine learning repository [3]. download the data and save it in your current folder with the name abalone.data.

store the data into a table. display the first seven rows.

tbl = readtable('abalone.data','filetype','text','readvariablenames',false);
tbl.Properties.VariableNames = {'sex','length','diameter','height','wweight','sweight','vweight','shweight','noshellrings'};
tbl(1:7,:)
ans = 
    sex    length    diameter    height    wweight    sweight    vweight    shweight    noshellrings
    ___    ______    ________    ______    _______    _______    _______    ________    ____________
    'm'    0.455     0.365       0.095      0.514     0.2245      0.101      0.15       15          
    'm'     0.35     0.265        0.09     0.2255     0.0995     0.0485      0.07        7          
    'f'     0.53      0.42       0.135      0.677     0.2565     0.1415      0.21        9          
    'm'     0.44     0.365       0.125      0.516     0.2155      0.114     0.155       10          
    'i'     0.33     0.255        0.08      0.205     0.0895     0.0395     0.055        7          
    'i'    0.425       0.3       0.095     0.3515      0.141     0.0775      0.12        8          
    'f'     0.53     0.415        0.15     0.7775      0.237     0.1415      0.33       20

the dataset has 4177 observations. the goal is to predict the age of abalone from eight physical measurements. the last variable, the number of shell rings, shows the age of the abalone. the first predictor is a categorical variable. the last variable in the table is the response variable.

train a cross-validated gpr model using 25% of the data for validation.

rng('default') % for reproducibility
cvgprmdl = fitrgp(tbl,'noshellrings','standardize',1,'holdout',0.25);

compute the average loss on folds using models trained on out-of-fold observations.

kfoldLoss(cvgprmdl)
ans =
   4.6409

predict the responses for out-of-fold data.

ypred = kfoldPredict(cvgprmdl);

plot the true responses used for testing and the predictions.

figure();
plot(ypred(cvgprmdl.Partition.test));
hold on;
y = table2array(tbl(:,end));
plot(y(cvgprmdl.Partition.test),'r.');
axis([0 1050 0 30]);
xlabel('x')
ylabel('y')
hold off;

generate the sample data.

rng(0,'twister'); % for reproducibility
n = 1000;
x = linspace(-10,10,n)';
y = 1 + x*5e-2 + sin(x)./x + 0.2*randn(n,1);

define the squared exponential kernel function as a custom kernel function.

you can compute the squared exponential kernel function as

$k(x_i,x_j\mid\theta)=\sigma_f^2\exp\!\left(-\frac{(x_i-x_j)^\top(x_i-x_j)}{2\sigma_l^2}\right),$

where $\sigma_f$ is the signal standard deviation and $\sigma_l$ is the length scale. both $\sigma_f$ and $\sigma_l$ must be greater than zero. this condition can be enforced by the unconstrained parametrization $\sigma_l=\exp(\theta(1))$ and $\sigma_f=\exp(\theta(2))$, for some unconstrained parametrization vector $\theta$.

hence, you can define the squared exponential kernel function as a custom kernel function as follows:

kfcn = @(xn,xm,theta) (exp(theta(2))^2)*exp(-(pdist2(xn,xm).^2)/(2*exp(theta(1))^2));

here, pdist2(xn,xm).^2 computes the matrix of squared distances between the rows of xn and xm.

fit a gpr model using the custom kernel function, kfcn. specify the initial values of the kernel parameters (because you use a custom kernel function, you must provide initial values for the unconstrained parametrization vector, theta).

theta0 = [1.5,0.2];
gprmdl = fitrgp(x,y,'kernelfunction',kfcn,'kernelparameters',theta0);

fitrgp uses analytical derivatives to estimate parameters when using a built-in kernel function, whereas when using a custom kernel function it uses numerical derivatives.

compute the resubstitution loss for this model.

l = resubLoss(gprmdl)
l = 0.0391

fit the gpr model using the built-in squared exponential kernel function option. specify the initial values of the kernel parameters (because you use a built-in kernel function and specify initial parameter values, you must provide the initial values for the signal standard deviation and length scale(s) directly).

sigmal0 = exp(1.5);
sigmaf0 = exp(0.2);
gprmdl2 = fitrgp(x,y,'kernelfunction','squaredexponential','kernelparameters',[sigmal0,sigmaf0]);

compute the resubstitution loss for this model.

l2 = resubLoss(gprmdl2)
l2 = 0.0391

the two loss values are the same, as expected.

train a gpr model on generated data with many predictors. specify the initial step size for the lbfgs optimizer.

set the seed and type of the random number generator for reproducibility of the results.

rng(0,'twister'); % for reproducibility 

generate sample data with 300 observations and 3000 predictors, where the response variable depends on the 4th, 7th, and 13th predictors.

n = 300;
p = 3000;
x = rand(n,p);
y = cos(x(:,7)) + sin(x(:,4).*x(:,13)) + 0.1*randn(n,1);

set initial values for the kernel parameters.

sigmal0 = sqrt(p)*ones(p,1); % length scale for predictors
sigmaf0 = 1; % signal standard deviation

set initial noise standard deviation to 1.

sigman0 = 1;

specify 1e-2 as the termination tolerance for the relative gradient norm.

opts = statset('fitrgp');
opts.TolFun = 1e-2;

fit a gpr model using the initial kernel parameter values, initial noise standard deviation, and an automatic relevance determination (ard) squared exponential kernel function.

specify the initial step size as 1 for determining the initial hessian approximation for an lbfgs optimizer.

gpr = fitrgp(x,y,'kernelfunction','ardsquaredexponential','verbose',1, ...
    'optimizer','lbfgs','optimizeroptions',opts, ...
    'kernelparameters',[sigmal0;sigmaf0],'sigma',sigman0,'initialstepsize',1);
o parameter estimation: fitmethod = exact, optimizer = lbfgs
 o solver = lbfgs, hessianhistorysize = 15, linesearchmethod = weakwolfe
|====================================================================================================|
|   iter   |   fun value   |  norm grad  |  norm step  |  curv  |    gamma    |    alpha    | accept |
|====================================================================================================|
|        0 |  3.004966e+02 |   2.569e+02 |   0.000e+00 |        |   3.893e-03 |   0.000e+00 |   yes  |
|        1 |  9.525779e+01 |   1.281e+02 |   1.003e+00 |    ok  |   6.913e-03 |   1.000e+00 |   yes  |
|        2 |  3.972026e+01 |   1.647e+01 |   7.639e-01 |    ok  |   4.718e-03 |   5.000e-01 |   yes  |
|        3 |  3.893873e+01 |   1.073e+01 |   1.057e-01 |    ok  |   3.243e-03 |   1.000e+00 |   yes  |
|        4 |  3.859904e+01 |   5.659e+00 |   3.282e-02 |    ok  |   3.346e-03 |   1.000e+00 |   yes  |
|        5 |  3.748912e+01 |   1.030e+01 |   1.395e-01 |    ok  |   1.460e-03 |   1.000e+00 |   yes  |
|        6 |  2.028104e+01 |   1.380e+02 |   2.010e+00 |    ok  |   2.326e-03 |   1.000e+00 |   yes  |
|        7 |  2.001849e+01 |   1.510e+01 |   9.685e-01 |    ok  |   2.344e-03 |   1.000e+00 |   yes  |
|        8 | -7.706109e+00 |   8.340e+01 |   1.125e+00 |    ok  |   5.771e-04 |   1.000e+00 |   yes  |
|        9 | -1.786074e+01 |   2.323e+02 |   2.647e+00 |    ok  |   4.217e-03 |   1.250e-01 |   yes  |
|       10 | -4.058422e+01 |   1.972e+02 |   6.796e-01 |    ok  |   7.035e-03 |   1.000e+00 |   yes  |
|       11 | -7.850209e+01 |   4.432e+01 |   8.335e-01 |    ok  |   3.099e-03 |   1.000e+00 |   yes  |
|       12 | -1.312162e+02 |   3.334e+01 |   1.277e+00 |    ok  |   5.432e-02 |   1.000e+00 |   yes  |
|       13 | -2.005064e+02 |   9.519e+01 |   2.828e+00 |    ok  |   5.292e-03 |   1.000e+00 |   yes  |
|       14 | -2.070150e+02 |   1.898e+01 |   1.641e+00 |    ok  |   6.817e-03 |   1.000e+00 |   yes  |
|       15 | -2.108086e+02 |   3.793e+01 |   7.685e-01 |    ok  |   3.479e-03 |   1.000e+00 |   yes  |
|       16 | -2.122920e+02 |   7.057e+00 |   1.591e-01 |    ok  |   2.055e-03 |   1.000e+00 |   yes  |
|       17 | -2.125610e+02 |   4.337e+00 |   4.818e-02 |    ok  |   1.974e-03 |   1.000e+00 |   yes  |
|       18 | -2.130162e+02 |   1.178e+01 |   8.891e-02 |    ok  |   2.856e-03 |   1.000e+00 |   yes  |
|       19 | -2.139378e+02 |   1.933e+01 |   2.371e-01 |    ok  |   1.029e-02 |   1.000e+00 |   yes  |
|====================================================================================================|
|   iter   |   fun value   |  norm grad  |  norm step  |  curv  |    gamma    |    alpha    | accept |
|====================================================================================================|
|       20 | -2.151111e+02 |   1.550e+01 |   3.015e-01 |    ok  |   2.765e-02 |   1.000e+00 |   yes  |
|       21 | -2.173046e+02 |   5.856e+00 |   6.537e-01 |    ok  |   1.414e-02 |   1.000e+00 |   yes  |
|       22 | -2.201781e+02 |   8.918e+00 |   8.484e-01 |    ok  |   6.381e-03 |   1.000e+00 |   yes  |
|       23 | -2.288858e+02 |   4.846e+01 |   2.311e+00 |    ok  |   2.661e-03 |   1.000e+00 |   yes  |
|       24 | -2.392171e+02 |   1.190e+02 |   6.283e+00 |    ok  |   8.113e-03 |   1.000e+00 |   yes  |
|       25 | -2.511145e+02 |   1.008e+02 |   1.198e+00 |    ok  |   1.605e-02 |   1.000e+00 |   yes  |
|       26 | -2.742547e+02 |   2.207e+01 |   1.231e+00 |    ok  |   3.191e-03 |   1.000e+00 |   yes  |
|       27 | -2.849931e+02 |   5.067e+01 |   3.660e+00 |    ok  |   5.184e-03 |   1.000e+00 |   yes  |
|       28 | -2.899797e+02 |   2.068e+01 |   1.162e+00 |    ok  |   6.270e-03 |   1.000e+00 |   yes  |
|       29 | -2.916723e+02 |   1.816e+01 |   3.213e-01 |    ok  |   1.415e-02 |   1.000e+00 |   yes  |
|       30 | -2.947674e+02 |   6.965e+00 |   1.126e+00 |    ok  |   6.339e-03 |   1.000e+00 |   yes  |
|       31 | -2.962491e+02 |   1.349e+01 |   2.352e-01 |    ok  |   8.999e-03 |   1.000e+00 |   yes  |
|       32 | -3.004921e+02 |   1.586e+01 |   9.880e-01 |    ok  |   3.940e-02 |   1.000e+00 |   yes  |
|       33 | -3.118906e+02 |   1.889e+01 |   3.318e+00 |    ok  |   1.213e-01 |   1.000e+00 |   yes  |
|       34 | -3.189215e+02 |   7.086e+01 |   3.070e+00 |    ok  |   8.095e-03 |   1.000e+00 |   yes  |
|       35 | -3.245557e+02 |   4.366e+00 |   1.397e+00 |    ok  |   2.718e-03 |   1.000e+00 |   yes  |
|       36 | -3.254613e+02 |   3.751e+00 |   6.546e-01 |    ok  |   1.004e-02 |   1.000e+00 |   yes  |
|       37 | -3.262823e+02 |   4.011e+00 |   2.026e-01 |    ok  |   2.441e-02 |   1.000e+00 |   yes  |
|       38 | -3.325606e+02 |   1.773e+01 |   2.427e+00 |    ok  |   5.234e-02 |   1.000e+00 |   yes  |
|       39 | -3.350374e+02 |   1.201e+01 |   1.603e+00 |    ok  |   2.674e-02 |   1.000e+00 |   yes  |
|====================================================================================================|
|   iter   |   fun value   |  norm grad  |  norm step  |  curv  |    gamma    |    alpha    | accept |
|====================================================================================================|
|       40 | -3.379112e+02 |   5.280e+00 |   1.393e+00 |    ok  |   1.177e-02 |   1.000e+00 |   yes  |
|       41 | -3.389136e+02 |   3.061e+00 |   7.121e-01 |    ok  |   2.935e-02 |   1.000e+00 |   yes  |
|       42 | -3.401070e+02 |   4.094e+00 |   6.224e-01 |    ok  |   3.399e-02 |   1.000e+00 |   yes  |
|       43 | -3.436291e+02 |   8.833e+00 |   1.707e+00 |    ok  |   5.231e-02 |   1.000e+00 |   yes  |
|       44 | -3.456295e+02 |   5.891e+00 |   1.424e+00 |    ok  |   3.772e-02 |   1.000e+00 |   yes  |
|       45 | -3.460069e+02 |   1.126e+01 |   2.580e+00 |    ok  |   3.907e-02 |   1.000e+00 |   yes  |
|       46 | -3.481756e+02 |   1.546e+00 |   8.142e-01 |    ok  |   1.565e-02 |   1.000e+00 |   yes  |
         infinity norm of the final gradient = 1.546e+00
              two norm of the final step     = 8.142e-01, tolx   = 1.000e-12
relative infinity norm of the final gradient = 6.016e-03, tolfun = 1.000e-02
exit: local minimum found.
o alpha estimation: predictmethod = exact

because the gpr model uses an ard kernel with many predictors, using an lbfgs approximation to the hessian is more memory efficient than storing the full hessian matrix. also, using the initial step size to determine the initial hessian approximation may help speed up optimization.

find the predictor weights by taking the exponential of the negative learned length scales. normalize the weights.

sigmal = gpr.KernelInformation.KernelParameters(1:end-1); % learned length scales
weights = exp(-sigmal); % predictor weights
weights = weights/sum(weights); % normalized predictor weights

plot the normalized predictor weights.

figure;
semilogx(weights,'ro');
xlabel('predictor index');
ylabel('predictor weight');

the trained gpr model assigns the largest weights to the 4th, 7th, and 13th predictors. the irrelevant predictors have weights close to zero.

input arguments

sample data used to train the model, specified as a table. each row of tbl corresponds to one observation, and each column corresponds to one variable. tbl contains the predictor variables, and optionally it can also contain one column for the response variable. multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

  • if tbl contains the response variable, and you want to use all the remaining variables as predictors, then specify the response variable using responsevarname.

  • if tbl contains the response variable, and you want to use only a subset of the predictors in training the model, then specify the response variable and the predictor variables using formula.

  • if tbl does not contain the response variable, then specify a response variable using y. the length of the response variable and the number of rows in tbl must be equal.

for more information on the table data type, see the table documentation.

if your predictor data contains categorical variables, then fitrgp creates dummy variables. for details, see categoricalpredictors.

data types: table
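
as a quick sketch of the three table-based syntaxes above, assuming tbl contains hypothetical predictor variables x1 and x2 and a response variable y (the names here are illustrative only):

mdl1 = fitrgp(tbl,'y');         % responsevarname: all other variables are predictors
mdl2 = fitrgp(tbl,'y~x1');      % formula: train using only x1 as a predictor
mdl3 = fitrgp(tbl(:,1:2),yvec); % predictors in the table, response in a separate vector yvec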

response variable name, specified as the name of a variable in tbl. you must specify responsevarname as a character vector or string scalar. for example, if the response variable y is stored in tbl (as tbl.y), then specify it as 'y'. otherwise, the software treats all the columns of tbl, including y, as predictors when training the model.

data types: char | string

response and predictor variables to use in model training, specified as a character vector or string scalar in the form 'y~x1+x2+x3'. in this form, y represents the response variable; x1, x2, and x3 represent the predictor variables to use in training the model.

use a formula if you want to specify a subset of variables in tbl as predictors to use when training the model. if you specify a formula, then any variables that do not appear in formula are not used to train the model.

the variable names in the formula must be both variable names in tbl (tbl.Properties.VariableNames) and valid matlab® identifiers. you can verify the variable names in tbl by using the isvarname function. if the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function.

the formula does not indicate the form of the basisfunction.

example: 'petallength~petalwidth+species' identifies the variable petallength as the response variable, and petalwidth and species as the predictor variables.

data types: char | string
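
a minimal sketch of checking and repairing table variable names before writing a formula, using the two functions mentioned above:

vnames = tbl.Properties.VariableNames;
ok = cellfun(@isvarname,vnames);  % true where a name is a valid matlab identifier
tbl.Properties.VariableNames = matlab.lang.makeValidName(vnames);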

predictor data for the gpr model, specified as an n-by-d matrix. n is the number of observations (rows), and d is the number of predictors (columns).

the length of y and the number of rows of x must be equal.

to specify the names of the predictors in the order of their appearance in x, use the predictornames name-value pair argument.

data types: double

response data for the gpr model, specified as an n-by-1 vector. you can omit y if you provide the tbl training data that also includes y. in that case, use responsevarname to identify the response variable or use formula to identify the response and predictor variables.

data types: double

name-value arguments

specify optional pairs of arguments as name1=value1,...,namen=valuen, where name is the argument name and value is the corresponding value. name-value arguments must appear after other arguments, but the order of the pairs does not matter.

before r2021a, use commas to separate each name and value, and enclose name in quotes.

example: 'fitmethod','sr','basisfunction','linear','activesetmethod','sgma','predictmethod','fic' trains the gpr model using the subset of regressors approximation method for parameter estimation, a linear basis function, sparse greedy matrix approximation for active set selection, and the fully independent conditional approximation method for prediction.

note

you cannot use any cross-validation name-value argument together with the 'optimizehyperparameters' name-value argument. you can modify the cross-validation for 'optimizehyperparameters' only by using the 'hyperparameteroptimizationoptions' name-value argument.

fitting

method to estimate parameters of the gpr model, specified as one of the following.

fit method — description

  • 'none' — no estimation; fitrgp uses the initial parameter values as the known parameter values.

  • 'exact' — exact gaussian process regression. default if n ≤ 2000, where n is the number of observations.

  • 'sd' — subset of data points approximation. default if n > 2000, where n is the number of observations.

  • 'sr' — subset of regressors approximation.

  • 'fic' — fully independent conditional approximation.

example: 'fitmethod','fic'

explicit basis in the gpr model, specified as one of the following. if n is the number of observations, the basis function adds the term h*β to the model, where h is the basis matrix and β is a p-by-1 vector of basis coefficients.

explicit basis — basis matrix

  • 'none' — empty matrix.

  • 'constant' — $h = 1$, an n-by-1 vector of 1s, where n is the number of observations.

  • 'linear' — $h = [1, x]$.

  • 'purequadratic' — $h = [1, x, x^2]$, where

    $x^2 = \begin{bmatrix} x_{11}^2 & x_{12}^2 & \cdots & x_{1d}^2 \\ x_{21}^2 & x_{22}^2 & \cdots & x_{2d}^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1}^2 & x_{n2}^2 & \cdots & x_{nd}^2 \end{bmatrix}.$

  • function handle — a function handle, hfcn, that fitrgp calls as $h = hfcn(x)$, where x is an n-by-d matrix of predictors and h is an n-by-p matrix of basis functions.

example: 'basisfunction','purequadratic'

data types: char | string | function_handle
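
for illustration, a minimal sketch of the function handle form, with a hypothetical basis that augments the constant and linear terms with a sine feature:

hfcn = @(x) [ones(size(x,1),1) x sin(x)]; % returns an n-by-p basis matrix
gprmdl = fitrgp(x,y,'basisfunction',hfcn);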

initial value of the coefficients for the explicit basis, specified as a p-by-1 vector, where p is the number of columns in the basis matrix h.

the basis matrix depends on the choice of the explicit basis function as follows (also see basisfunction).

fitrgp uses the coefficient initial values as the known coefficient values only when fitmethod is 'none'.

data types: double

initial value for the noise standard deviation of the gaussian process model, specified as a positive scalar value.

fitrgp parameterizes the noise standard deviation as the sum of sigmalowerbound and exp(η), where η is an unconstrained value. therefore, sigma must be larger than sigmalowerbound by a small tolerance so that the function can initialize η to a finite value. otherwise, the function resets sigma to a compatible value.

the tolerance is 1e-3 when constantsigma is false (default) and 1e-6 otherwise. if the tolerance is not small enough relative to the scale of the response variable, you can scale up the response variable so that the tolerance value can be considered small for the response variable.

example: 'sigma',2

data types: double

constant value of sigma for the noise standard deviation of the gaussian process model, specified as a logical scalar. when constantsigma is true, fitrgp does not optimize the value of sigma, but instead takes the initial value as the value throughout its computations.

example: 'constantsigma',true

data types: logical
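
a minimal sketch combining the two options above, so that the noise standard deviation is held fixed instead of being estimated:

gprmdl = fitrgp(x,y,'sigma',0.5,'constantsigma',true); % noise standard deviation fixed at 0.5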

lower bound on the noise standard deviation (sigma), specified as a positive scalar value.

sigma must be larger than sigmalowerbound by a small tolerance. for details, see sigma.

example: 'sigmalowerbound',0.02

data types: double

categorical predictors list, specified as one of the values in this table.

value — description

  • vector of positive integers — each entry in the vector is an index value indicating that the corresponding predictor is categorical. the index values are between 1 and p, where p is the number of predictors used to train the model. if fitrgp uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. the categoricalpredictors values do not count the response variable, observation weights variable, or any other variables that the function does not use.

  • logical vector — a true entry means that the corresponding predictor is categorical. the length of the vector is p.

  • character matrix — each row of the matrix is the name of a predictor variable. the names must match the entries in predictornames. pad the names with extra blanks so each row of the character matrix has the same length.

  • string array or cell array of character vectors — each element in the array is the name of a predictor variable. the names must match the entries in predictornames.

  • "all" — all predictors are categorical.

by default, if the predictor data is in a table (tbl), fitrgp assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. if the predictor data is a matrix (x), fitrgp assumes that all predictors are continuous. to identify any other predictors as categorical predictors, specify them by using the categoricalpredictors name-value argument.

for the identified categorical predictors, fitrgp creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. for an unordered categorical variable, fitrgp creates one dummy variable for each level of the categorical variable. for an ordered categorical variable, fitrgp creates one less dummy variable than the number of categories. for details, see automatic creation of dummy variables.

example: 'categoricalpredictors','all'

data types: single | double | logical | char | string | cell
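
for instance, a minimal sketch that marks two columns of a matrix x as categorical (the column choice is illustrative only):

gprmdl = fitrgp(x,y,'categoricalpredictors',[2 5]); % columns 2 and 5 are treated as categorical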

indicator to standardize data, specified as a logical value.

if you set 'standardize',1, then the software centers and scales each column of the predictor data, by the column mean and standard deviation, respectively. the software does not standardize the data contained in the dummy variable columns that it generates for categorical predictors.

example: 'standardize',1

example: 'standardize',true

data types: logical

regularization standard deviation for sparse methods subset of regressors ('sr') and fully independent conditional ('fic'), specified as a positive scalar value.

example: 'regularization',0.2

data types: double

method for computing the log likelihood and gradient for parameter estimation using subset of regressors ('sr') and fully independent conditional ('fic') approximation methods, specified as one of the following.

  • 'qr' — use a qr factorization based approach. this option provides better accuracy.

  • 'v' — use a v-method-based approach. this option provides faster computation of log likelihood gradients.

example: 'computationmethod','v'

kernel (covariance) function

form of the covariance function, specified as one of the following.

function — description

  • 'exponential' — exponential kernel.

  • 'squaredexponential' — squared exponential kernel.

  • 'matern32' — matern kernel with parameter 3/2.

  • 'matern52' — matern kernel with parameter 5/2.

  • 'rationalquadratic' — rational quadratic kernel.

  • 'ardexponential' — exponential kernel with a separate length scale per predictor.

  • 'ardsquaredexponential' — squared exponential kernel with a separate length scale per predictor.

  • 'ardmatern32' — matern kernel with parameter 3/2 and a separate length scale per predictor.

  • 'ardmatern52' — matern kernel with parameter 5/2 and a separate length scale per predictor.

  • 'ardrationalquadratic' — rational quadratic kernel with a separate length scale per predictor.

  • function handle — a function handle that fitrgp can call like this:

    kmn = kfcn(xm,xn,theta)

    where xm is an m-by-d matrix, xn is an n-by-d matrix, and kmn is an m-by-n matrix of kernel products such that kmn(i,j) is the kernel product between xm(i,:) and xn(j,:). theta is the r-by-1 unconstrained parameter vector for kfcn.

for more information on the kernel functions, see kernel (covariance) function options.

example: 'kernelfunction','matern32'

data types: char | string | function_handle

initial values for the kernel parameters, specified as a vector. the size of the vector and the values depend on the form of the covariance function, specified by the kernelfunction name-value pair argument.

'kernelfunction' — 'kernelparameters'

  • 'exponential', 'squaredexponential', 'matern32', or 'matern52' — 2-by-1 vector phi, where phi(1) contains the length scale and phi(2) contains the signal standard deviation. the default initial value of the length scale parameter is the mean of the standard deviations of the predictors, and the signal standard deviation is the standard deviation of the responses divided by the square root of 2. that is,

    phi = [mean(std(x));std(y)/sqrt(2)]

  • 'rationalquadratic' — 3-by-1 vector phi, where phi(1) contains the length scale, phi(2) contains the scale-mixture parameter, and phi(3) contains the signal standard deviation. the default initial value of the length scale parameter is the mean of the standard deviations of the predictors, and the signal standard deviation is the standard deviation of the responses divided by the square root of 2. the default initial value of the scale-mixture parameter is 1. that is,

    phi = [mean(std(x));1;std(y)/sqrt(2)]

  • 'ardexponential', 'ardsquaredexponential', 'ardmatern32', or 'ardmatern52' — (d+1)-by-1 vector phi, where phi(i) contains the length scale for predictor i, and phi(d+1) contains the signal standard deviation. d is the number of predictor variables after dummy variables are created for categorical variables. for details about creating dummy variables, see categoricalpredictors. the default initial values of the length scale parameters are the standard deviations of the predictors, and the signal standard deviation is the standard deviation of the responses divided by the square root of 2. that is,

    phi = [std(x)';std(y)/sqrt(2)]

  • 'ardrationalquadratic' — (d+2)-by-1 vector phi, where phi(i) contains the length scale for predictor i, phi(d+1) contains the scale-mixture parameter, and phi(d+2) contains the signal standard deviation. the default initial values of the length scale parameters are the standard deviations of the predictors, and the signal standard deviation is the standard deviation of the responses divided by the square root of 2. the default initial value of the scale-mixture parameter is 1. that is,

    phi = [std(x)';1;std(y)/sqrt(2)]

  • function handle — r-by-1 vector as the initial value of the unconstrained parameter vector phi for the custom kernel function kfcn. when kernelfunction is a function handle, you must supply initial values for the kernel parameters.

for more information on the kernel functions, see kernel (covariance) function options.

example: 'kernelparameters',theta

data types: double

method for computing inter-point distances to evaluate built-in kernel functions, specified as either 'fast' or 'accurate'. fitrgp computes $(x-y)^2$ as $x^2 + y^2 - 2xy$ when you choose the fast option, and as $(x-y)^2$ when you choose the accurate option.

example: 'distancemethod','accurate'
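
a small numeric illustration of why the accurate formula can matter; the values are chosen only to show the cancellation in the fast formula:

a = 1e8; b = 1e8 + 1e-4;
accurate = (a - b)^2     % approximately 1.0000e-08
fast = a^2 + b^2 - 2*a*b % loses all precision at this scale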

active set selection

observations in the active set, specified as an m-by-1 vector of integers ranging from 1 to n (m ≤ n) or a logical vector of length n with at least one true element. n is the total number of observations in the training data.

fitrgp uses the observations indicated by activeset to train the gpr model. the active set cannot have duplicate elements.

if you supply activeset, then:

  • fitrgp does not use activesetsize or activesetmethod.

  • you cannot perform cross-validation on the model.

data types: double | logical
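
for example, a minimal sketch that supplies a fixed active set as a logical vector (assuming x has at least 1000 rows):

isactive = false(size(x,1),1);
isactive(1:1000) = true; % use the first 1000 observations as the active set
gprmdl = fitrgp(x,y,'fitmethod','sr','activeset',isactive);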

size of the active set for sparse methods ('sd', 'sr', 'fic'), specified as an integer m, 1 ≤ m ≤ n, where n is the number of observations.

default is min(1000,n) when fitmethod is 'sr' or 'fic', and min(2000,n) otherwise.

example: 'activesetsize',100

data types: double

active set selection method, specified as one of the following.

method — description

  • 'random' — random selection

  • 'sgma' — sparse greedy matrix approximation

  • 'entropy' — differential entropy-based selection

  • 'likelihood' — subset of regressors log likelihood-based selection

all active set selection methods (except 'random') require the storage of an n-by-m matrix, where m is the size of the active set and n is the number of observations.

example: 'activesetmethod','entropy'

random search set size per greedy inclusion for active set selection, specified as an integer value.

example: 'randomsearchsetsize',30

data types: double

relative tolerance for terminating active set selection, specified as a positive scalar value.

example: 'toleranceactiveset',0.0002

data types: double

number of repetitions for interleaved active set selection and parameter estimation when activesetmethod is not 'random', specified as an integer value.

example: 'numactivesetrepeats',5

data types: double

prediction

method used to make predictions from a gaussian process model given the parameters, specified as one of the following.

method — description

  • 'exact' — exact gaussian process regression method. default if n ≤ 10000.

  • 'bcd' — block coordinate descent. default if n > 10000.

  • 'sd' — subset of data points approximation.

  • 'sr' — subset of regressors approximation.

  • 'fic' — fully independent conditional approximation.

example: 'predictmethod','bcd'

block size for block coordinate descent method ('bcd'), specified as an integer in the range from 1 to n, where n is the number of observations.

example: 'blocksizebcd',1500

data types: double

number of greedy selections for block coordinate descent method ('bcd'), specified as an integer in the range from 1 to blocksizebcd.

example: 'numgreedybcd',150

data types: double

relative tolerance on gradient norm for terminating block coordinate descent method ('bcd') iterations, specified as a positive scalar.

example: 'tolerancebcd',0.002

data types: double

absolute tolerance on step size for terminating block coordinate descent method ('bcd') iterations, specified as a positive scalar.

example: 'steptolerancebcd',0.002

data types: double

maximum number of block coordinate descent method ('bcd') iterations, specified as an integer value.

example: 'iterationlimitbcd',10000

data types: double
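
a minimal sketch combining the bcd controls above, assuming a hypothetical large training set xbig, ybig with more than 10000 observations:

gprmdl = fitrgp(xbig,ybig,'predictmethod','bcd','blocksizebcd',5000, ...
    'numgreedybcd',100,'iterationlimitbcd',500);
ypred = resubPredict(gprmdl); % resubstitution predictions computed with bcd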

optimization

optimizer to use for parameter estimation, specified as one of the values in this table.

value — description

  • 'quasinewton' — dense, symmetric rank-1-based, quasi-newton approximation to the hessian

  • 'lbfgs' — lbfgs-based quasi-newton approximation to the hessian

  • 'fminsearch' — unconstrained nonlinear optimization using the simplex search method of lagarias et al. [5]

  • 'fminunc' — unconstrained nonlinear optimization (requires an optimization toolbox™ license)

  • 'fmincon' — constrained nonlinear optimization (requires an optimization toolbox license)

for more information on the optimizers, see algorithms.

example: 'optimizer','fmincon'

options for the optimizer you choose using the optimizer name-value pair argument, specified as a structure or object created by optimset, statset('fitrgp'), or optimoptions.

optimizer — create optimizer options using

  • 'fminsearch' — optimset (structure)

  • 'quasinewton' or 'lbfgs' — statset('fitrgp') (structure)

  • 'fminunc' or 'fmincon' — optimoptions (object)

the default options depend on the type of optimizer.

example: 'optimizeroptions',opt

initial step size, specified as a real positive scalar or 'auto'.

'initialstepsize' is the approximate maximum absolute value of the first optimization step when the optimizer is 'quasinewton' or 'lbfgs'. the initial step size can determine the initial hessian approximation during optimization.

by default, fitrgp does not use the initial step size to determine the initial hessian approximation. to use the initial step size, set a value for the 'initialstepsize' name-value pair argument, or specify 'initialstepsize','auto' to have fitrgp determine a value automatically. for more information on 'auto', see algorithms.

example: 'initialstepsize','auto'

cross-validation

indicator for cross-validation, specified as either 'off' or 'on'. if it is 'on', then fitrgp returns a gpr model cross-validated with 10 folds.

you can use one of the kfold, holdout, leaveout or cvpartition name-value pair arguments to change the default cross-validation settings. you can use only one of these name-value pairs at a time.

as an alternative, you can use the crossval method for your model.

example: 'crossval','on'

random partition for a stratified k-fold cross-validation, specified as a cvpartition object.

example: 'cvpartition',cvp uses the random partition defined by cvp.

if you specify cvpartition, then you cannot specify holdout, kfold, or leaveout.
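
for example, a minimal sketch that builds a 5-fold partition and passes it in:

cvp = cvpartition(numel(y),'kfold',5); % 5-fold partition over the observations
cvgprmdl = fitrgp(x,y,'cvpartition',cvp);
l = kfoldLoss(cvgprmdl);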

fraction of the data to use for testing in holdout validation, specified as a scalar value in the range from 0 to 1. if you specify 'holdout',p, then the software:
1. randomly reserves around p*100% of the data as validation data, and trains the model using the rest of the data.
2. stores the compact, trained model in cvgprmdl.Trained.

example: 'holdout', 0.3 uses 30% of the data for testing and 70% of the data for training.

if you specify holdout, then you cannot specify cvpartition, kfold, or leaveout.

data types: double

number of folds to use in cross-validated gpr model, specified as a positive integer value. kfold must be greater than 1. if you specify 'kfold',k then the software:
1. randomly partitions the data into k sets.
2. for each set, reserves the set as test data, and trains the model using the other k – 1 sets.
3. stores the k compact, trained models in the cells of a k-by-1 cell array in cvgprmdl.Trained.

example: 'kfold',5 uses 5 folds in cross-validation. that is, for each fold, uses that fold as test data, and trains the model on the remaining 4 folds.

if you specify kfold, then you cannot specify cvpartition, holdout, or leaveout.

data types: double

indicator for leave-one-out cross-validation, specified as either 'off' or 'on'.

if you specify 'leaveout','on', then, for each of the n observations, the software:
1. reserves the observation as test data, and trains the model using the other n – 1 observations.
2. stores the compact, trained model in a cell in the n-by-1 cell array cvgprmdl.Trained.

example: 'leaveout','on'

if you specify leaveout, then you cannot specify cvpartition, holdout, or kfold.

hyperparameter optimization

parameters to optimize, specified as one of the following:

  • 'none' — do not optimize.

  • 'auto' — use {'sigma'}.

  • 'all' — optimize all eligible parameters, equivalent to {'basisfunction','kernelfunction','kernelscale','sigma','standardize'}.

  • string array or cell array of eligible parameter names.

  • vector of optimizablevariable objects, typically the output of hyperparameters.

the optimization attempts to minimize the cross-validation loss (error) for fitrgp by varying the parameters. to control the cross-validation type and other aspects of the optimization, use the hyperparameteroptimizationoptions name-value pair.

note

the values of 'optimizehyperparameters' override any values you specify using other name-value arguments. for example, setting 'optimizehyperparameters' to 'auto' causes fitrgp to optimize hyperparameters corresponding to the 'auto' option and to ignore any specified values for the hyperparameters.

the eligible parameters for fitrgp are:

  • basisfunction — fitrgp searches among 'constant', 'none', 'linear', and 'purequadratic'.

  • kernelfunction — fitrgp searches among 'ardexponential', 'ardmatern32', 'ardmatern52', 'ardrationalquadratic', 'ardsquaredexponential', 'exponential', 'matern32', 'matern52', 'rationalquadratic', and 'squaredexponential'.

  • kernelscale — fitrgp uses the kernelparameters argument to specify the value of the kernel scale parameter, which is held constant during fitting. in this case, all input dimensions are constrained to have the same kernelscale value. fitrgp searches among real values in the range [1e-3*maxpredictorrange,maxpredictorrange], where

    maxpredictorrange = max(max(x) - min(x)).

    kernelscale cannot be optimized for any of the ard kernels.

  • sigma — fitrgp searches among real values in the range [1e-4, max(1e-3,10*responsestd)], where

    responsestd = std(y).

    internally, fitrgp sets the constantsigma name-value pair to true so the value of sigma is constant during the fitting.

  • standardize — fitrgp searches among true and false.

set nondefault parameters by passing a vector of optimizablevariable objects that have nondefault values. for example,

load fisheriris
params = hyperparameters('fitrgp',meas,species);
params(1).Range = [1e-4,1e6];

pass params as the value of optimizehyperparameters.

by default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. for the optimization and plots, the objective function is log(1 + cross-validation loss). to control the iterative display, set the verbose field of the 'hyperparameteroptimizationoptions' name-value argument. to control the plots, set the showplots field of the 'hyperparameteroptimizationoptions' name-value argument.

for an example, see optimize gpr regression.

example: 'auto'

options for optimization, specified as a structure. this argument modifies the effect of the optimizehyperparameters name-value argument. all fields in the structure are optional.

optimizer

  • 'bayesopt' — use bayesian optimization. internally, this setting calls bayesopt.

  • 'gridsearch' — use grid search with numgriddivisions values per dimension.

  • 'randomsearch' — search at random among maxobjectiveevaluations points.

'gridsearch' searches in a random order, using uniform sampling without replacement from the grid. after optimization, you can get a table in grid order by using the command sortrows(mdl.hyperparameteroptimizationresults).

default: 'bayesopt'

acquisitionfunctionname

  • 'expected-improvement-per-second-plus'

  • 'expected-improvement'

  • 'expected-improvement-plus'

  • 'expected-improvement-per-second'

  • 'lower-confidence-bound'

  • 'probability-of-improvement'

acquisition functions whose names include per-second do not yield reproducible results because the optimization depends on the runtime of the objective function. acquisition functions whose names include plus modify their behavior when they are overexploiting an area. for more details, see acquisition function types.

default: 'expected-improvement-per-second-plus'

maxobjectiveevaluations

maximum number of objective function evaluations.

default: 30 for 'bayesopt' and 'randomsearch', and the entire grid for 'gridsearch'

maxtime

time limit, specified as a positive real scalar. the time limit is in seconds, as measured by tic and toc. the run time can exceed maxtime because maxtime does not interrupt function evaluations.

default: inf

numgriddivisions

for 'gridsearch', the number of values in each dimension. the value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. this field is ignored for categorical variables.

default: 10

showplots

logical value indicating whether to show plots. if true, this field plots the best observed objective function value against the iteration number. if you use bayesian optimization (optimizer is 'bayesopt'), then this field also plots the best estimated objective function value. the best observed and best estimated objective function values correspond to the values in the bestsofar (observed) and bestsofar (estim.) columns of the iterative display, respectively. you can find these values in the objectiveminimumtrace and estimatedobjectiveminimumtrace properties of mdl.hyperparameteroptimizationresults. if the problem includes one or two optimization parameters for bayesian optimization, then showplots also plots a model of the objective function against the parameters.

default: true

saveintermediateresults

logical value indicating whether to save results when optimizer is 'bayesopt'. if true, this field overwrites a workspace variable named 'bayesoptresults' at each iteration. the variable is a bayesianoptimization object.

default: false

verbose

display at the command line:

  • 0 — no iterative display

  • 1 — iterative display

  • 2 — iterative display with extra information

for details, see the bayesopt verbose name-value argument.

default: 1

useparallel

logical value indicating whether to run bayesian optimization in parallel, which requires parallel computing toolbox™. due to the nonreproducibility of parallel timing, parallel bayesian optimization does not necessarily yield reproducible results. for details, see parallel bayesian optimization.

default: false

repartition

logical value indicating whether to repartition the cross-validation at every iteration. if this field is false, the optimizer uses a single partition for the optimization.

the setting true usually gives the most robust results because it takes partitioning noise into account. however, for good results, true requires at least twice as many function evaluations.

default: false

use no more than one of the following three cross-validation fields.

cvpartition

a cvpartition object, as created by cvpartition.

holdout

a scalar in the range (0,1) representing the holdout fraction.

kfold

an integer greater than 1.

default: 'kfold',5 if you do not specify a cross-validation field
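for instance, a sketch that combines several of these fields (x and y assumed; the field values are illustrative choices, not defaults):

rng(0) % make the random ordering used by 'gridsearch' reproducible
opts = struct('optimizer','gridsearch','numgriddivisions',15, ...
    'kfold',5,'showplots',false);
gprmdl = fitrgp(x,y,'optimizehyperparameters','auto', ...
    'hyperparameteroptimizationoptions',opts);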

example: 'hyperparameteroptimizationoptions',struct('maxobjectiveevaluations',60)

data types: struct

other

predictor variable names, specified as a string array of unique names or a cell array of unique character vectors. the functionality of 'predictornames' depends on the way you supply the training data.

  • if you supply x and y, then you can use 'predictornames' to give the predictor variables in x names.

    • the order of the names in predictornames must correspond to the column order of x. that is, predictornames{1} is the name of x(:,1), predictornames{2} is the name of x(:,2), and so on. also, size(x,2) and numel(predictornames) must be equal.

    • by default, predictornames is {'x1','x2',...}.

  • if you supply tbl, then you can use 'predictornames' to choose which predictor variables to use in training. that is, fitrgp uses only the predictor variables in predictornames and the response variable during training.

    • predictornames must be a subset of tbl.properties.variablenames and cannot include the name of the response variable.

    • by default, predictornames contains the names of all predictor variables.

    • it is good practice to specify the predictors for training using either 'predictornames' or formula, but not both.

example: 'predictornames',{'petallength','petalwidth'}

data types: string | cell
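for instance, a minimal sketch for the x and y syntax (x is assumed to be an n-by-2 numeric matrix and y a numeric vector; the names are illustrative):

gprmdl = fitrgp(x,y,'predictornames',{'petallength','petalwidth'});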

response variable name, specified as a character vector or string scalar.

  • if you supply y, then you can use responsename to specify a name for the response variable.

  • if you supply responsevarname or formula, then you cannot use responsename.

example: "responsename","response"

data types: char | string

verbosity level, specified as one of the following.

  • 0 — fitrgp suppresses diagnostic messages related to active set selection and block coordinate descent but displays the messages related to parameter estimation, depending on the value of 'display' in optimizeroptions.

  • 1 — fitrgp displays the iterative diagnostic messages related to parameter estimation, active set selection, and block coordinate descent.

example: 'verbose',1

cache size in megabytes (mb), specified as a positive scalar. cache size is the extra memory that is available in addition to that required for fitting and active set selection. fitrgp uses cachesize to:

  • decide whether interpoint distances should be cached when estimating parameters.

  • decide how matrix-vector products should be computed for the block coordinate descent method and for making predictions.

example: 'cachesize',2000

data types: double

output arguments

gaussian process regression model, returned as a regressiongp object or a regressionpartitionedgp object.

  • if you cross-validate, that is, if you use one of the 'crossval', 'kfold', 'holdout', 'leaveout', or 'cvpartition' name-value arguments, then gprmdl is a regressionpartitionedgp object. you can use kfoldpredict to predict responses for observations that fitrgp holds out during training. kfoldpredict predicts a response for every observation by using the model trained without that observation. you cannot compute the prediction intervals for a cross-validated model.

  • if you do not cross-validate, then gprmdl is a regressiongp object. you can use predict to predict responses for new observations, and resubpredict to predict responses for training observations. you can also compute the prediction intervals by using predict and resubpredict, as in the sketch below.
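for example (a sketch assuming training data x and y, and new predictor data xnew with the same columns as x):

% without cross-validation: predictions, standard deviations, and
% prediction intervals for new and for training observations.
gprmdl = fitrgp(x,y);
[ypred,ysd,yint] = predict(gprmdl,xnew);
[yfit,yfitsd,yfitint] = resubpredict(gprmdl);

% with cross-validation: out-of-fold predictions only (no intervals).
cvmdl = fitrgp(x,y,'kfold',5);
yhat = kfoldpredict(cvmdl);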

more about

active set selection and parameter estimation

for subset of data, subset of regressors, or fully independent conditional approximation fitting methods (fitmethod equal to 'sd', 'sr', or 'fic'), if you do not provide the active set (or inducing input set), fitrgp selects the active set and computes the parameter estimates in a series of iterations.

in the first iteration, the software uses the initial parameter values in vector η0 = [β0,σ0,θ0] to select an active set a1. it maximizes the gpr marginal log likelihood or its approximation using η0 as the initial values and a1 to compute the new parameter estimates η1. next, it computes the new log likelihood l1 using η1 and a1.

in the second iteration, the software selects the active set a2 using the parameter values in η1. then, using η1 as the initial values and a2, it maximizes the gpr marginal log likelihood or its approximation and estimates the new parameter values η2. then, using η2 and a2, it computes the new log likelihood value l2.

the following table summarizes the iterations and what is computed at each iteration.

iteration number    active set    parameter vector    log likelihood
1                   a1            η1                  l1
2                   a2            η2                  l2
3                   a3            η3                  l3

the software iterates similarly for a specified number of repetitions. you can specify the number of repetitions for active set selection using the numactivesetrepeats name-value pair argument, as in the sketch below.
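for example, this sketch repeats active set selection and parameter estimation three times (x and y assumed; the method choices are illustrative):

gprmdl = fitrgp(x,y,'fitmethod','sr','predictmethod','fic', ...
    'activesetmethod','random','numactivesetrepeats',3);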

tips

  • fitrgp accepts any combination of fitting, prediction, and active set selection methods. in some cases, it might not be possible to compute the standard deviations of the predicted responses, and hence the prediction intervals; see predict. also, in some cases, using the exact method can be expensive because of the size of the training data.

  • the predictornames property stores one element for each of the original predictor variable names. for example, if there are three predictors, one of which is a categorical variable with three levels, predictornames is a 1-by-3 cell array of character vectors.

  • the expandedpredictornames property stores one element for each of the predictor variables, including the dummy variables. for example, if there are three predictors, one of which is a categorical variable with three levels, then expandedpredictornames is a 1-by-5 cell array of character vectors.

  • similarly, the beta property stores one beta coefficient for each predictor, including the dummy variables.

  • the x property stores the training data as originally input. it does not include the dummy variables.

  • the default approach to initializing the hessian approximation in fitrgp can be slow when you have a gpr model with many kernel parameters, such as when using an ard kernel with many predictors. in this case, consider setting the 'initialstepsize' name-value pair argument to 'auto' or to a suitable value.

    you can set 'verbose',1 to display iterative diagnostic messages, and begin training a gpr model using an lbfgs or quasi-newton optimizer with the default fitrgp optimization. if the iterative diagnostic messages do not appear after a few seconds, it is possible that initialization of the hessian approximation is taking too long. in this case, consider restarting training and using the initial step size to speed up optimization (see the first sketch after these tips).

  • after training a model, you can generate c/c++ code that predicts responses for new data. generating c/c++ code requires matlab coder™. for details, see introduction to code generation.
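the following sketches illustrate the last two tips (x, y, and xnew are assumed to exist; file names are illustrative, and function names are shown with their required capitalization).

a model with many ard kernel parameters, trained with diagnostic display and an automatic initial step size:

gprmdl = fitrgp(x,y,'kernelfunction','ardsquaredexponential', ...
    'optimizer','quasinewton','verbose',1,'initialstepsize','auto');

saving the trained model for code generation:

saveLearnerForCoder(gprmdl,'gprmdlfile');
% inside an entry-point function intended for code generation:
% mdl = loadLearnerForCoder('gprmdlfile');
% ypred = predict(mdl,xnew);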

algorithms

  • fitting a gpr model involves estimating the following model parameters from the data:

    • covariance function k(xi,xj|θ) parameterized in terms of kernel parameters in vector θ (see kernel (covariance) function options)

    • noise variance, σ2

    • coefficient vector of fixed basis functions, β

    the value of the 'kernelparameters' name-value pair argument is a vector that consists of initial values for the signal standard deviation σf and the characteristic length scales σl. the fitrgp function uses these values to determine the kernel parameters. similarly, the 'sigma' name-value pair argument contains the initial value for the noise standard deviation σ.

  • during optimization, fitrgp creates a vector of unconstrained initial parameter values η0 by using the initial values for the noise standard deviation and the kernel parameters.

  • fitrgp analytically determines the explicit basis coefficients β, specified by the 'beta' name-value pair argument, from estimated values of θ and σ2. therefore, β does not appear in the η0 vector when fitrgp initializes numerical optimization.

    note

    if you specify 'fitmethod','none' (that is, no estimation of parameters for the gpr model), fitrgp uses the value of the 'beta' name-value pair argument and other initial parameter values as the known gpr parameter values (see beta). in all other cases, fitrgp optimizes the value of the 'beta' argument analytically from the objective function.

  • the quasi-newton optimizer uses a trust-region method with a dense, symmetric rank-1-based (sr1), quasi-newton approximation to the hessian, while the lbfgs optimizer uses a standard line-search method with a limited-memory broyden-fletcher-goldfarb-shanno (lbfgs) quasi-newton approximation to the hessian. see nocedal and wright [6].

  • if you set the 'initialstepsize' name-value pair argument to 'auto', fitrgp determines the initial step size by using ‖s0‖∞ = 0.5‖η0‖∞ + 0.1.

    s0 is the initial step vector, and η0 is the vector of unconstrained initial parameter values. (a worked value appears in the sketch after this list.)

  • during optimization, fitrgp uses the initial step size, s0, as follows:

    if you use 'optimizer','quasinewton' with the initial step size, then the initial hessian approximation is (‖g0‖∞/‖s0‖∞)i.

    if you use 'optimizer','lbfgs' with the initial step size, then the initial inverse-hessian approximation is (‖s0‖∞/‖g0‖∞)i.

    g0 is the initial gradient vector, and i is the identity matrix.
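as a concrete illustration of these initial values (a sketch; x, y, and the numeric choices are assumptions, not defaults):

% x is an n-by-d numeric matrix and y an n-by-1 vector.
d = size(x,2);
theta0 = [ones(d,1); std(y)]; % ard kernel: d length scales, then sigma_f
gprmdl = fitrgp(x,y,'kernelfunction','ardsquaredexponential', ...
    'kernelparameters',theta0,'sigma',0.5*std(y), ...
    'initialstepsize','auto');
% for illustration of the step size formula: if eta0 = [0.5 -2 1],
% then norm(eta0,inf) = 2, so norm(s0,inf) = 0.5*2 + 0.1 = 1.1.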

references

[1] nash, w.j., t. l. sellers, s. r. talbot, a. j. cawthorn, and w. b. ford. "the population biology of abalone (haliotis species) in tasmania. i. blacklip abalone (h. rubra) from the north coast and islands of bass strait." sea fisheries division, technical report no. 48, 1994.

[2] waugh, s. "extending and benchmarking cascade-correlation: extensions to the cascade-correlation architecture and benchmarking of feed-forward supervised artificial neural networks." university of tasmania department of computer science thesis, 1995.

[3] lichman, m. uci machine learning repository, irvine, ca: university of california, school of information and computer science, 2013. http://archive.ics.uci.edu/ml.

[4] rasmussen, c. e. and c. k. i. williams. gaussian processes for machine learning. mit press. cambridge, massachusetts, 2006.

[5] lagarias, j. c., j. a. reeds, m. h. wright, and p. e. wright. "convergence properties of the nelder-mead simplex method in low dimensions." siam journal on optimization. vol. 9, number 1, 1998, pp. 112–147.

[6] nocedal, j. and s. j. wright. numerical optimization, second edition. springer series in operations research, springer verlag, 2006.


version history

introduced in r2015b