nonlinear model predictive controller

description

a nonlinear model predictive controller computes optimal control moves across the prediction horizon using a nonlinear prediction model, a nonlinear cost function, and nonlinear constraints. for more information on nonlinear mpc, see .

creation

description

example

nlobj = nlmpc(nx,ny,nu) creates an nlmpc object whose prediction model has nx states, ny outputs, and nu inputs, where all inputs are manipulated variables. use this syntax if your model has no measured or unmeasured disturbance inputs.

nlobj = nlmpc(nx,ny,'mv',mvindex,'md',mdindex) creates an nlmpc object whose prediction model has measured disturbance inputs. specify the input indices for the manipulated variables, mvindex, and measured disturbances, mdindex.

nlobj = nlmpc(nx,ny,'mv',mvindex,'ud',udindex) creates an nlmpc object whose prediction model has unmeasured disturbance inputs. specify the input indices for the manipulated variables and unmeasured disturbances, udindex.

example

nlobj = nlmpc(nx,ny,'mv',mvindex,'md',mdindex,'ud',udindex) creates an nlmpc object whose prediction model has both measured and unmeasured disturbance inputs. specify the input indices for the manipulated variables, measured disturbances, and unmeasured disturbances.
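
for instance, a sketch for a plant with four states, two outputs, and five inputs, where inputs 1 and 3 are manipulated variables, input 2 is a measured disturbance, and inputs 4 and 5 are unmeasured disturbances (all values illustrative):

% sketch: 4 states, 2 outputs, 5 inputs split into mv, md, and ud channels
nx = 4; ny = 2;
nlobj = nlmpc(nx,ny,'MV',[1 3],'MD',2,'UD',[4 5]);
nlobj.Dimensions   % the index vectors are stored in the read-only dimensions property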

input arguments

number of prediction model states, specified as a positive integer. this value is stored in the dimensions.numberofstates controller read-only property. you cannot change the number of states after creating the controller object.

example: 6

number of prediction model outputs, specified as a positive integer. this value is stored in the dimensions.numberofoutputs controller read-only property. you cannot change the number of outputs after creating the controller object.

example: 2

number of prediction model inputs, which are all set to be manipulated variables, specified as a positive integer. this value is stored in the dimensions.numberofinputs controller read-only property. you cannot change the number of manipulated variables after creating the controller object.

example: 4

manipulated variable indices, specified as a vector of positive integers. this value is stored in the dimensions.mvindex controller read-only property. you cannot change these indices after creating the controller object.

the combined set of indices from mvindex, mdindex, and udindex must contain all integers from 1 through nu, where nu is the number of prediction model inputs.

example: [1 3]

measured disturbance indices, specified as a vector of positive integers. this value is stored in the dimensions.mdindex controller read-only property. you cannot change these indices after creating the controller object.

the combined set of indices from mvindex, mdindex, and udindex must contain all integers from 1 through nu, where nu is the number of prediction model inputs.

example: 2

unmeasured disturbance indices, specified as a vector of positive integers. this value is stored in the dimensions.udindex controller read-only property. you cannot change these indices after creating the controller object.

the combined set of indices from mvindex, mdindex, and udindex must contain all integers from 1 through nu, where nu is the number of prediction model inputs.

example: 4

properties

prediction model sample time, specified as a positive finite scalar. the controller uses a discrete-time model with a sample time of ts for prediction. if you specify a continuous-time prediction model (model.iscontinuoustime is true), then the controller discretizes the model using the built-in implicit trapezoidal rule with a sample time of ts.

example: 0.1

prediction horizon steps, specified as a positive integer. the product of predictionhorizon and ts is the prediction time, that is, how far the controller looks into the future.

example: 15

control horizon, specified as one of the following:

  • positive integer, m, between 1 and p, inclusive, where p is equal to predictionhorizon. in this case, the controller computes m free control moves occurring at times k through k+m–1, and holds the controller output constant for the remaining prediction horizon steps from k+m through k+p–1. here, k is the current control interval.

  • vector of positive integers [m1, m2, …], specifying the lengths of blocking intervals. in this case, the controller computes m blocks of free moves, where m is the number of blocking intervals. the first free move applies from time k through k+m1–1, the second free move applies from time k+m1 through k+m1+m2–1, and so on. using block moves can improve the robustness of your controller. the sum of the values in controlhorizon should equal the prediction horizon p. if you specify a vector whose sum is:

    • less than the prediction horizon, then the controller adds a blocking interval. the length of this interval is such that the sum of the interval lengths is p. for example, if p=10 and you specify a control horizon of controlhorizon=[1 2 3], then the controller uses four intervals with lengths [1 2 3 4].

    • greater than the prediction horizon, then the intervals are truncated until the sum of the interval lengths is equal to p. for example, if p=10 and you specify a control horizon of controlhorizon= [1 2 3 6 7], then the controller uses four intervals with lengths [1 2 3 4].

piecewise constant blocking moves are often too restrictive for optimal path planning applications. to produce a less-restrictive, better-conditioned nonlinear programming problem, you can specify piecewise linear manipulated variable blocking intervals. to do so, set the optimization.mvinterpolationorder property of your nlmpc controller object to 1.

for more information on how manipulated variable blocking works with different interpolation methods, see .

example: 3
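
as an illustrative sketch (horizon values are arbitrary), the following uses three blocking intervals whose lengths sum to the prediction horizon and selects piecewise linear interpolation for the blocked moves:

nlobj = nlmpc(3,1,1);                          % example controller: 3 states, 1 output, 1 mv
nlobj.Ts = 0.1;
nlobj.PredictionHorizon = 10;                  % p = 10
nlobj.ControlHorizon = [2 3 5];                % three blocking intervals, 2+3+5 = p
nlobj.Optimization.MVInterpolationOrder = 1;   % piecewise linear mv blocking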

this property is read-only.

prediction model dimensional information, specified when you create the controller and stored as a structure with the following fields.

number of states in the prediction model, specified as a positive integer. this value corresponds to nx.

example: 6

number of outputs in the prediction model, specified as a positive integer. this value corresponds to ny.

example: 1

number of inputs in the prediction model, specified as a positive integer. this value corresponds to either nu or the sum of the lengths of mvindex, mdindex, and udindex.

example: 3

manipulated variable indices for the prediction model, specified as a vector of positive integers. this value corresponds to mvindex.

example: [1 2]

measured disturbance indices for the prediction model, specified as a vector of positive integers. this value corresponds to mdindex.

example: 4

unmeasured disturbance indices for the prediction model, specified as a vector of positive integers. this value corresponds to udindex.

example: 3

prediction model, specified as a structure with the following fields.

state function, specified as a string, character vector, or function handle. for a continuous-time prediction model, statefcn is the state derivative function. for a discrete-time prediction model, statefcn is the state update function.

if your state function is continuous-time, the controller automatically discretizes the model using the implicit trapezoidal rule. this method can handle moderately stiff models, and its prediction accuracy depends on the controller sample time ts; that is, a large sample time leads to inaccurate prediction.

if the default discretization method does not provide satisfactory prediction for your application, you can specify your own discrete-time prediction model that uses a different method, such as the multistep forward euler rule.
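
for example, a discrete-time state update function can wrap a continuous-time model with multistep forward euler integration, much like the pendulumdt0 file used in the examples below. in this sketch, mystatefcnct is a hypothetical continuous-time state derivative function and ts is passed as an optional parameter:

function xk1 = myStateFcnDT(xk, u, Ts)
% sketch: discretize a continuous-time model xdot = myStateFcnCT(x,u)
% with M forward Euler steps per controller sample time Ts
M  = 10;            % number of integration steps (tuning choice)
dt = Ts/M;
xk1 = xk;
for i = 1:M
    xk1 = xk1 + dt*myStateFcnCT(xk1, u);   % myStateFcnCT is a hypothetical model function
end
end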

you can specify your state function in one of the following ways:

  • name of a function in the current working folder or on the matlab® path, specified as a string or character vector

    model.statefcn = "mystatefunction";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    model.statefcn = @mystatefunction;

    for more information on local functions, see .

  • anonymous function

    model.statefcn = @(x,u,params) mystatefunction(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

for more information, see .

example: "@transfcn"

output function, specified as a string, character vector, or function handle. if the number of states and outputs of the prediction model are the same, you can omit outputfcn, which implies that all states are measurable; that is, each output corresponds to one state.

note

your output function cannot have direct feedthrough from any manipulated variable to any output at any time.

you can specify your output function in one of the following ways:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    model.outputfcn = "myoutputfunction";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    model.outputfcn = @myoutputfunction;

    for more information on local functions, see .

  • anonymous function

    model.outputfcn = @(x,u,params) myoutputfunction(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

for more information, see .

example: "@outfcn"

option to indicate prediction model time domain, specified as one of the following:

  • true — continuous-time prediction model. in this case, the controller automatically discretizes the model during prediction using ts.

  • false — discrete-time prediction model. in this case, ts is the sample time of the model.

note

iscontinuoustime must be consistent with the functions specified in model.statefcn and model.outputfcn.

if iscontinuoustime is true, statefcn must return the derivative of the state with respect to time, at the current time. otherwise statefcn must return the state at the next control interval.

example: true

number of optional model parameters used by the prediction model, custom cost function, custom constraint functions, and passivity functions, specified as a nonnegative integer. the number of parameters includes all the parameters used by these functions. for example, if the state function uses only parameter p1, the constraint functions use only parameter p2, and the cost function uses only parameter p3, then numberofparameters is 3.

example: 1
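
a configuration sketch for that three-parameter case follows; mystatefcn and mycostfcn are hypothetical functions, and the parameter values are illustrative:

nlobj = nlmpc(2,2,2);                            % example controller
nlobj.Model.StateFcn = "myStateFcn";             % hypothetical function using p1
nlobj.Optimization.CustomCostFcn = "myCostFcn";  % hypothetical function using p2 and p3
nlobj.Model.NumberOfParameters = 3;              % total number of optional parameters

p1 = 0.5; p2 = 2; p3 = [1 0];                    % illustrative parameter values
validateFcns(nlobj, zeros(2,1), zeros(2,1), [], {p1,p2,p3});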

state information, bounds, and scale factors, specified as a structure array with nx elements, where nx is the number of states. each structure element has the following fields.

state lower bound, specified as a scalar or vector. by default, this lower bound is -inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

state bounds are always hard constraints.

example: [-20 -18 -15]

state upper bound, specified as a scalar or vector. by default, this upper bound is inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

state bounds are always hard constraints.

example: [20 15]

state name, specified as a string or character vector. the default state name is "x#", where # is its state index.

example: "speed"

state units, specified as a string or character vector.

example: "m/s"

state scale factor, specified as a positive finite scalar. in general, use the operating range of the state. specifying the proper scale factor can improve numerical conditioning for optimization.

example: 10
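
for example (illustrative values), you can bound and scale individual states as follows; remember that state bounds are hard constraints:

nlobj = nlmpc(3,3,3);                 % example controller with 3 states
nlobj.States(1).Min = 0;              % first state must stay nonnegative
nlobj.States(1).Max = 100;
nlobj.States(2).Min = [-20 -18 -15];  % bound tightened over the first three steps
nlobj.States(3).ScaleFactor = 10;     % roughly the operating range of the state
nlobj.States(3).Name = "speed";
nlobj.States(3).Units = "m/s";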

output variable (ov) information, bounds, and scale factors, specified as a structure array with ny elements, where ny is the number of output variables. to access this property, you can use the alias ov instead of outputvariables.

each structure element has the following fields.

ov lower bound, specified as a scalar or vector. by default, this lower bound is -inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [-10 -8]

ov upper bound, specified as a scalar or vector. by default, this upper bound is inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [12 10 8]

ov lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, ov lower bounds are soft constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.

example: [2 1 0.5]

ov upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, ov upper bounds are soft constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.

example: [5 2 1]

ov name, specified as a string or character vector. the default ov name is "y#", where # is its output index.

example: "attack angle"

ov units, specified as a string or character vector.

example: "degrees"

ov scale factor, specified as a positive finite scalar. in general, use the operating range of the output variable. specifying the proper scale factor can improve numerical conditioning for optimization.

example: 90
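
for example (illustrative values), the following keeps the lower bound of the first output hard by setting its ecr to zero, softens its upper bound, and scales the second output:

nlobj = nlmpc(2,2,2);                      % example controller with 2 outputs
nlobj.OV(1).Min = 0;                       % lower bound
nlobj.OV(1).MinECR = 0;                    % ecr = 0 makes the lower bound hard
nlobj.OV(1).Max = 10;
nlobj.OV(1).MaxECR = 5;                    % larger ecr: softer upper bound
nlobj.OV(2).ScaleFactor = 90;              % roughly the operating range of the output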

manipulated variable (mv) information, bounds, and scale factors, specified as a structure array with nmv elements, where nmv is the number of manipulated variables. to access this property, you can use the alias mv instead of manipulatedvariables.

each structure element has the following fields.

mv lower bound, specified as a scalar or vector. by default, this lower bound is -inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [-1.1 -1]

mv upper bound, specified as a scalar or vector. by default, this upper bound is inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [1.2 1]

mv lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative scalar or vector. by default, mv lower bounds are hard constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.

example: [0.1 0]

mv upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative scalar or vector. by default, mv upper bounds are hard constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.

example: [0.5 0.2]

mv rate of change lower bound, specified as a nonpositive scalar or vector. the mv rate of change is defined as mv(k) - mv(k–1), where k is the current time. by default, this lower bound is -inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [-50 -20]

mv rate of change upper bound, specified as a nonnegative scalar or vector. the mv rate of change is defined as mv(k) - mv(k–1), where k is the current time. by default, this upper bound is inf.

to use the same bound across the prediction horizon, specify a scalar value.

to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

example: [50 20]

mv rate of change lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, mv rate of change lower bounds are hard constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr values are used for the remaining steps of the prediction horizon.

example: [0.1 0]

mv rate of change upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, mv rate of change upper bounds are hard constraints.

to use the same ecr value across the prediction horizon, specify a scalar value.

to vary the ecr values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr values are used for the remaining steps of the prediction horizon.

example: [1 0.5 0.2]

mv name, specified as a string or character vector. the default mv name is "u#", where # is its input index.

example: "rudder angle"

mv units, specified as a string or character vector.

example: "degrees"

mv scale factor, specified as a positive finite scalar. in general, use the operating range of the manipulated variable. specifying the proper scale factor can improve numerical conditioning for optimization.

example: 60
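
for example (illustrative values), the following constrains a manipulated variable and its rate of change, slightly softens the rate bounds, and sets a scale factor:

nlobj = nlmpc(2,1,1);                      % example controller with 1 mv
nlobj.MV(1).Min = -100;                    % hard amplitude bounds (default ecr = 0)
nlobj.MV(1).Max = 100;
nlobj.MV(1).RateMin = -20;                 % limit how fast the mv can move per step
nlobj.MV(1).RateMax = 20;
nlobj.MV(1).RateMinECR = 0.1;              % soften the rate bounds slightly
nlobj.MV(1).RateMaxECR = 0.1;
nlobj.MV(1).ScaleFactor = 60;              % roughly the mv operating range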

measured disturbance (md) information and scale factors, specified as a structure array with nmd elements, where nmd is the number of measured disturbances. if your model does not have measured disturbances, then measureddisturbances is []. to access this property, you can use the alias md instead of measureddisturbances.

each structure element has the following fields.

md name, specified as a string or character vector. the default md name is "u#", where # is its input index.

example: "wind speed"

md units, specified as a string or character vector.

example: "m/s"

md scale factor, specified as a positive finite scalar. in general, use the operating range of the disturbance. specifying the proper scale factor can improve numerical conditioning for optimization.

example: 10

standard cost function tuning weights, specified as a structure. the controller applies these weights to the scaled variables. therefore, the tuning weights are dimensionless values.

note

if you define a custom cost function using optimization.customcostfcn and set optimization.replacestandardcost to true, then the controller ignores the standard cost function tuning weights in weights.

weights has the following fields.

manipulated variable tuning weights, which penalize deviations from mv targets, specified as a row vector or array of nonnegative values. the default weight for all manipulated variables is 0.

to use the same weights across the prediction horizon, specify a row vector of length nmv, where nmv is the number of manipulated variables.

to vary the tuning weights over the prediction horizon from time k to time k+p-1, specify an array with nmv columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the manipulated variable tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

to specify mv targets at run time, in simulink®, pass the target values to the block. in matlab, pass the target values to a simulation function (such as , using the mvtarget property of an object).

example: [0.1 0.2]

manipulated variable rate tuning weights, which penalize large changes in control moves, specified as a row vector or array of nonnegative values. the default weight for all manipulated variable rates is 0.1.

to use the same weights across the prediction horizon, specify a row vector of length nmv, where nmv is the number of manipulated variables.

to vary the tuning weights over the prediction horizon from time k to time k+p-1, specify an array with nmv columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the manipulated variable rate tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

example: [0.1 0.1]

output variable tuning weights, which penalize deviation from output references, specified as a row vector or array of nonnegative values. the default weight for all output variables is 1.

to use the same weights across the prediction horizon, specify a row vector of length ny, where ny is the number of output variables.

to vary the tuning weights over the prediction horizon from time k+1 to time k+p, specify an array with ny columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the output variable tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

example: [0.1 0.1]

slack variable tuning weight, specified as a positive scalar.

example: 1e4
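
as a sketch with illustrative values, the following sets uniform weights and a time-varying output weight that grows over the first prediction steps (each row maps to one prediction step; the last row is reused for the remaining steps):

nlobj = nlmpc(4,2,2);                              % example: 2 ovs, 2 mvs
nlobj.PredictionHorizon = 10;
nlobj.Weights.ManipulatedVariables     = [0 0.1];  % penalize deviation of mv 2 from its target
nlobj.Weights.ManipulatedVariablesRate = [0.1 0.1];
nlobj.Weights.OutputVariables = [1 1; 2 2; 5 5];   % weights increase over the first 3 steps;
                                                   % the last row [5 5] is used for steps 4..10
nlobj.Weights.ECR = 1e5;                           % slack variable penalty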

custom optimization functions and solver, specified as a structure with the following fields.

custom cost function, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    optimization.customcostfcn = "mycostfunction";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    optimization.customcostfcn = @mycostfunction;

    for more information on local functions, see .

  • anonymous function

    optimization.customcostfcn = @(x,u,e,data,params) mycostfunction(x,u,e,data,params);

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your cost function must have the signature:

function j = mycostfunction(x,u,e,data,params)

for more information, see .

example: @costfcn
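
for instance, a cost that replaces the standard cost and penalizes total actuation effort (similar in spirit to the flying-robot example later on this page) could look like the following sketch. it assumes, as in that example, that only the first p rows of u are applied, and that the params argument list is present only when model.numberofparameters is nonzero.

function J = myCostFunction(X,U,e,data,params)
% sketch: penalize total actuation over the horizon (replaces the standard cost)
% X and U contain the state and input trajectories over the prediction horizon;
% only the first p rows of U are applied, so the last row is excluded here
p = size(U,1) - 1;
J = sum(sum(abs(U(1:p,:))));
end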

option to replace the standard cost function with the custom cost function, specified as one of the following:

  • true — the controller uses the custom cost alone as the objective function during optimization. in this case, the weights property of the controller is ignored.

  • false — the controller uses the sum of the standard cost and custom cost as the objective function during optimization.

if you do not specify a custom cost function using customcostfcn, then the controller ignores replacestandardcost.

for more information, see .

example: true

custom equality constraint function, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    optimization.customeqconfcn = "myeqconfunction";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    optimization.customeqconfcn = @myeqconfunction;

    for more information on local functions, see .

  • anonymous function

    optimization.customeqconfcn = @(x,u,data,params) myeqconfunction(x,u,data,params);

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your equality constraint function must have the signature:

function ceq = myeqconfunction(x,u,data,params)

for more information, see .

example: @eqfcn
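
for instance, a terminal equality constraint that drives all states to zero at the end of the horizon, as in the flying-robot example later on this page, could look like this sketch:

function ceq = myEqConFunction(X,U,data,params)
% sketch: terminal constraint X(end,:) == 0, returned as a column vector
% params stands for the optional parameter list (present only if model.numberofparameters > 0)
ceq = X(end,:)';
end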

custom inequality constraint function, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    optimization.customineqconfcn = "myineqconfunction";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    optimization.customineqconfcn = @myineqconfunction;

    for more information on local functions, see .

  • anonymous function

    optimization.customineqconfcn = @(x,u,e,data,params) myineqconfunction(x,u,e,data,params);

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your inequality constraint function must have the signature:

function cineq = myineqconfunction(x,u,e,data,params)

for more information, see .

example: @ineqfcn
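
as a sketch, the following inequality constraint keeps the first two predicted states inside a circle of radius r at every step of the horizon; r is an illustrative constant, and the constraint is satisfied when each element of cineq is nonpositive:

function cineq = myIneqConFunction(X,U,e,data,params)
% sketch: keep states 1 and 2 inside a circle of radius r over the horizon
% e is the slack variable (unused here); params is the optional parameter list
r = 5;                                             % illustrative radius
cineq = X(2:end,1).^2 + X(2:end,2).^2 - r^2;       % one constraint per prediction step
end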

custom nonlinear programming solver function, specified as a string, character vector, or function handle. if you do not have optimization toolbox™ software, you must specify your own custom nonlinear programming solver. you can specify your custom solver function in one of the following ways:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    optimization.customsolverfcn = "mynlpsolver";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    optimization.customsolverfcn = @mynlpsolver;

for more information, see configure optimization solver for nonlinear mpc.

example: @mysolver

solver options, specified as an options object for fmincon or [].

if you have optimization toolbox software, solveroptions contains an options object for the fmincon solver.

if you do not have optimization toolbox, solveroptions is [].

for more information, see configure optimization solver for nonlinear mpc.

option to simulate as a linear controller, specified as one of the following:

  • "off" — simulate the controller as a nonlinear controller with a nonlinear prediction model.

  • "adaptive" — for each control interval, a linear model is obtained from the specified nonlinear state and output functions at the current operating point and used across the prediction horizon. to determine if an adaptive mpc controller provides comparable performance to the nonlinear controller, use this option. for more information on adaptive mpc, see .

  • "timevarying" — for each control interval, p linear models are obtained from the specified nonlinear state and output functions at the p operating points predicted from the previous interval, one for each prediction horizon step. to determine if a linear time-varying mpc controller provides comparable performance to the nonlinear controller, use this option. for more information on time-varying mpc, see .

to use either the "adaptive" or "timevarying" option, your controller must have no custom constraints and no custom cost function.

for an example that simulates a nonlinear mpc controller as a linear controller, see .

example: "adaptive"

option to accept a suboptimal solution, specified as a logical value. when the nonlinear programming solver reaches the maximum number of iterations without finding a solution (the exit flag is 0), the controller:

  • freezes the mv values if usesuboptimalsolution is false

  • applies the suboptimal solution found by the solver after the final iteration if usesuboptimalsolution is true

to specify the maximum number of iterations, use optimization.solveroptions.maxiterations.

example: true

linear interpolation order used by block moves, specified as one of the following:

  • 0 — use piecewise constant manipulated variable intervals.

  • 1 — use piecewise linear manipulated variable intervals.

if the control horizon is a scalar, then the controller ignores mvinterpolationorder.

for more information on manipulated variable blocking, see .

example: 1

jacobians of model functions, and custom cost and constraint functions, specified as a structure. as a best practice, use jacobians whenever they are available, since they improve optimization efficiency. if you do not specify a jacobian for a given function, the nonlinear programming solver must numerically compute the jacobian.

the jacobian structure contains the following fields.

jacobian of the state function from model.statefcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    jacobian.statefcn = "mystatejacobian";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    jacobian.statefcn = @mystatejacobian;

    for more information on local functions, see .

  • anonymous function

    jacobian.statefcn = @(x,u,params) mystatejacobian(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

for more information, see .

example: @afcn
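
as an illustration, for a hypothetical continuous-time model with two states and one manipulated variable, xdot = [x(2); -sin(x(1)) + u], a state jacobian sketch could look like the following; it assumes the function returns the jacobian with respect to the states followed by the jacobian with respect to the manipulated variables, as in the flying-robot example on this page:

function [A, Bmv] = myStateJacobian(x, u)
% sketch for xdot = [x(2); -sin(x(1)) + u]
% A   = df/dx (nx-by-nx), Bmv = df/du for the manipulated variables (nx-by-nmv)
A   = [ 0          1;
       -cos(x(1))  0];
Bmv = [0; 1];
end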

jacobian of the output function from model.outputfcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    jacobian.outputfcn = "myoutputjacobian";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    jacobian.outputfcn = @myoutputjacobian;

    for more information on local functions, see .

  • anonymous function

    jacobian.outputfcn = @(x,u,params) myoutputjacobian(x,u,params)

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

for more information, see .

example: @cfcn

jacobian of custom cost function j from optimization.customcostfcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    jacobian.customcostfcn = "mycostjacobian";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    jacobian.customcostfcn = @mycostjacobian;

    for more information on local functions, see .

  • anonymous function

    jacobian.customcostfcn = @(x,u,e,data,params) mycostjacobian(x,u,e,data,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your cost jacobian function must have the signature:

function [g,gmv,ge] = mycostjacobian(x,u,e,data,params)

for more information, see .

example: @costjacfcn

jacobian of custom equality constraints ceq from optimization.customeqconfcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    jacobian.customeqconfcn = "myeqconjacobian";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    jacobian.customeqconfcn = @myeqconjacobian;

    for more information on local functions, see .

  • anonymous function

    jacobian.customeqconfcn = @(x,u,data,params) myeqconjacobian(x,u,data,params);

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your equality constraint jacobian function must have the signature:

function [g,gmv] = myeqconjacobian(x,u,data,params)

for more information, see .

example: @eqjacfcn

jacobian of custom inequality constraints c from optimization.customineqconfcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    jacobian.customeqconfcn = "myineqconjacobian";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    jacobian.customineqconfcn = @myineqconjacobian;

    for more information on local functions, see .

  • anonymous function

    jacobian.customineqconfcn = @(x,u,e,data,params) myineqconjacobian(x,u,e,data,params);

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

your inequality constraint jacobian function must have the signature:

function [g,gmv,ge] = myineqconjacobian(x,u,e,data,params)

for more information, see .

example: @ineqjacfcn

passivity constraints, specified as a structure with the following fields.

when your nonlinear mpc controller is configured to use passivity constraints, at each step the optimization algorithm tries to enforce the inequality constraint:

yp(x,u)ᵀ up(x,u) + νy yp(x,u)ᵀ yp(x,u) + νu up(x,u)ᵀ up(x,u) ≤ 0

here, νy is the output passivity index, νu is the input passivity index, up(x,u) is the passivity input function, and yp(x,u) is the passivity output function. the variables x and u are the current state and input of the prediction model.

assuming that the plant is already passive with respect to the input-output pair up and yp, if this inequality is verified, then (under mild conditions) the resulting closed-loop system tends to dissipate energy over time, and therefore has a stable equilibrium. for more information on passivity see and, in the context of linear systems, . for examples, see and .
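
a minimal configuration sketch follows; mypassivityoutputfcn and mypassivityinputfcn are hypothetical functions returning yp(x,u) and up(x,u) for your plant, and the passivity indices are illustrative:

nlobj = nlmpc(4,2,2);                                % example controller
nlobj.Passivity.EnforceConstraint = true;
nlobj.Passivity.OutputPassivityIndex = 0.1;          % nu_y (illustrative)
nlobj.Passivity.InputPassivityIndex  = 0;            % nu_u (illustrative)
nlobj.Passivity.OutputFcn = "myPassivityOutputFcn";  % hypothetical yp(x,u)
nlobj.Passivity.InputFcn  = "myPassivityInputFcn";   % hypothetical up(x,u)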

option to enforce constraints, specified as one of the following:

  • true — passivity constraints are enforced during optimization. in this case, you must specify the outputfcn and inputfcn properties.

  • false — passivity constraints are not enforced during optimization.

example: true

desired output passivity index for the controller, specified as a nonnegative scalar.

if passivity.enforceconstraint is true, at each step the optimization algorithm tries to enforce the passivity inequality constraint, which involves the passivity index νy specified in passivity.outputpassivityindex.

example: 1

desired input passivity index for the controller, specified as a nonnegative scalar.

if passivity.enforceconstraint is true, at each step the optimization algorithm tries to enforce the passivity inequality constraint, which involves the passivity index νu specified in passivity.inputpassivityindex.

example: 1

passivity output function, specified as a string, character vector, or function handle.

if passivity.enforceconstraint is true then at each step the optimization algorithm tries to enforce the input and output inequality constraints, which involve the function yp(x,u) specified in passivity.outputfcn.

you can specify your passivity output function as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    passivity.outputfcn = "mypassivityoutputfcn";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    passivity.outputfcn = @mypassivityoutputfcn;

    for more information on local functions, see .

  • anonymous function

    passivity.outputfcn = @(x,u,params) mypassivityoutputfcn(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

here, x and u are the prediction model states and inputs, respectively, and params is an optional comma separated list of parameters (for example p1,p2,p3) that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block. in matlab, you pass the parameters to a simulation function (such as , using an option set object).

example: @ypfcn

passivity input function, specified as a string, character vector, or function handle. if passivity.enforceconstraint is true then at each step the optimization algorithm tries to enforce the input and output inequality constraints, which involve the function up(x,u) specified in passivity.inputfcn.

you can specify your passivity input function as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    passivity.inputfcn = "mypassivityinputfcn";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    passivity.inputfcn = @mypassivityinputfcn;

    for more information on local functions, see .

  • anonymous function

    passivity.inputfcn = @(x,u,params) mypassivityinputfcn(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

here, x and u are the prediction model states and inputs, respectively, and params is an optional comma separated list of parameters (for example p1,p2,p3) that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block. in matlab, you pass the parameters to a simulation function (such as , using an option set object).

example: @upfcn

jacobian of the passivity output function passivity.outputfcn, specified as one of the following:

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    passivity.outputjacobianfcn = "mypsvoutjacfcn";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    passivity.outputjacobianfcn = @mypsvoutjacfcn;

    for more information on local functions, see .

  • anonymous function

    passivity.outputjacobianfcn = @(x,u,params) mypsvoutjacfcn(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list (for example p1,p2,p3) of parameters that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block. in matlab, you pass the parameters to a simulation function (such as , using an option set object).

the function specified in passivity.outputjacobianfcn (if any) must return as a first output argument the jacobian matrix of the output passivity function with respect to the current state (an nyp by nx matrix), and as a second output argument the jacobian matrix of the output passivity function with respect to the manipulated variables (an nyp by nmv matrix).

here, nx is the number of state variables of the prediction model, nmv is the number of manipulated variables and nyp is the number of outputs of the passivity output function.

example: @ypjac

jacobian of the passivity input function passivity.inputfcn, specified as one of the following

  • name of a function in the current working folder or on the matlab path, specified as a string or character vector

    passivity.inputjacobianfcn = "mypsvinjacfcn";
  • handle to a local function, or a function defined in the current working folder or on the matlab path

    passivity.inputjacobianfcn = @mypsvinjacfcn;

    for more information on local functions, see .

  • anonymous function

    passivity.inputjacobianfcn = @(x,u,params) mypsvinjacfcn(x,u,params)

    for more information on anonymous functions, see .

note

only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.

here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list (for example p1,p2,p3) of parameters that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block. in matlab, you pass the parameters to a simulation function (such as , using an option set object).

the function specified in passivity.inputjacobianfcn (if any) must return as a first output argument the jacobian of the input passivity function with respect to the current state (an nup by nx matrix), and as a second output argument the jacobian of the input passivity function with respect to the manipulated variables (an nup by nmv matrix).

here, nx is the number of state variables of the prediction model, nmv is the number of manipulated variables and nup is the number of outputs of the passivity input function.

example: @upfcn

option to use predicted or current state, specified as one of the following:

  • true — x[k+1] is a decision variable in the optimization problem.

  • false — x[k] is a decision variable in the optimization problem.

example: true

object functions

nlmpcmove — compute optimal control action for nonlinear mpc controller
validatefcns — examine prediction model and custom functions of nlmpc or nlmpcmultistage objects for potential problems
converttompc — convert nlmpc object into one or more mpc objects
createparameterbus — create simulink bus object and configure bus creator block for passing model parameters to nonlinear mpc controller block

examples

create a nonlinear mpc controller with four states, two outputs, and one input.

nx = 4;
ny = 2;
nu = 1;
nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.

specify the sample time and horizons of the controller.

ts = 0.1;
nlobj.ts = ts;
nlobj.predictionhorizon = 10;
nlobj.controlhorizon = 5;

specify the state function for the controller, which is in the file pendulumdt0.m. this discrete-time model integrates the continuous time model defined in pendulumct0.m using a multistep forward euler method.

nlobj.model.statefcn = "pendulumdt0";
nlobj.model.iscontinuoustime = false;

the discrete-time state function uses an optional parameter, the sample time ts, to integrate the continuous-time model. therefore, you must specify the number of optional parameters as 1.

nlobj.model.numberofparameters = 1;

specify the output function for the controller. in this case, define the first and third states as outputs. even though this output function does not use the optional sample time parameter, you must specify the parameter as an input argument (ts).

nlobj.model.outputfcn = @(x,u,ts) [x(1); x(3)];

validate the prediction model functions for nominal states x0 and nominal inputs u0. since the prediction model uses a custom parameter, you must pass this parameter to validatefcns.

x0 = [0.1;0.2;-pi/2;0.3];
u0 = 0.4;
validatefcns(nlobj, x0, u0, [], {ts});
model.statefcn is ok.
model.outputfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.

create a nonlinear mpc controller with three states, one output, and four inputs. the first two inputs are measured disturbances, the third input is the manipulated variable, and the fourth input is an unmeasured disturbance.

nlobj = nlmpc(3,1,'mv',3,'md',[1 2],'ud',4);

to view the controller state, output, and input dimensions and indices, use the dimensions property of the controller.

nlobj.dimensions
ans = struct with fields:
     numberofstates: 3
    numberofoutputs: 1
     numberofinputs: 4
            mvindex: 3
            mdindex: [1 2]
            udindex: 4

specify the controller sample time and horizons.

nlobj.ts = 0.5;
nlobj.predictionhorizon = 6;
nlobj.controlhorizon = 3;

specify the prediction model state function, which is in the file exocstrstatefcnct.m.

nlobj.model.statefcn = 'exocstrstatefcnct';

specify the prediction model output function, which is in the file exocstroutputfcn.m.

nlobj.model.outputfcn = 'exocstroutputfcn';

validate the prediction model functions using the initial operating point as the nominal condition for testing and setting the unmeasured disturbance state, x0(3), to 0. since the model has measured disturbances, you must pass them to validatefcns.

x0 = [311.2639; 8.5698; 0];
u0 = [10; 298.15; 298.15];
validatefcns(nlobj,x0,u0(3),u0(1:2)');
model.statefcn is ok.
model.outputfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.

create nonlinear mpc controller with six states, six outputs, and four inputs.

nx = 6;
ny = 6;
nu = 4;
nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.

specify the controller sample time and horizons.

ts = 0.4;
p = 30;
c = 4;
nlobj.ts = ts;
nlobj.predictionhorizon = p;
nlobj.controlhorizon = c;

specify the prediction model state function and the jacobian of the state function. for this example, use a model of a flying robot.

nlobj.model.statefcn = "flyingrobotstatefcn";
nlobj.jacobian.statefcn = "flyingrobotstatejacobianfcn";

specify a custom cost function for the controller that replaces the standard cost function.

nlobj.optimization.customcostfcn = @(x,u,e,data) ts*sum(sum(u(1:p,:)));
nlobj.optimization.replacestandardcost = true;

specify a custom constraint function for the controller.

nlobj.optimization.customeqconfcn = @(x,u,data) x(end,:)';

validate the prediction model and custom functions at the initial states (x0) and initial inputs (u0) of the robot.

x0 = [-10;-10;pi/2;0;0;0];
u0 = zeros(nu,1); 
validatefcns(nlobj,x0,u0);
model.statefcn is ok.
jacobian.statefcn is ok.
no output function specified. assuming "y = x" in the prediction model.
optimization.customcostfcn is ok.
optimization.customeqconfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.

create a nonlinear mpc controller with four states, one output variable, one manipulated variable, and one measured disturbance.

nlobj = nlmpc(4,1,'mv',1,'md',2);

specify the controller sample time and horizons.

nlobj.predictionhorizon = 10;
nlobj.controlhorizon = 3;

specify the state function of the prediction model.

nlobj.model.statefcn = 'oxidationstatefcn';

specify the prediction model output function and the output variable scale factor.

nlobj.model.outputfcn = @(x,u) x(3);
nlobj.outputvariables.scalefactor = 0.03;

specify the manipulated variable constraints and scale factor.

nlobj.manipulatedvariables.min = 0.0704;
nlobj.manipulatedvariables.max = 0.7042;
nlobj.manipulatedvariables.scalefactor = 0.6;

specify the measured disturbance scale factor.

nlobj.measureddisturbances.scalefactor = 0.5;

compute the state and input operating conditions for three linear mpc controllers using the fsolve function.

options = optimoptions('fsolve','display','none');
ulow = [0.38 0.5];
xlow = fsolve(@(x) oxidationstatefcn(x,ulow),[1 0.3 0.03 1],options);
umedium = [0.24 0.5];
xmedium = fsolve(@(x) oxidationstatefcn(x,umedium),[1 0.3 0.03 1],options);
uhigh = [0.15 0.5];
xhigh = fsolve(@(x) oxidationstatefcn(x,uhigh),[1 0.3 0.03 1],options);

create linear mpc controllers for each of these nominal conditions.

mpcobjlow = converttompc(nlobj,xlow,ulow);
mpcobjmedium = converttompc(nlobj,xmedium,umedium);
mpcobjhigh = converttompc(nlobj,xhigh,uhigh);

you can also create multiple controllers using arrays of nominal conditions. the number of rows in the arrays specifies the number of controllers to create. the linear controllers are returned as a cell array of mpc objects.

u = [ulow; umedium; uhigh];
x = [xlow; xmedium; xhigh];
mpcobjs = converttompc(nlobj,x,u);

view the properties of the mpcobjlow controller.

mpcobjlow
 
mpc object (created on 19-aug-2023 23:31:32):
---------------------------------------------
sampling time:      1 (seconds)
prediction horizon: 10
control horizon:    3
plant model:        
                                      --------------
      1  manipulated variable(s)   -->|  4 states  |
                                      |            |-->  1 measured output(s)
      1  measured disturbance(s)   -->|  2 inputs  |
                                      |            |-->  0 unmeasured output(s)
      0  unmeasured disturbance(s) -->|  1 outputs |
                                      --------------
indices:
  (input vector)    manipulated variables: [1 ]
                    measured disturbances: [2 ]
  (output vector)        measured outputs: [1 ]
disturbance and noise models:
        output disturbance model: default (type "getoutdist(mpcobjlow)" for details)
         measurement noise model: default (unity gain after scaling)
weights:
        manipulatedvariables: 0
    manipulatedvariablesrate: 0.1000
             outputvariables: 1
                         ecr: 100000
state estimation:  default kalman filter (type "getestimator(mpcobjlow)" for details)
constraints:
 0.0704 <= u1 <= 0.7042, u1/rate is unconstrained, y1 is unconstrained
use built-in "active-set" qp solver with maxiterations of 120.

create a nonlinear mpc controller with six states, six outputs, and four inputs.

nx = 6;
ny = 6;
nu = 4;
nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.

specify the controller sample time and horizons.

ts = 0.4;
p = 30;
c = 4;
nlobj.ts = ts;
nlobj.predictionhorizon = p;
nlobj.controlhorizon = c;

specify the prediction model state function and the jacobian of the state function. for this example, use a model of a flying robot.

nlobj.model.statefcn = "flyingrobotstatefcn";
nlobj.jacobian.statefcn = "flyingrobotstatejacobianfcn";

specify a custom cost function for the controller that replaces the standard cost function.

nlobj.optimization.customcostfcn = @(x,u,e,data) ts*sum(sum(u(1:p,:)));
nlobj.optimization.replacestandardcost = true;

specify a custom constraint function for the controller.

nlobj.optimization.customeqconfcn = @(x,u,data) x(end,:)';

specify linear constraints on the manipulated variables.

for ct = 1:nu
    nlobj.mv(ct).min = 0;
    nlobj.mv(ct).max = 1;
end

validate the prediction model and custom functions at the initial states (x0) and initial inputs (u0) of the robot.

x0 = [-10;-10;pi/2;0;0;0];
u0 = zeros(nu,1); 
validatefcns(nlobj,x0,u0);
model.statefcn is ok.
jacobian.statefcn is ok.
no output function specified. assuming "y = x" in the prediction model.
optimization.customcostfcn is ok.
optimization.customeqconfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.

compute the optimal state and manipulated variable trajectories, which are returned in the info output argument.

[~,~,info] = nlmpcmove(nlobj,x0,u0);
slack variable unused or zero-weighted in your custom cost function.
all constraints will be hard.

plot the optimal trajectories.

flyingrobotplotplanning(info,ts)
optimal fuel consumption =   1.884953

figure: optimal state trajectories for x, y, theta, vx, vy, and omega.

figure: optimal thrust inputs u(1), u(2), u(3), and u(4).

figure: optimal trajectory in the x-y plane.

create a nonlinear mpc controller with four states, two outputs, and one input.

nlobj = nlmpc(4,2,1);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.

specify the sample time and horizons of the controller.

ts = 0.1;
nlobj.ts = ts;
nlobj.predictionhorizon = 10;
nlobj.controlhorizon = 5;

specify the state function for the controller, which is in the file pendulumdt0.m. this discrete-time model integrates the continuous time model defined in pendulumct0.m using a multistep forward euler method.

nlobj.model.statefcn = "pendulumdt0";
nlobj.model.iscontinuoustime = false;

the prediction model uses an optional parameter, ts, to represent the sample time. specify the number of parameters.

nlobj.model.numberofparameters = 1;

specify the output function of the model, passing the sample time parameter as an input argument.

nlobj.model.outputfcn = @(x,u,ts) [x(1); x(3)];

define standard constraints for the controller.

nlobj.weights.outputvariables = [3 3];
nlobj.weights.manipulatedvariablesrate = 0.1;
nlobj.ov(1).min = -10;
nlobj.ov(1).max = 10;
nlobj.mv.min = -100;
nlobj.mv.max = 100;

validate the prediction model functions.

x0 = [0.1;0.2;-pi/2;0.3];
u0 = 0.4;
validatefcns(nlobj, x0, u0, [], {ts});
model.statefcn is ok.
model.outputfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.

only two of the plant states are measurable. therefore, create an extended kalman filter for estimating the four plant states. its state transition function is defined in pendulumstatefcn.m and its measurement function is defined in pendulummeasurementfcn.m.

ekf = extendedkalmanfilter(@pendulumstatefcn,@pendulummeasurementfcn);

define initial conditions for the simulation, initialize the extended kalman filter state, and specify a zero initial manipulated variable value.

x = [0;0;-pi;0];
y = [x(1);x(3)];
ekf.state = x;
mv = 0;

specify the output reference value.

yref = [0 0];

create an nlmpcmoveopt object, and specify the sample time parameter.

nloptions = nlmpcmoveopt;
nloptions.parameters = {ts};

run the simulation for 10 seconds. during each control interval:

  1. correct the previous prediction using the current measurement.

  2. compute optimal control moves using nlmpcmove. this function returns the computed optimal sequences in nloptions. passing the updated options object to nlmpcmove in the next control interval provides initial guesses for the optimal sequences.

  3. predict the model states.

  4. apply the first computed optimal control move to the plant, updating the plant states.

  5. generate sensor data with white noise.

  6. save the plant states.

duration = 10;
xhistory = x;
for ct = 1:(duration/ts)
    % correct previous prediction
    xk = correct(ekf,y);
    % compute optimal control moves
    [mv,nloptions] = nlmpcmove(nlobj,xk,mv,yref,[],nloptions);
    % predict prediction model states for the next iteration
    predict(ekf,[mv; ts]);
    % implement first optimal control move
    x = pendulumdt0(x,mv,ts);
    % generate sensor data
    y = x([1 3]) + randn(2,1)*0.01;
    % save plant states
    xhistory = [xhistory x];
end

plot the resulting state trajectories.

figure
subplot(2,2,1)
plot(0:ts:duration,xhistory(1,:))
xlabel('time')
ylabel('z')
title('cart position')
subplot(2,2,2)
plot(0:ts:duration,xhistory(2,:))
xlabel('time')
ylabel('zdot')
title('cart velocity')
subplot(2,2,3)
plot(0:ts:duration,xhistory(3,:))
xlabel('time')
ylabel('theta')
title('pendulum angle')
subplot(2,2,4)
plot(0:ts:duration,xhistory(4,:))
xlabel('time')
ylabel('thetadot')
title('pendulum velocity')

figure: cart position, cart velocity, pendulum angle, and pendulum velocity versus time.

version history

introduced in r2018b
