nonlinear model predictive controller
description
a nonlinear model predictive controller computes optimal control moves across the prediction horizon using a nonlinear prediction model, a nonlinear cost function, and nonlinear constraints. for more information on nonlinear mpc, see .
creation
syntax
nlobj = nlmpc(nx,ny,nu)
nlobj = nlmpc(nx,ny,'mv',mvindex,'md',mdindex,'ud',udindex)
description
nlobj = nlmpc(nx,ny,nu) creates a nonlinear mpc controller object in which all nu prediction model inputs are manipulated variables.
nlobj = nlmpc(nx,ny,'mv',mvindex,'md',mdindex,'ud',udindex) creates a nonlinear mpc controller whose prediction model inputs are divided into manipulated variables, measured disturbances, and unmeasured disturbances, as specified by the index vectors mvindex, mdindex, and udindex.
input arguments
nx
— number of prediction model states
positive integer
number of prediction model states, specified as a positive integer. this value is
stored in the dimensions.numberofstates
controller read-only
property. you cannot change the number of states after creating the controller
object.
example: 6
ny
— number of prediction model outputs
positive integer
number of prediction model outputs, specified as a positive integer. this value is
stored in the dimensions.numberofoutputs
controller read-only
property. you cannot change the number of outputs after creating the controller
object.
example: 2
nu
— number of prediction model inputs
positive integer
number of prediction model inputs, which are all set to be manipulated variables,
specified as a positive integer. this value is stored in the
dimensions.numberofinputs
controller read-only property. you
cannot change the number of manipulated variables after creating the controller
object.
example: 4
mvindex
— manipulated variable indices
vector of positive integers
manipulated variable indices, specified as a vector of positive integers. this
value is stored in the dimensions.mvindex
controller read-only
property. you cannot change these indices after creating the controller object.
the combined set of indices from mvindex
,
mdindex
, and udindex
must contain all
integers from 1
through
nu, where
nu is the number of prediction model
inputs.
example: [1 3]
mdindex
— measured disturbance indices
vector of positive integers
measured disturbance indices, specified as a vector of positive integers. this
value is stored in the dimensions.mdindex
controller read-only
property. you cannot change these indices after creating the controller object.
the combined set of indices from mvindex
,
mdindex
, and udindex
must contain all
integers from 1
through
nu, where
nu is the number of prediction model
inputs.
example: 2
udindex
— unmeasured disturbance indices
vector of positive integers
unmeasured disturbance indices, specified as a vector of positive integers. this
value is stored in the dimensions.udindex
controller read-only
property. you cannot change these indices after creating the controller object.
the combined set of indices from mvindex
,
mdindex
, and udindex
must contain all
integers from 1
through
nu, where
nu is the number of prediction model
inputs.
example: 4
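as a sketch of how these creation arguments fit together, consider a hypothetical plant with three states, one output, and four inputs, where the third input is the manipulated variable; the index values below are illustrative only.
% all inputs are manipulated variables
nlobj1 = nlmpc(3,1,4);
% inputs split into one mv, two mds, and one ud; the indices jointly cover inputs 1 through 4
nlobj2 = nlmpc(3,1,'MV',3,'MD',[1 2],'UD',4);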
properties
ts
— prediction model sample time
1
(default) | positive finite scalar
prediction model sample time, specified as a positive finite scalar. the controller
uses a discrete-time model with a sample time of ts
for prediction.
if you specify a continuous-time prediction model
(model.iscontinuoustime
is true
), then the
controller discretizes the model using the built-in implicit trapezoidal rule with a
sample time of ts
.
example: 0.1
predictionhorizon
— prediction horizon
10
(default) | positive integer
prediction horizon steps, specified as a positive integer. the product of
predictionhorizon
and ts
is the prediction
time, that is, how far the controller looks into the future.
example: 15
controlhorizon
— control horizon
2
(default) | positive integer | vector of positive integers
control horizon, specified as one of the following:
positive integer, m, between 1 and p, inclusive, where p is equal to predictionhorizon. in this case, the controller computes m free control moves occurring at times k through k+m–1, and holds the controller output constant for the remaining prediction horizon steps from k+m through k+p–1. here, k is the current control interval.
vector of positive integers [m1, m2, …], specifying the lengths of blocking intervals. by default, the controller computes m blocks of free moves, where m is the number of blocking intervals. the first free move applies to times k through k+m1–1, the second free move applies from time k+m1 through k+m1+m2–1, and so on. using block moves can improve the robustness of your controller. the sum of the values in controlhorizon must match the prediction horizon p. if you specify a vector whose sum is:
less than the prediction horizon, then the controller adds a blocking interval. the length of this interval is such that the sum of the interval lengths is p. for example, if p = 10 and you specify a control horizon of controlhorizon = [1 2 3], then the controller uses four intervals with lengths [1 2 3 4].
greater than the prediction horizon, then the intervals are truncated until the sum of the interval lengths is equal to p. for example, if p = 10 and you specify a control horizon of controlhorizon = [1 2 3 6 7], then the controller uses four intervals with lengths [1 2 3 4].
piecewise constant blocking moves are often too restrictive for optimal path
planning applications. to produce a less-restrictive, better-conditioned nonlinear
programming problem, you can specify piecewise linear manipulated variable blocking
intervals. to do so, set the optimization.mvinterpolationorder
property of your nlmpc
controller object to
1
.
for more information on how manipulated variable blocking works with different interpolation methods, see .
example: 3
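for instance, assuming a controller nlobj whose prediction horizon is 10, you can specify either a scalar control horizon or a vector of blocking intervals; the values here are illustrative.
nlobj.ControlHorizon = 4;        % 4 free moves; the fourth move is held for the remaining 6 steps
nlobj.ControlHorizon = [2 3 5];  % three blocking intervals whose lengths sum to the prediction horizon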
dimensions
— prediction model dimensional information
structure
this property is read-only.
prediction model dimensional information, specified when you create the controller and stored as a structure with the following fields.
numberofstates
— number of states
positive integer
number of states in the prediction model, specified as a positive integer.
this value corresponds to nx
.
example: 6
numberofoutputs
— number of outputs
positive integer
number of outputs in the prediction model, specified as a positive integer.
this value corresponds to ny
.
example: 1
numberofinputs
— number of inputs
positive integer
number of inputs in the prediction model, specified as a positive integer.
this value corresponds to either nu
or the sum of the lengths
of mvindex
, mdindex
, and
udindex
.
example: 3
mvindex
— manipulated variable indices
vector of positive integers
manipulated variable indices for the prediction model, specified as a vector
of positive integers. this value corresponds to
mvindex
.
example: [1 2]
mdindex
— measured disturbance indices
vector of positive integers
measured disturbance indices for the prediction model, specified as a vector
of positive integers. this value corresponds to
mdindex
.
example: 4
udindex
— unmeasured disturbance indices
vector of positive integers
unmeasured disturbance indices for the prediction model, specified as a vector
of positive integers. this value corresponds to
udindex
.
example: 3
model
— prediction model
structure
prediction model, specified as a structure with the following fields.
statefcn
— state function
string | character vector | function handle
state function, specified as a string, character vector, or function handle.
for a continuous-time prediction model, statefcn
is the state
derivative function. for a discrete-time prediction model,
statefcn
is the state update function.
if your state function is continuous-time, the controller automatically
discretizes the model using the implicit trapezoidal rule. this method can handle
moderately stiff models, and its prediction accuracy depends on the controller
sample time ts
; that is, a large sample time leads to
inaccurate prediction.
if the default discretization method does not provide satisfactory prediction for your application, you can specify your own discrete-time prediction model that uses a different method, such as the multistep forward euler rule.
you can specify your state function in one of the following ways:
name of a function in the current working folder or on the matlab® path, specified as a string or character vector
model.statefcn = "mystatefunction";
handle to a local function, or a function defined in the current working folder or on the matlab path
model.statefcn = @mystatefunction;
for more information on local functions, see .
anonymous function
model.statefcn = @(x,u,params) mystatefunction(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
for more information, see .
example: "@transfcn"
outputfcn
— output function
[]
(default) | string | character vector | function handle
output function, specified as a string, character vector, or function handle.
if the number of states and outputs of the prediction model are the same, you can
omit outputfcn
, which implies that all states are measurable;
that is, each output corresponds to one state.
note
your output function cannot have direct feedthrough from any manipulated variable to any output at any time.
you can specify your output function in one of the following ways:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
model.outputfcn = "myoutputfunction";
handle to a local function, or a function defined in the current working folder or on the matlab path
model.outputfcn = @myoutputfunction;
for more information on local functions, see .
anonymous function
model.outputfcn = @(x,u,params) myoutputfunction(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
for more information, see .
example: "@outfcn"
iscontinuoustime
— option to indicate prediction model time domain
true
(default) | false
option to indicate prediction model time domain, specified as one of the following:
true — continuous-time prediction model. in this case, the controller automatically discretizes the model during prediction using ts.
false — discrete-time prediction model. in this case, ts is the sample time of the model.
note
iscontinuoustime
must be consistent with the functions
specified in model.statefcn
and
model.outputfcn
.
if iscontinuoustime
is true
,
statefcn
must return the derivative of the state with
respect to time, at the current time. otherwise statefcn
must return the state at the next control interval.
example: true
numberofparameters
— number of optional model parameters
0
(default) | nonnegative integer
number of optional parameters used by the prediction model, custom cost function, custom constraints, and passivity functions, specified as a nonnegative integer. the number of parameters includes all the parameters used by these
functions. for example, if the state function uses only parameter
p1
, the constraint functions use only parameter
p2
, and the cost function uses only parameter
p3
, then numberofparameters
is
3
.
example: 1
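as a sketch, assuming a controller nlobj whose model and custom functions together use two parameters (the values ts and zref below are hypothetical placeholders), you set the parameter count once and then pass the values as a trailing cell array when validating or simulating.
nlobj.Model.NumberOfParameters = 2;
Ts = 0.1; zref = [0; 0];                  % hypothetical parameter values
x0 = zeros(4,1); u0 = 0;                  % nominal state and input used for validation
validateFcns(nlobj,x0,u0,[],{Ts,zref});   % parameters are passed as a trailing cell array
opt = nlmpcmoveopt;
opt.Parameters = {Ts,zref};               % same values passed at simulation time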
states
— state information, bounds, and scale factors
structure array
state information, bounds, and scale factors, specified as a structure array with nx elements, where nx is the number of states. each structure element has the following fields.
min
— state lower bound
-inf
(default) | scalar | vector
state lower bound, specified as a scalar or vector. by default, this lower
bound is -inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
state bounds are always hard constraints.
example: [-20 -18 -15]
max
— state upper bound
inf
(default) | scalar | vector
state upper bound, specified as a scalar or vector. by default, this upper
bound is inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
state bounds are always hard constraints.
example: [20 15]
name
— state name
string | character vector
state name, specified as a string or character vector. the default state name
is "x#"
, where #
is its state index.
example: "speed"
units
— state units
""
(default) | string | character vector
state units, specified as a string or character vector.
example: "m/s"
scalefactor
— state scale factor
1
(default) | positive finite scalar
state scale factor, specified as a positive finite scalar. in general, use the operating range of the state. specifying the proper scale factor can improve numerical conditioning for optimization.
example: 10
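putting these fields together, a typical sketch for a hypothetical controller nlobj whose second state is an angle limited to plus or minus pi radians:
nlobj.States(2).Min = -pi;
nlobj.States(2).Max = pi;
nlobj.States(2).Name = "theta";
nlobj.States(2).Units = "rad";
nlobj.States(2).ScaleFactor = 2*pi;   % roughly the operating range of the state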
outputvariables
— output variable information, bounds, and scale factors
structure array
output variable (ov) information, bounds, and scale factors, specified as a
structure array with ny elements, where
ny is the number of output variables. to
access this property, you can use the alias ov
instead of
outputvariables
.
each structure element has the following fields.
min
— ov lower bound
-inf
(default) | scalar | vector
ov lower bound, specified as a scalar or vector. by default, this lower bound
is -inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [-10 -8]
max
— ov upper bound
inf
(default) | scalar | vector
ov upper bound, specified as a scalar or vector. by default, this upper bound
is inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [12 10 8]
minecr
— ov lower bound softness
1
(default) | nonnegative finite scalar | vector
ov lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, ov lower bounds are soft constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.
example: [2 1 0.5]
maxecr
— ov upper bound softness
1
(default) | nonnegative finite scalar | vector
ov upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, ov upper bounds are soft constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.
example: [5 2 1]
name
— ov name
string | character vector
ov name, specified as a string or character vector. the default ov name is
"y#"
, where #
is its output index.
example: "attack angle"
units
— ov units
""
(default) | string | character vector
ov units, specified as a string or character vector.
example: "degrees"
scalefactor
— ov scale factor
1
(default) | positive finite scalar
ov scale factor, specified as a positive finite scalar. in general, use the operating range of the output variable. specifying the proper scale factor can improve numerical conditioning for optimization.
example: 90
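for example, to softly bound a hypothetical first output between 0 and 100 using the ov alias (values are illustrative):
nlobj.OV(1).Min = 0;
nlobj.OV(1).Max = 100;
nlobj.OV(1).MinECR = 1;        % keep the default softness on the lower bound
nlobj.OV(1).MaxECR = 5;        % make the upper bound softer than the lower bound
nlobj.OV(1).ScaleFactor = 100; % operating range of the output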
manipulatedvariables
— manipulated variable information, bounds, and scale factors
structure array
manipulated variable (mv) information, bounds, and scale factors, specified as a
structure array with nmv elements, where
nmv is the number of manipulated
variables. to access this property, you can use the alias mv
instead
of manipulatedvariables
.
each structure element has the following fields.
min
— mv lower bound
-inf
(default) | scalar | vector
mv lower bound, specified as a scalar or vector. by default, this lower bound
is -inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [-1.1 -1]
max
— mv upper bound
inf
(default) | scalar | vector
mv upper bound, specified as a scalar or vector. by default, this upper bound
is inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [1.2 1]
minecr
— mv lower bound softness
0
(default) | nonnegative scalar | vector
mv lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative scalar or vector. by default, mv lower bounds are hard constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.
example: [0.1 0]
maxecr
— mv upper bound softness
0
(default) | nonnegative scalar | vector
mv upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative scalar or vector. by default, mv upper bounds are hard constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr value is used for the remaining steps of the prediction horizon.
example: [0.5 0.2]
ratemin
— mv rate of change lower bound
-inf
(default) | nonpositive scalar | vector
mv rate of change lower bound, specified as a nonpositive scalar or vector.
the mv rate of change is defined as mv(k) -
mv(k–1), where k is the
current time. by default, this lower bound is -inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [-50 -20]
ratemax
— mv rate of change upper bound
inf
(default) | nonnegative scalar | vector
mv rate of change upper bound, specified as a nonnegative scalar or vector.
the mv rate of change is defined as mv(k) -
mv(k–1), where k is the
current time. by default, this upper bound is inf
.
to use the same bound across the prediction horizon, specify a scalar value.
to vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.
example: [50 20]
rateminecr
— mv rate of change lower bound softness
0
(default) | nonnegative finite scalar | vector
mv rate of change lower bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, mv rate of change lower bounds are hard constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr values are used for the remaining steps of the prediction horizon.
example: [0.1 0]
ratemaxecr
— mv rate of change upper bound softness
0
(default) | nonnegative finite scalar | vector
mv rate of change upper bound softness, where a larger ecr value indicates a softer constraint, specified as a nonnegative finite scalar or vector. by default, mv rate of change upper bounds are hard constraints.
to use the same ecr value across the prediction horizon, specify a scalar value.
to vary the ecr values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. here, k is the current time and p is the prediction horizon. if you specify fewer than p values, the final ecr values are used for the remaining steps of the prediction horizon.
example: [1 0.5 0.2]
name
— mv name
string | character vector
mv name, specified as a string or character vector. the default mv name is
"u#"
, where #
is its input index.
example: "rudder angle"
units
— mv units
""
(default) | string | character vector
mv units, specified as a string or character vector.
example: "degrees"
scalefactor
— mv scale factor
1
(default) | positive finite scalar
mv scale factor, specified as a positive finite scalar. in general, use the operating range of the manipulated variable. specifying the proper scale factor can improve numerical conditioning for optimization.
example: 60
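a combined sketch for a hypothetical actuator whose command is limited to plus or minus 1 with a slew-rate limit of 0.2 per control interval:
nlobj.MV(1).Min = -1;
nlobj.MV(1).Max = 1;
nlobj.MV(1).RateMin = -0.2;   % hard constraint by default (rateminecr is 0)
nlobj.MV(1).RateMax = 0.2;
nlobj.MV(1).ScaleFactor = 2;  % operating range of the manipulated variable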
measureddisturbances
— measured disturbance information and scale factors
structure array
measured disturbance (md) information and scale factors, specified as a structure
array with nmd elements, where
nmd is the number of measured
disturbances. if your model does not have measured disturbances, then
measureddisturbances
is []
. to access this
property, you can use the alias md
instead of
measureddisturbances
.
each structure element has the following fields.
name
— md name
string | character vector
md name, specified as a string or character vector. the default md name is
"u#"
, where #
is its input index.
example: "wind speed"
units
— md units
""
(default) | string | character vector
md units, specified as a string or character vector.
example: "m/s"
scalefactor
— md scale factor
1
(default) | positive finite scalar
md scale factor, specified as a positive finite scalar. in general, use the operating range of the disturbance. specifying the proper scale factor can improve numerical conditioning for optimization.
example: 10
weights
— standard cost function tuning weights
structure
standard cost function tuning weights, specified as a structure. the controller applies these weights to the scaled variables. therefore, the tuning weights are dimensionless values.
note
if you define a custom cost function using
optimization.customcostfcn
and set
optimization.replacestandardcost
to true
, then
the controller ignores the standard cost function tuning weights in
weights
.
weights
has the following fields.
manipulatedvariables
— manipulated variable tuning weights
row vector | array
manipulated variable tuning weights, which penalize deviations from mv
targets, specified as a row vector or array of nonnegative values. the default
weight for all manipulated variables is 0
.
to use the same weights across the prediction horizon, specify a row vector of length nmv, where nmv is the number of manipulated variables.
to vary the tuning weights over the prediction horizon from time k to time k+p–1, specify an array with nmv columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the manipulated variable tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.
to specify mv targets at run time, in simulink®, pass the target values to the block.
in matlab, pass the target values to a simulation function (such as , using the mvtarget
property of an
object).
example: [0.1 0.2]
manipulatedvariablesrate
— manipulated variable rate tuning weights
row vector | array
manipulated variable rate tuning weights, which penalize large changes in
control moves, specified as a row vector or array of nonnegative values. the
default weight for all manipulated variable rates is
0.1
.
to use the same weights across the prediction horizon, specify a row vector of length nmv, where nmv is the number of manipulated variables.
to vary the tuning weights over the prediction horizon from time k to time k+p–1, specify an array with nmv columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the manipulated variable rate tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.
example: [0.1 0.1]
outputvariables
— output variable tuning weights
vector | array
output variable tuning weights, which penalize deviation from output
references, specified as a row vector or array of nonnegative values. the default
weight for all output variables is 1
.
to use the same weights across the prediction horizon, specify a row vector of length ny, where ny is the number of output variables.
to vary the tuning weights over the prediction horizon from time k+1 to time k+p, specify an array with ny columns and up to p rows. here, k is the current time and p is the prediction horizon. each row contains the output variable tuning weights for one prediction horizon step. if you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.
example: [0.1 0.1]
ecr
— slack variable tuning weight
1e5
(default) | positive scalar
slack variable tuning weight, specified as a positive scalar.
example: 1e4
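as a sketch, for a hypothetical controller with two outputs and one manipulated variable, you could emphasize tracking of the first output and penalize aggressive moves as follows:
nlobj.Weights.OutputVariables = [10 1];        % track the first output more tightly
nlobj.Weights.ManipulatedVariables = 0;        % no mv target-tracking penalty
nlobj.Weights.ManipulatedVariablesRate = 0.5;  % penalize large control moves
nlobj.Weights.ECR = 1e5;                       % keep the default constraint-softening weight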
optimization
— custom optimization functions and solver
structure
custom optimization functions and solver, specified as a structure with the following fields.
customcostfcn
— custom cost function
[]
| string | character vector | function handle
custom cost function, specified as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
optimization.customcostfcn = "mycostfunction";
handle to a local function, or a function defined in the current working folder or on the matlab path
optimization.customcostfcn = @mycostfunction;
for more information on local functions, see .
anonymous function
optimization.customcostfcn = @(x,u,e,data,params) mycostfunction(x,u,e,data,params);
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your cost function must have the signature:
function j = mycostfunction(x,u,e,data,params)
for more information, see .
example: @costfcn
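a minimal custom cost function sketch following the signature above; the terminal quadratic penalty and the function name are hypothetical. here x contains the current state in its first row and the p predicted states in the remaining rows.
function J = mycostfunction(X,U,e,data)
% penalize the distance of the terminal predicted state from the origin,
% plus a small penalty on the slack variable e
% (add a trailing params argument if model.numberofparameters is nonzero)
xterm = X(end,:)';
J = xterm'*xterm + 0.01*e^2;
end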
replacestandardcost
— option to replace the standard cost function
false
(default) | true
option to replace the standard cost function with the custom cost function, specified as one of the following:
true — the controller uses the custom cost alone as the objective function during optimization. in this case, the weights property of the controller is ignored.
false — the controller uses the sum of the standard cost and the custom cost as the objective function during optimization.
if you do not specify a custom cost function using
customcostfcn
, then the controller ignores
replacestandardcost
.
for more information, see .
example: true
customeqconfcn
— custom equality constraint function
[]
(default) | string | character vector | function handle
custom equality constraint function, specified as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
optimization.customeqconfcn = "myeqconfunction";
handle to a local function, or a function defined in the current working folder or on the matlab path
optimization.customeqconfcn = @myeqconfunction;
for more information on local functions, see .
anonymous function
optimization.customeqconfcn = @(x,u,data,params) myeqconfunction(x,u,data,params);
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your equality constraint function must have the signature:
function ceq = myeqconfunction(x,u,data,params)
for more information, see .
example: @eqfcn
customineqconfcn
— custom inequality constraint function
[]
(default) | string | character vector | function handle
custom inequality constraint function, specified as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
optimization.customineqconfcn = "myineqconfunction";
handle to a local function, or a function defined in the current working folder or on the matlab path
optimization.customineqconfcn = @myineqconfunction;
for more information on local functions, see .
anonymous function
optimization.customineqconfcn = @(x,u,e,data,params) myineqconfunction(x,u,e,data,params);
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your inequality constraint function must have the signature:
function cineq = myineqconfunction(x,u,e,data,params)
for more information, see .
example: @ineqfcn
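a minimal inequality constraint sketch following the signature above; the constraint (keeping the first two predicted states inside a circle of radius 5) and the function name are hypothetical. the constraints are considered satisfied when all entries of cineq are nonpositive.
function cineq = myineqconfunction(X,U,e,data)
% X contains the current state in row 1 and the p predicted states in rows 2 to p+1
% (add a trailing params argument if optional parameters are defined)
p = size(X,1) - 1;
cineq = X(2:p+1,1).^2 + X(2:p+1,2).^2 - 25;   % x1^2 + x2^2 <= 25 at every prediction step
end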
customsolverfcn
— custom nonlinear programming solver
[]
(default) | string | character vector | function handle
custom nonlinear programming solver function, specified as a string, character vector, or function handle. if you do not have optimization toolbox™ software, you must specify your own custom nonlinear programming solver. you can specify your custom solver function in one of the following ways:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
optimization.customsolverfcn = "mynlpsolver";
handle to a local function, or a function defined in the current working folder or on the matlab path
optimization.customsolverfcn = @mynlpsolver;
for more information, see configure optimization solver for nonlinear mpc.
example: @mysolver
solveroptions
— solver options
options object for fmincon
| []
solver options, specified as an options object for
fmincon
or []
.
if you have optimization toolbox software, solveroptions
contains an options
object for the fmincon
solver.
if you do not have optimization toolbox, solveroptions
is []
.
for more information, see configure optimization solver for nonlinear mpc.
runaslinearmpc
— option to simulate as a linear controller
"off"
(default) | "adaptive"
| "timevarying"
option to simulate as a linear controller, specified as one of the following:
"off"
— simulate the controller as a nonlinear controller with a nonlinear prediction model."adaptive"
— for each control interval, a linear model is obtained from the specified nonlinear state and output functions at the current operating point and used across the prediction horizon. to determine if an adaptive mpc controller provides comparable performance to the nonlinear controller, use this option. for more information on adaptive mpc, see ."timevarying"
— for each control interval, p linear models are obtained from the specified nonlinear state and output functions at the p operating points predicted from the previous interval, one for each prediction horizon step. to determine if a linear time-varying mpc controller provides comparable performance to the nonlinear controller, use this option. for more information on time-varying mpc, see .
to use either the "adaptive" or "timevarying" option, your controller must have no custom constraints and no custom cost function.
for an example that simulates a nonlinear mpc controller as a linear controller, see .
example: "adaptive"
usesuboptimalsolution
— option to accept a suboptimal solution
false
(default) | true
option to accept a suboptimal solution, specified as a logical value. when the
nonlinear programming solver reaches the maximum number of iterations without
finding a solution (the exit flag is 0
), the controller:
freezes the mv values if usesuboptimalsolution is false.
applies the suboptimal solution found by the solver after the final iteration if usesuboptimalsolution is true.
to specify the maximum number of iterations, use
optimization.solveroptions.maxiter
.
example: true
mvinterpolationorder
— linear interpolation order used for block moves
0
(default) | 1
linear interpolation order used by block moves, specified as one of the following:
0 — use piecewise constant manipulated variable intervals.
1 — use piecewise linear manipulated variable intervals.
if the control horizon is a scalar, then the controller ignores
mvinterpolationorder
.
for more information on manipulated variable blocking, see .
example: 1
jacobian
— jacobians of model functions, and custom cost and constraint functions
structure
jacobians of model functions, and custom cost and constraint functions, specified as a structure. as a best practice, use jacobians whenever they are available, since they improve optimization efficiency. if you do not specify a jacobian for a given function, the nonlinear programming solver must numerically compute the jacobian.
the jacobian
structure contains the following fields.
statefcn
— jacobian of state function
[]
(default) | string | character vector | function handle
jacobian of the state function specified in model.statefcn, specified as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
jacobian.statefcn = "mystatejacobian";
handle to a local function, or a function defined in the current working folder or on the matlab path
jacobian.statefcn = @mystatejacobian;
for more information on local functions, see .
anonymous function
jacobian.statefcn = @(x,u,params) mystatejacobian(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
for more information, see .
example: @afcn
outputfcn
— jacobian of output function
[]
(default) | string | character vector | function handle
jacobian of the output function specified in model.outputfcn, specified as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
jacobian.outputfcn = "myoutputjacobian";
handle to a local function, or a function defined in the current working folder or on the matlab path
jacobian.outputfcn = @myoutputjacobian;
for more information on local functions, see .
anonymous function
jacobian.outputfcn = @(x,u,params) myoutputjacobian(x,u,params)
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
for more information, see .
example: @cfcn
customcostfcn
— jacobian of custom cost function
[]
| string | character vector | function handle
jacobian of custom cost function j
from
optimization.customcostfcn
, specified as one of the
following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
jacobian.customcostfcn = "mycostjacobian";
handle to a local function, or a function defined in the current working folder or on the matlab path
jacobian.customcostfcn = @mycostjacobian;
for more information on local functions, see .
anonymous function
jacobian.customcostfcn = @(x,u,e,data,params) mycostjacobian(x,u,e,data,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your cost jacobian function must have the signature:
function [g,gmv,ge] = mycostjacobian(x,u,e,data,params)
for more information, see .
example: @costjacfcn
customeqconfcn
— jacobian of custom equality constraints
[]
(default) | string | character vector | function handle
jacobian of custom equality constraints ceq
from
optimization.customeqconfcn
, specified as one of the
following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
jacobian.customeqconfcn = "myeqconjacobian";
handle to a local function, or a function defined in the current working folder or on the matlab path
jacobian.customeqconfcn = @myeqconjacobian;
for more information on local functions, see .
anonymous function
jacobian.customeqconfcn = @(x,u,data,params) myeqconjacobian(x,u,data,params);
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your equality constraint jacobian function must have the signature:
function [g,gmv] = myeqconjacobian(x,u,data,params)
for more information, see .
example: @eqjacfcn
customineqconfcn
— jacobian of custom inequality constraints
[]
(default) | string | character vector | function handle
jacobian of custom inequality constraints c
from
optimization.customineqconfcn
, specified as one of the
following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
jacobian.customeqconfcn = "myineqconjacobian";
handle to a local function, or a function defined in the current working folder or on the matlab path
jacobian.customeqconfcn = @myineqconjacobian;
for more information on local functions, see .
anonymous function
jacobian.customeqconfcn = @(x,u,data,params) myineqconjacobian(x,u,data,params);
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
your inequality constraint jacobian function must have the signature:
function [g,gmv,ge] = myineqconjacobian(x,u,e,data,params)
for more information, see .
example: @ineqjacfcn
passivity
— passivity constraints
structure
passivity constraints, specified as a structure with the following fields.
when your nonlinear mpc controller is configured to use passivity constraints, at each step the optimization algorithm tries to enforce two inequality constraints that bound the inner product of yp(x,u) and up(x,u) in terms of the output passivity index νy and the input passivity index νu.
here, νy is the output passivity index, νu is the input passivity index, up(x,u) is the passivity input function, and yp(x,u) is the passivity output function. the variables x and u are the current state and input of the prediction model.
assuming that the plant is already passive with respect to the input-output pair up and yp, if these two inequalities are verified, then (under mild conditions) the resulting closed loop system tends to dissipate energy over time, and therefore has a stable equilibrium. for more information on passivity see and, in the context of linear systems, . for examples, see and .
enforceconstraint
— option to enforce constraints
false
(default) | true
option to enforce constraints, specified as one of the following:
true — passivity constraints are enforced during optimization. in this case, you must specify the outputfcn and inputfcn properties.
false — passivity constraints are not enforced during optimization.
example: true
outputpassivityindex
— desired output passivity index for controller
0.1
(default) | nonnegative scalar
desired output passivity index for the controller, specified as a nonnegative scalar.
if passivity.enforceconstraint
is
true
, at each step the optimization algorithm tries to enforce
the passivity inequality constraint, which involves the passivity index
νy specified in
passivity.outputpassivityindex
.
example: 1
inputpassivityindex
— desired input passivity index for controller
0
(default) | nonnegative scalar
desired input passivity index for the controller, specified as a nonnegative scalar.
if passivity.enforceconstraint
is
true
, at each step the optimization algorithm tries to enforce
the passivity inequality constraint, which involves the passivity index
νu specified in
passivity.inputpassivityindex
.
example: 1
outputfcn
— passivity output function
[]
(default) | string | character vector | function handle
passivity output function, specified as a string, character vector, or function handle.
if passivity.enforceconstraint
is true
then at each step the optimization algorithm tries to enforce the input and output
inequality constraints, which involve the function yp(x,u) specified in passivity.outputfcn
.
you can specify your passivity output function as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
passivity.outputfcn = "mypassivityoutputfcn";
handle to a local function, or a function defined in the current working folder or on the matlab path
passivity.outputfcn = @mypassivityoutputfcn;
for more information on local functions, see .
anonymous function
passivity.outputfcn = @(x,u,params) mypassivityoutputfcn(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
here, x
and u
are the prediction model
states and inputs, respectively, and params
is an optional
comma separated list of parameters (for example p1,p2,p3
) that
might be needed by the function you specify. if any of your functions use optional
parameters, you must specify the number of parameters using
model.numberofparameters
. at run time, in simulink, you then pass these parameters to the block.
in matlab, you pass the parameters to a simulation function (such as , using an option set object).
example: @ypfcn
inputfcn
— passivity input function
string | character vector | function handle
passivity input function, specified as a string, character vector, or function handle. if passivity.enforceconstraint is true, then at each step the optimization algorithm tries to enforce the input and output inequality constraints, which involve the function up(x,u) specified in passivity.inputfcn.
you can specify your passivity input function as one of the following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
passivity.inputfcn = "mypassivityinputfcn";
handle to a local function, or a function defined in the current working folder or on the matlab path
passivity.inputfcn = @mypassivityinputfcn;
for more information on local functions, see .
anonymous function
passivity.inputfcn = @(x,u,params) mypassivityinputfcn(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
here, x
and u
are the prediction model
states and inputs, respectively, and params
is an optional
comma separated list of parameters (for example p1,p2,p3
) that
might be needed by the function you specify. if any of your functions use optional
parameters, you must specify the number of parameters using
model.numberofparameters
. at run time, in simulink, you then pass these parameters to the block.
in matlab, you pass the parameters to a simulation function (such as , using an option set object).
example: @upfcn
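a configuration sketch tying these passivity fields together; the function names and index values are hypothetical, and for code generation yp and up should be defined in their own files.
nlobj.Passivity.EnforceConstraint = true;
nlobj.Passivity.OutputPassivityIndex = 0.1;
nlobj.Passivity.InputPassivityIndex = 0;
nlobj.Passivity.OutputFcn = "mypassivityoutputfcn";  % yp(x,u)
nlobj.Passivity.InputFcn = "mypassivityinputfcn";    % up(x,u)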
outputjacobianfcn
— jacobian of passivity output function
[]
(default) | string | character vector | function handle
jacobian of the passivity output function
passivity.outputfcn
, specified as one of the
following:
name of a function in the current working folder or on the matlab path, specified as a string or character vector
passivity.outputjacobianfcn = "mypsvoutjacfcn";
handle to a local function, or a function defined in the current working folder or on the matlab path
passivity.outputjacobianfcn = @mypsvoutjacfcn;
for more information on local functions, see .
anonymous function
passivity.outputjacobianfcn = @(x,u,params) mypsvoutjacfcn(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list of parameters (for example, p1,p2,p3) that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block.
in matlab, you pass the parameters to a simulation function (such as , using an ).
the function specified in passivity.outputjacobianfcn
(if
any) must return as a first output argument the jacobian matrix of the output
passivity function with respect to the current state (an
nyp by
nx matrix), and as a second output
argument the jacobian matrix of the output passivity function with respect to the
manipulated variables (an
nyp by
nmv matrix).
here, nx is the number of state variables of the prediction model, nmv is the number of manipulated variables and nyp is the number of outputs of the passivity output function.
example: @ypjac
inputjacobianfcn
— jacobian of passivity input function
[]
(default) | string | character vector | function handle
jacobian of the passivity input function
passivity.inputfcn
, specified as one of the
following
name of a function in the current working folder or on the matlab path, specified as a string or character vector
passivity.inputjacobianfcn = "mypsvinjacfcn";
handle to a local function, or a function defined in the current working folder or on the matlab path
passivity.inputjacobianfcn = @mypsvinjacfcn;
for more information on local functions, see .
anonymous function
passivity.inputjacobianfcn = @(x,u,params) mypsvinjacfcn(x,u,params)
for more information on anonymous functions, see .
note
only functions defined in a separate file in the current folder or on the matlab path are supported for c/c++ code generation. therefore, specifying state, output, cost, or constraint functions (or their jacobians) as local or anonymous functions is not recommended.
here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list of parameters (for example, p1,p2,p3) that might be needed by the function you specify. if any of your functions use optional parameters, you must specify the number of parameters using model.numberofparameters. at run time, in simulink, you then pass these parameters to the block.
in matlab, you pass the parameters to a simulation function (such as , using an ).
the function specified in passivity.inputjacobianfcn
(if
any) must return as a first output argument the jacobian of the input passivity
function with respect to the current state (an
nup by
nx matrix), and as a second output
argument the jacobian of the input passivity function with respect to the
manipulated variables (an
nup by
nmv matrix).
here, nx is the number of state variables of the prediction model, nmv is the number of manipulated variables and nup is the number of outputs of the passivity input function.
example: @upfcn
usepredictedstate
— option to use predicted or current state
true
(default) | false
option to use predicted or current state, specified as one of the following:
true — x[k+1] is a decision variable in the optimization problem.
false — x[k] is a decision variable in the optimization problem.
example: true
object functions
nlmpcmove — compute optimal control action for nonlinear mpc controller
validatefcns — examine prediction model and custom functions of nlmpc or nlmpcmultistage objects for potential problems
converttompc — convert nlmpc object into one or more mpc objects
createparameterbus — create simulink bus object and configure bus creator block for passing model parameters to nonlinear mpc controller block
examples
create nonlinear mpc controller with discrete-time prediction model
create a nonlinear mpc controller with four states, two outputs, and one input.
nx = 4; ny = 2; nu = 1; nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.
specify the sample time and horizons of the controller.
ts = 0.1; nlobj.ts = ts; nlobj.predictionhorizon = 10; nlobj.controlhorizon = 5;
specify the state function for the controller, which is in the file pendulumdt0.m
. this discrete-time model integrates the continuous time model defined in pendulumct0.m
using a multistep forward euler method.
nlobj.model.statefcn = "pendulumdt0";
nlobj.model.iscontinuoustime = false;
the discrete-time state function uses an optional parameter, the sample time ts
, to integrate the continuous-time model. therefore, you must specify the number of optional parameters as 1
.
nlobj.model.numberofparameters = 1;
specify the output function for the controller. in this case, define the first and third states as outputs. even though this output function does not use the optional sample time parameter, you must specify the parameter as an input argument (ts
).
nlobj.model.outputfcn = @(x,u,ts) [x(1); x(3)];
validate the prediction model functions for nominal states x0
and nominal inputs u0
. since the prediction model uses a custom parameter, you must pass this parameter to validatefcns
.
x0 = [0.1;0.2;-pi/2;0.3]; u0 = 0.4; validatefcns(nlobj, x0, u0, [], {ts});
model.statefcn is ok. model.outputfcn is ok. analysis of user-provided model, cost, and constraint functions complete.
create nonlinear mpc controller with measured and unmeasured disturbances
create a nonlinear mpc controller with three states, one output, and four inputs. the first two inputs are measured disturbances, the third input is the manipulated variable, and the fourth input is an unmeasured disturbance.
nlobj = nlmpc(3,1,'mv',3,'md',[1 2],'ud',4);
to view the controller state, output, and input dimensions and indices, use the dimensions
property of the controller.
nlobj.dimensions
ans = struct with fields:
numberofstates: 3
numberofoutputs: 1
numberofinputs: 4
mvindex: 3
mdindex: [1 2]
udindex: 4
specify the controller sample time and horizons.
nlobj.ts = 0.5; nlobj.predictionhorizon = 6; nlobj.controlhorizon = 3;
specify the prediction model state function, which is in the file exocstrstatefcnct.m
.
nlobj.model.statefcn = 'exocstrstatefcnct';
specify the prediction model output function, which is in the file exocstroutputfcn.m
.
nlobj.model.outputfcn = 'exocstroutputfcn';
validate the prediction model functions using the initial operating point as the nominal condition for testing and setting the unmeasured disturbance state, x0(3)
, to 0
. since the model has measured disturbances, you must pass them to validatefcns
.
x0 = [311.2639; 8.5698; 0]; u0 = [10; 298.15; 298.15]; validatefcns(nlobj,x0,u0(3),u0(1:2)');
model.statefcn is ok. model.outputfcn is ok. analysis of user-provided model, cost, and constraint functions complete.
validate nonlinear mpc prediction model and custom functions
create nonlinear mpc controller with six states, six outputs, and four inputs.
nx = 6; ny = 6; nu = 4; nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.
specify the controller sample time and horizons.
ts = 0.4; p = 30; c = 4; nlobj.ts = ts; nlobj.predictionhorizon = p; nlobj.controlhorizon = c;
specify the prediction model state function and the jacobian of the state function. for this example, use a model of a flying robot.
nlobj.model.statefcn = "flyingrobotstatefcn"; nlobj.jacobian.statefcn = "flyingrobotstatejacobianfcn";
specify a custom cost function for the controller that replaces the standard cost function.
nlobj.optimization.customcostfcn = @(x,u,e,data) ts*sum(sum(u(1:p,:))); nlobj.optimization.replacestandardcost = true;
specify a custom constraint function for the controller.
nlobj.optimization.customeqconfcn = @(x,u,data) x(end,:)';
validate the prediction model and custom functions at the initial states (x0
) and initial inputs (u0
) of the robot.
x0 = [-10;-10;pi/2;0;0;0]; u0 = zeros(nu,1); validatefcns(nlobj,x0,u0);
model.statefcn is ok. jacobian.statefcn is ok. no output function specified. assuming "y = x" in the prediction model. optimization.customcostfcn is ok. optimization.customeqconfcn is ok. analysis of user-provided model, cost, and constraint functions complete.
create linear mpc controllers from nonlinear mpc controller
create a nonlinear mpc controller with four states, one output variable, one manipulated variable, and one measured disturbance.
nlobj = nlmpc(4,1,'mv',1,'md',2);
specify the controller sample time and horizons.
nlobj.predictionhorizon = 10; nlobj.controlhorizon = 3;
specify the state function of the prediction model.
nlobj.model.statefcn = 'oxidationstatefcn';
specify the prediction model output function and the output variable scale factor.
nlobj.model.outputfcn = @(x,u) x(3); nlobj.outputvariables.scalefactor = 0.03;
specify the manipulated variable constraints and scale factor.
nlobj.manipulatedvariables.min = 0.0704; nlobj.manipulatedvariables.max = 0.7042; nlobj.manipulatedvariables.scalefactor = 0.6;
specify the measured disturbance scale factor.
nlobj.measureddisturbances.scalefactor = 0.5;
compute the state and input operating conditions for three linear mpc controllers using the fsolve
function.
options = optimoptions('fsolve','display','none'); ulow = [0.38 0.5]; xlow = fsolve(@(x) oxidationstatefcn(x,ulow),[1 0.3 0.03 1],options); umedium = [0.24 0.5]; xmedium = fsolve(@(x) oxidationstatefcn(x,umedium),[1 0.3 0.03 1],options); uhigh = [0.15 0.5]; xhigh = fsolve(@(x) oxidationstatefcn(x,uhigh),[1 0.3 0.03 1],options);
create linear mpc controllers for each of these nominal conditions.
mpcobjlow = converttompc(nlobj,xlow,ulow); mpcobjmedium = converttompc(nlobj,xmedium,umedium); mpcobjhigh = converttompc(nlobj,xhigh,uhigh);
you can also create multiple controllers using arrays of nominal conditions. the number of rows in the arrays specifies the number of controllers to create. the linear controllers are returned as a cell array of mpc
objects.
u = [ulow; umedium; uhigh];
x = [xlow; xmedium; xhigh];
mpcobjs = converttompc(nlobj,x,u);
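for example (this line is not part of the original example code), you can retrieve the controller for the medium operating point by indexing into the cell array:
mpcobjmediumfromarray = mpcobjs{2};   % controller for the second row of x and u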
view the properties of the mpcobjlow controller.
mpcobjlow
mpc object (created on 19-aug-2023 23:31:32):
---------------------------------------------
sampling time:      1 (seconds)
prediction horizon: 10
control horizon:    3

plant model:
                                     --------------
  1  manipulated variable(s)   -->|  4 states    |
                                  |              |-->  1 measured output(s)
  1  measured disturbance(s)   -->|  2 inputs    |
                                  |              |-->  0 unmeasured output(s)
  0  unmeasured disturbance(s) -->|  1 outputs   |
                                     --------------
indices:
  (input vector)    manipulated variables: [1 ]
                    measured disturbances: [2 ]
  (output vector)        measured outputs: [1 ]

disturbance and noise models:
        output disturbance model: default (type "getoutdist(mpcobjlow)" for details)
         measurement noise model: default (unity gain after scaling)

weights:
        manipulatedvariables: 0
    manipulatedvariablesrate: 0.1000
             outputvariables: 1
                         ecr: 100000

state estimation:  default kalman filter (type "getestimator(mpcobjlow)" for details)

constraints:
 0.0704 <= u1 <= 0.7042, u1/rate is unconstrained, y1 is unconstrained

use built-in "active-set" qp solver with maxiterations of 120.
plan optimal trajectory using nonlinear mpc
create a nonlinear mpc controller with six states, six outputs, and four inputs.
nx = 6;
ny = 6;
nu = 4;
nlobj = nlmpc(nx,ny,nu);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.
specify the controller sample time and horizons.
ts = 0.4;
p = 30;
c = 4;
nlobj.ts = ts;
nlobj.predictionhorizon = p;
nlobj.controlhorizon = c;
specify the prediction model state function and the jacobian of the state function. for this example, use a model of a flying robot.
nlobj.model.statefcn = "flyingrobotstatefcn";
nlobj.jacobian.statefcn = "flyingrobotstatejacobianfcn";
specify a custom cost function for the controller that replaces the standard cost function.
nlobj.optimization.customcostfcn = @(x,u,e,data) ts*sum(sum(u(1:p,:)));
nlobj.optimization.replacestandardcost = true;
specify a custom constraint function for the controller.
nlobj.optimization.customeqconfcn = @(x,u,data) x(end,:)';
specify linear constraints on the manipulated variables.
for ct = 1:nu
    nlobj.mv(ct).min = 0;
    nlobj.mv(ct).max = 1;
end
validate the prediction model and custom functions at the initial states (x0) and initial inputs (u0) of the robot.
x0 = [-10;-10;pi/2;0;0;0];
u0 = zeros(nu,1);
validatefcns(nlobj,x0,u0);
model.statefcn is ok.
jacobian.statefcn is ok.
no output function specified. assuming "y = x" in the prediction model.
optimization.customcostfcn is ok.
optimization.customeqconfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.
compute the optimal state and manipulated variable trajectories, which are returned in the info output argument.
[~,~,info] = nlmpcmove(nlobj,x0,u0);
slack variable unused or zero-weighted in your custom cost function. all constraints will be hard.
plot the optimal trajectories.
flyingrobotplotplanning(info,ts)
optimal fuel consumption = 1.884953
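flyingrobotplotplanning is a helper function associated with this example. if you do not have it, a minimal sketch of an equivalent plot (assuming the xopt, mvopt, and topt fields of the info structure returned by nlmpcmove) is:
% plot the planned position trajectory
figure
plot(info.topt,info.xopt(:,1),info.topt,info.xopt(:,2))
xlabel('time')
legend('x position','y position')
title('planned flying robot position')
% plot the planned thrust (manipulated variable) sequences
figure
stairs(info.topt,info.mvopt)
xlabel('time')
title('planned thrust inputs')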
simulate closed-loop control using nonlinear mpc controller
create a nonlinear mpc controller with four states, two outputs, and one input.
nlobj = nlmpc(4,2,1);
zero weights are applied to one or more ovs because there are fewer mvs than ovs.
specify the sample time and horizons of the controller.
ts = 0.1;
nlobj.ts = ts;
nlobj.predictionhorizon = 10;
nlobj.controlhorizon = 5;
specify the state function for the controller, which is in the file pendulumdt0.m. this discrete-time model integrates the continuous-time model defined in pendulumct0.m using a multistep forward euler method.
nlobj.model.statefcn = "pendulumdt0";
nlobj.model.iscontinuoustime = false;
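the contents of pendulumdt0.m are not reproduced on this page. as an illustration only, a multistep forward euler discretization of a continuous-time state function (here assumed to be pendulumct0) over one sample time ts might look like the following sketch, which is not necessarily identical to the shipped file:
function xk1 = pendulumdt0sketch(xk,u,ts)
% approximate the discrete-time update by taking m forward euler steps
% of size ts/m through the continuous-time model (illustrative sketch).
m = 10;                                     % integration steps per sample time (assumed)
delta = ts/m;                               % integration step size
xk1 = xk;
for i = 1:m
    xk1 = xk1 + delta*pendulumct0(xk1,u);   % forward euler update
end
end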
the prediction model uses an optional parameter, ts, to represent the sample time. specify the number of parameters.
nlobj.model.numberofparameters = 1;
specify the output function of the model, passing the sample time parameter as an input argument.
nlobj.model.outputfcn = @(x,u,ts) [x(1); x(3)];
define standard weights and constraints for the controller.
nlobj.weights.outputvariables = [3 3];
nlobj.weights.manipulatedvariablesrate = 0.1;
nlobj.ov(1).min = -10;
nlobj.ov(1).max = 10;
nlobj.mv.min = -100;
nlobj.mv.max = 100;
validate the prediction model functions.
x0 = [0.1;0.2;-pi/2;0.3];
u0 = 0.4;
validatefcns(nlobj, x0, u0, [], {ts});
model.statefcn is ok.
model.outputfcn is ok.
analysis of user-provided model, cost, and constraint functions complete.
only two of the plant states are measurable. therefore, create an extended kalman filter for estimating the four plant states. its state transition function is defined in pendulumstatefcn.m and its measurement function is defined in pendulummeasurementfcn.m.
ekf = extendedkalmanfilter(@pendulumstatefcn,@pendulummeasurementfcn);
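the measurement function maps the full state vector to the two measurable quantities, the cart position and the pendulum angle. a minimal sketch of such a measurement function (illustrative only; the shipped pendulummeasurementfcn.m may differ) is:
function y = pendulummeasurementfcnsketch(x)
% return the measurable states: cart position x(1) and pendulum angle x(3)
y = [x(1); x(3)];
end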
define initial conditions for the simulation, initialize the extended kalman filter state, and specify a zero initial manipulated variable value.
x = [0;0;-pi;0];
y = [x(1);x(3)];
ekf.state = x;
mv = 0;
specify the output reference value.
yref = [0 0];
create an nlmpcmoveopt object, and specify the sample time parameter.
nloptions = nlmpcmoveopt;
nloptions.parameters = {ts};
run the simulation for 10 seconds. during each control interval:
- correct the previous prediction using the current measurement.
- compute optimal control moves using nlmpcmove. this function returns the computed optimal sequences in nloptions. passing the updated options object to nlmpcmove in the next control interval provides initial guesses for the optimal sequences.
- predict the model states.
- apply the first computed optimal control move to the plant, updating the plant states.
- generate sensor data with white noise.
- save the plant states.
duration = 10;
xhistory = x;
for ct = 1:(duration/ts)
    % correct previous prediction
    xk = correct(ekf,y);
    % compute optimal control moves
    [mv,nloptions] = nlmpcmove(nlobj,xk,mv,yref,[],nloptions);
    % predict prediction model states for the next iteration
    predict(ekf,[mv; ts]);
    % implement first optimal control move
    x = pendulumdt0(x,mv,ts);
    % generate sensor data
    y = x([1 3]) + randn(2,1)*0.01;
    % save plant states
    xhistory = [xhistory x];
end
plot the resulting state trajectories.
figure
subplot(2,2,1)
plot(0:ts:duration,xhistory(1,:))
xlabel('time')
ylabel('z')
title('cart position')
subplot(2,2,2)
plot(0:ts:duration,xhistory(2,:))
xlabel('time')
ylabel('zdot')
title('cart velocity')
subplot(2,2,3)
plot(0:ts:duration,xhistory(3,:))
xlabel('time')
ylabel('theta')
title('pendulum angle')
subplot(2,2,4)
plot(0:ts:duration,xhistory(4,:))
xlabel('time')
ylabel('thetadot')
title('pendulum velocity')
version history
introduced in r2018b
see also
topics
- trajectory optimization and control of flying robot using nonlinear mpc