
Trajectory Optimization and Control of Flying Robot Using Nonlinear MPC

This example shows how to find the optimal trajectory that brings a flying robot from one location to another with minimum fuel cost using a nonlinear MPC controller. In addition, another nonlinear MPC controller, along with an extended Kalman filter, drives the robot along the optimal trajectory in closed-loop simulation.

Flying Robot

The flying robot in this example has four thrusters to move it around in a 2-D space. The model has six states:

  • x(1) - x inertial coordinate of the center of mass

  • x(2) - y inertial coordinate of the center of mass

  • x(3) - theta, robot (thrust) direction

  • x(4) - vx, velocity in the x direction

  • x(5) - vy, velocity in the y direction

  • x(6) - omega, angular velocity of theta

For more information on the flying robot, see [1]. The model in the paper uses two thrusts ranging from -1 to 1. However, this example assumes that there are four physical thrusts in the robot, ranging from 0 to 1, to achieve the same control freedom.
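
The dynamics behind this setup follow the free-flying robot model in [1], with the four physical thrusts combined into two net thrusts. The sketch below illustrates dynamics of this form; the helper name and the moment-arm parameters alpha and beta are assumptions for illustration only, and FlyingRobotStateFcn.m (used later in this example) remains the authoritative model.

function dxdt = exampleFlyingRobotDynamics(x,u)
    % Illustrative continuous-time dynamics (not the shipped FlyingRobotStateFcn.m).
    alpha = 0.2;                    % assumed moment arm of the first thrust pair
    beta  = 0.2;                    % assumed moment arm of the second thrust pair
    theta = x(3);                   % robot (thrust) direction
    T1 = u(1) - u(2);               % net thrust of the first opposing pair
    T2 = u(3) - u(4);               % net thrust of the second opposing pair
    dxdt = [x(4);                   % d(x)/dt = vx
            x(5);                   % d(y)/dt = vy
            x(6);                   % d(theta)/dt = omega
            (T1 + T2)*cos(theta);   % d(vx)/dt
            (T1 + T2)*sin(theta);   % d(vy)/dt
            alpha*T1 - beta*T2];    % d(omega)/dt
end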

Trajectory Planning

The robot initially rests at [-10,-10] with an orientation angle of pi/2 radians (facing north). The flying maneuver for this example is to move and park the robot at the final location [0,0] with an angle of 0 radians (facing east) in 12 seconds. The goal is to find the optimal path such that the total amount of fuel consumed by the thrusters during the maneuver is minimized.

Nonlinear MPC is an ideal tool for trajectory planning problems because it solves an open-loop constrained nonlinear optimization problem given the current plant states. With the availability of a nonlinear dynamic model, MPC can make more accurate decisions.

In this example, the target prediction time is 12 seconds. Therefore, specify a sample time of 0.4 seconds and a prediction horizon of 30 steps. Create a multistage nonlinear MPC object with 6 states and 4 inputs. By default, all the inputs are manipulated variables (MVs).

Ts = 0.4;
p = 30;
nx = 6;
nu = 4;
nlobj = nlmpcMultistage(p,nx,nu);
nlobj.Ts = Ts;

For a path planning problem, it is typical to allow MPC to have free moves at each prediction step, which provides the maximum number of decision variables for the optimization problem. Since planning usually runs at a much slower sampling rate than a feedback controller, the extra computational load introduced by a larger optimization problem is acceptable.

Specify the prediction model state function using the function name. You can also specify functions using a function handle. For details on the state function, open FlyingRobotStateFcn.m. For more information on specifying the prediction model, see the Model Predictive Control Toolbox documentation.

nlobj.Model.StateFcn = "FlyingRobotStateFcn";

Specify the Jacobian of the state function using a function handle. It is best practice to provide an analytical Jacobian for the prediction model. Doing so significantly improves simulation efficiency. For details on the Jacobian function, open FlyingRobotStateJacobianFcn.m.

nlobj.Model.StateJacFcn = @FlyingRobotStateJacobianFcn;

A trajectory planning problem usually involves a nonlinear cost function, which can be used to find the shortest distance, the maximal profit, or, as in this case, the minimal fuel consumption. Because the thrust value is a direct indicator of fuel consumption, compute the fuel cost as the sum of the thrust values at each prediction step from stage 1 to stage p. Specify this cost function using a named function. For more information on specifying cost functions, see the Model Predictive Control Toolbox documentation.

for ct = 1:p
    nlobj.Stages(ct).CostFcn = 'FlyingRobotCostFcn';
end
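
For reference, a stage cost of this kind can be as simple as summing the commanded thrusts at each stage. The sketch below is illustrative and uses a hypothetical function name; FlyingRobotCostFcn.m defines the cost actually used by this example.

function J = exampleThrustCostFcn(stage,x,u)
    % Illustrative stage cost: fuel use is proportional to total commanded thrust.
    J = sum(u);
end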

The goal of the maneuver is to park the robot at [0,0] with an angle of 0 radians at the 12th second. Specify this goal as the terminal state constraint, where every position and velocity state at the last prediction step (stage p+1) must be zero. For more information on specifying constraint functions, see the Model Predictive Control Toolbox documentation.

nlobj.Model.TerminalState = zeros(6,1);

It is best practice to provide analytical Jacobian functions for your stage cost and constraint functions as well. However, this example intentionally skips them so that their Jacobians are computed by the nonlinear MPC controller using the built-in numerical perturbation method.

Each thrust has an operating range between 0 and 1, which translates into lower and upper bounds on the MVs.

for ct = 1:nu
    nlobj.MV(ct).Min = 0;
    nlobj.MV(ct).Max = 1;
end

Specify the initial conditions for the robot.

x0 = [-10;-10;pi/2;0;0;0];  % Robot parks at [-10, -10], facing north
u0 = zeros(nu,1);           % Thrust is zero

It is best practice to validate the user-provided model, cost, and constraint functions and their Jacobians. To do so, use the validateFcns command.

validateFcns(nlobj,x0,u0);
Model.StateFcn is OK.
Model.StateJacFcn is OK.
"CostFcn" of the following stages [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30] are OK.
Analysis of user-provided model, cost, and constraint functions complete.

The optimal state and MV trajectories can be found by calling the nlmpcmove command once, given the current state x0 and last MV u0. The optimal cost and trajectories are returned as part of the info output argument.

[~,~,info] = nlmpcmove(nlobj,x0,u0);

Plot the optimal trajectory. The optimal cost is 7.8.

FlyingRobotPlotPlanning(info,Ts);
Optimal fuel consumption =   7.797825

The first plot shows the optimal trajectory of the six robot states during the maneuver. The second plot shows the corresponding optimal MV profiles for the four thrusts. The third plot shows the X-Y position trajectory of the robot, moving from [-10 -10 pi/2] to [0 0 0].

Feedback Control for Path Following

After the optimal trajectory is found, a feedback controller is required to move the robot along the path. In theory, you can apply the optimal MV profile directly to the thrusters to implement feedforward control. However, in practice, a feedback controller is needed to reject disturbances and compensate for modeling errors.

You can use different feedback control techniques for tracking. In this example, you use a generic nonlinear MPC controller to move the robot to the final location. In this path tracking problem, you track references for all six states (the number of outputs equals the number of states).

ny = 6;
nlobj_tracking = nlmpc(nx,ny,nu);
Zero weights are applied to one or more OVs because there are fewer MVs than OVs.

Use the same state function and its Jacobian function.

nlobj_tracking.Model.StateFcn = nlobj.Model.StateFcn;
nlobj_tracking.Jacobian.StateFcn = nlobj.Model.StateJacFcn;

For tracking control applications, reduce the computational effort by specifying a shorter prediction horizon (no need to look far into the future) and a shorter control horizon (for example, free moves are allocated only at the first few prediction steps).

nlobj_tracking.Ts = Ts;
nlobj_tracking.PredictionHorizon = 10;
nlobj_tracking.ControlHorizon = 4;

The default cost function in nonlinear MPC is a standard quadratic cost function suitable for reference tracking and disturbance rejection. For tracking, the tracking error has a higher priority (larger penalty weights on outputs) than the control effort (smaller penalty weights on MV rates).

nlobj_tracking.Weights.ManipulatedVariablesRate = 0.2*ones(1,nu);
nlobj_tracking.Weights.OutputVariables = 5*ones(1,nx);

Set the same bounds for the thruster inputs.

for ct = 1:nu
    nlobj_tracking.MV(ct).Min = 0;
    nlobj_tracking.MV(ct).Max = 1;
end

Also, to reduce fuel consumption, it is clear that u(1) and u(2) cannot both be positive at any time during the operation. Therefore, implement equality constraints such that u(1)*u(2) must be 0 for all prediction steps. Apply similar constraints for u(3) and u(4).

nlobj_tracking.Optimization.CustomEqConFcn = ...
    @(X,U,data) [U(1:end-1,1).*U(1:end-1,2); U(1:end-1,3).*U(1:end-1,4)];

Validate your prediction model and custom functions, and their Jacobians.

validateFcns(nlobj_tracking,x0,u0);
Model.StateFcn is OK.
Jacobian.StateFcn is OK.
No output function specified. Assuming "y = x" in the prediction model.
Optimization.CustomEqConFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Nonlinear State Estimation

In this example, only the three position states (x, y, and angle) are measured. The velocity states are unmeasured and must be estimated. Use an extended Kalman filter (EKF) from Control System Toolbox™ for nonlinear state estimation.

Because an EKF requires a discrete-time model, you use the trapezoidal rule to transition from x(k) to x(k+1), which requires the solution of nx nonlinear algebraic equations. For more information, open FlyingRobotStateFcnDiscreteTime.m.

DStateFcn = @(xk,uk,Ts) FlyingRobotStateFcnDiscreteTime(xk,uk,Ts);
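
The trapezoidal rule makes the state update implicit in x(k+1), so each step requires solving nx nonlinear algebraic equations. The sketch below shows one way to do this with fsolve (Optimization Toolbox); the helper name, the solver choice, and the Euler initial guess are assumptions for illustration, while FlyingRobotStateFcnDiscreteTime.m is the implementation this example actually uses.

function xk1 = exampleTrapezoidalUpdate(xk,uk,Ts)
    % Illustrative trapezoidal-rule update: solve
    %   xk1 = xk + (Ts/2)*( f(xk,uk) + f(xk1,uk) )
    % for xk1, starting from an explicit Euler step as the initial guess.
    f = @(x) FlyingRobotStateFcn(x,uk);              % continuous-time dynamics
    residual = @(z) z - xk - (Ts/2)*(f(xk) + f(z));  % implicit update equations
    opts = optimoptions('fsolve','Display','none');
    xk1 = fsolve(residual,xk + Ts*f(xk),opts);
end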

Measurement can help the EKF correct its state estimation. Only the first three states are measured.

DMeasFcn = @(xk) xk(1:3);

Create the EKF, and indicate that the measurements have little noise.

EKF = extendedKalmanFilter(DStateFcn,DMeasFcn,x0);
EKF.MeasurementNoise = 0.01;

Closed-Loop Simulation of Tracking Control

Simulate the system for 32 steps with correct initial conditions.

Tsteps = 32;
xHistory = x0';
uHistory = [];
lastMV = zeros(nu,1);

The reference signals are the optimal state trajectories computed at the planning stage. When passing these trajectories to the nonlinear MPC controller, the current and future trajectory is available for previewing.

Xopt = info.Xopt;
Xref = [Xopt(2:p+1,:);repmat(Xopt(end,:),Tsteps-p,1)];

Use the nlmpcmove and nlmpcmoveopt commands for closed-loop simulation.

hbar = waitbar(0,'Simulation Progress');
options = nlmpcmoveopt;
for k = 1:Tsteps
    % Obtain plant output measurements with sensor noise.
    yk = xHistory(k,1:3)' + randn*0.01;
    % Correct state estimation based on the measurements.
    xk = correct(EKF, yk);
    % Compute the control moves with reference previewing.
    [uk,options] = nlmpcmove(nlobj_tracking,xk,lastMV,Xref(k:min(k+9,Tsteps),:),[],options);
    % Predict the state for the next step.
    predict(EKF,uk,Ts);
    % Store the control move and update the last MV for the next step.
    uHistory(k,:) = uk'; %#ok<*SAGROW>
    lastMV = uk;
    % Update the real plant states for the next step by solving the
    % continuous-time ODEs based on current states xk and input uk.
    odefun = @(t,xk) FlyingRobotStateFcn(xk,uk);
    [tout,yout] = ode45(odefun,[0 Ts], xHistory(k,:)');
    % Store the state values.
    xHistory(k+1,:) = yout(end,:);
    % Update the status bar.
    waitbar(k/Tsteps, hbar);
end
close(hbar)

Compare the planned and actual closed-loop trajectories.

FlyingRobotPlotTracking(info,Ts,p,Tsteps,xHistory,uHistory);
Actual fuel consumption =  10.706483

The nonlinear MPC feedback controller successfully moves the robot (blue blocks) along the optimal trajectory (yellow blocks) and parks it at the final location (red block) in the last figure.

The actual fuel cost is higher than the planned cost. The main reason for this result is that, because the feedback controller uses shorter prediction and control horizons, the control decision at each interval is suboptimal compared to the optimization problem used in the planning stage.

Identify a Neural State Space Model for the Flying Robot System

In industrial applications, it is sometimes difficult to manually derive a nonlinear state-space dynamic model using first principles. An alternative approach to first-principles modeling is black-box modeling based on experimental data.

This section shows you how to train a neural network to approximate the state function and then use it as the prediction model for nonlinear MPC. This approach relies on the idNeuralStateSpace object from System Identification Toolbox™.

The training procedure is implemented in the supporting file trainFlyingRobotNeuralStateSpaceModel.m. You can use it as a simple template and modify it to fit your application. The training takes about 10 minutes to complete, depending on your computer. To try it, set the DoTraining variable to true instead of false.

DoTraining = false;
if DoTraining
    nss = trainFlyingRobotNeuralStateSpaceModel;
else
    load nssModel.mat %#ok<*UNRCH>
end

Generate MATLAB Files and Use Them for Prediction

After the idNeuralStateSpace model is trained, you can automatically generate MATLAB files for both the state function and its analytical Jacobian function using the generateMATLABFunction command.

nssStateFcnName = 'nssStateFcn';
generateMATLABFunction(nss,nssStateFcnName);

Use the Neural Network Model as the Prediction Model in Nonlinear MPC

The generated MATLAB files are compatible with the interface required by the nonlinear MPC object. Therefore, you can use them directly in the nonlinear MPC object.

nlobj_tracking.Model.StateFcn = nssStateFcnName;
nlobj_tracking.Jacobian.StateFcn = [nssStateFcnName 'Jacobian'];

Run Simulation Again with the Neural State Space Prediction Model

Use the nlmpcmove and nlmpcmoveopt commands for closed-loop simulation.

EKF = extendedKalmanFilter(DStateFcn,DMeasFcn,x0);
xHistory = x0';
uHistory = [];
lastMV = zeros(nu,1);
hbar = waitbar(0,'Simulation Progress');
options = nlmpcmoveopt;
for k = 1:Tsteps
    % Obtain plant output measurements.
    yk = xHistory(k,1:3)';
    % Correct state estimation based on the measurements.
    xk = correct(EKF, yk);
    % Compute the control moves with reference previewing.
    [uk,options] = nlmpcmove(nlobj_tracking,xk,lastMV,Xref(k:min(k+9,Tsteps),:),[],options);
    % Predict the state for the next step.
    predict(EKF,uk,Ts);
    % Store the control move and update the last MV for the next step.
    uHistory(k,:) = uk';
    lastMV = uk;
    % Update the real plant states for the next step by solving the
    % continuous-time ODEs based on current states xk and input uk.
    odefun = @(t,xk) FlyingRobotStateFcn(xk,uk);
    [tout,yout] = ode45(odefun,[0 Ts], xHistory(k,:)');
    % Store the state values.
    xHistory(k+1,:) = yout(end,:);
    % Update the status bar.
    waitbar(k/Tsteps, hbar);
end
close(hbar)

Compare the planned and actual closed-loop trajectories. The response is close to what the first-principles-based prediction model produces.

FlyingRobotPlotTracking(info,Ts,p,Tsteps,xHistory,uHistory);
Actual fuel consumption =  17.978190

References

[1] Y. Sakawa. "Trajectory Planning of a Free-Flying Robot by Using the Optimal Control." Optimal Control Applications and Methods, Vol. 20, 1999, pp. 235-248.
