Tune PI Controller Using Reinforcement Learning
This example shows how to tune the two gains of a PI controller using the twin-delayed deep deterministic policy gradient (TD3) reinforcement learning algorithm. The performance of the tuned controller is compared with that of a controller tuned using the Control System Tuner app. Using the Control System Tuner app to tune controllers in Simulink® requires Simulink Control Design™ software.
For relatively simple control tasks with a small number of tunable parameters, model-based tuning techniques can produce good results with a faster tuning process than model-free RL-based methods. However, RL methods can be more suitable for highly nonlinear systems or adaptive controller tuning. To facilitate the controller comparison, both tuning methods use a linear-quadratic-Gaussian (LQG) objective function. For an example that uses a DDPG agent to implement an LQR controller, see Train DDPG Agent to Control Double Integrator System.
This example uses a reinforcement learning (RL) agent to compute the gains for a PI controller. For an example that replaces the PI controller with a neural network controller, see Create Simulink Environment and Train Agent.
Environment Model
The environment model for this example is a water tank model. The goal of this control system is to maintain the level of water in a tank to match a reference value.
open_system('watertankLQG')
The model includes process noise with variance $E\!\left[n^2(t)\right] = 1$.
To maintain the water level while minimizing the control effort u, the controllers in this example use the following LQG criterion:

$$J = \lim_{T\to\infty} E\!\left[\frac{1}{T}\int_0^T \left(\left(\mathrm{ref} - H(t)\right)^2 + 0.01\,u^2(t)\right)dt\right]$$
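As an informal illustration (this code is not part of the example files), you can approximate the criterion offline from uniformly sampled logged signals, using the control-effort weight from the criterion above. The names href, h, and u below are hypothetical vectors for the reference, tank level, and control signal.

% Hypothetical offline approximation of the LQG criterion J.
% href, h, and u are assumed equal-length column vectors sampled at a fixed
% rate, so the mean over samples approximates the time average in the integral.
e = href - h;                   % tracking error
J = mean(e.^2 + 0.01*u.^2);     % approximate LQG cost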
To simulate the controller in this model, you must specify the simulation time Tf and the controller sample time Ts in seconds.
Ts = 0.1;
Tf = 10;
For more information about the water tank model, see watertank Simulink Model (Simulink Control Design).
Tune PI Controller Using Control System Tuner
To tune a controller in Simulink using Control System Tuner, you must specify the controller block as a tuned block and define the goals for the tuning process. For more information on using Control System Tuner, see the Simulink Control Design documentation.
For this example, open the saved session ControlSystemTunerSession.mat using Control System Tuner. This session specifies the PID Controller block in the watertankLQG model as a tuned block and contains an LQG tuning goal.
controlSystemTuner("ControlSystemTunerSession")
To tune the controller, on the Tuning tab, click Tune.
The tuned proportional and integral gains are approximately 9.8 and 1e-6, respectively.
Kp_CST = 9.80199999804512;
Ki_CST = 1.00019996230706e-06;
Create Environment for Training Agent
To define the model for training the RL agent, modify the water tank model using the following steps.
Delete the PID Controller block.
Insert an RL Agent block.
Create the observation vector $\left[\,\int e\,dt \;\; e\,\right]^T$, where $e = r - H$, $H$ is the height of the water in the tank, and $r$ is the reference water height. Connect the observation signal to the RL Agent block.
Define the reward function for the RL agent as the negative of the LQG cost, that is, $\mathrm{reward} = -J$. The RL agent maximizes this reward, thus minimizing the LQG cost. A sketch of the equivalent per-step expression follows this list.
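For intuition only, the reward at each time step amounts to the following expression. This is a hedged sketch rather than the actual block diagram in the model; r, h, and u are assumed names for the reference height, measured level, and controller output.

% Hypothetical per-step reward: the negative of the instantaneous LQG cost.
reward = -((r - h)^2 + 0.01*u^2);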
The resulting model is rlwatertankPIDTune.slx.
mdl = 'rlwatertankPIDTune';
open_system(mdl)
Create the environment interface object. To do so, use the localCreatePIDEnv function defined at the end of this example.
[env,obsInfo,actInfo] = localCreatePIDEnv(mdl);
Extract the observation and action dimensions for this environment. Use prod(obsInfo.Dimension) and prod(actInfo.Dimension) to return the total number of elements in the observation and action spaces, respectively, regardless of whether they are arranged as row vectors, column vectors, or matrices.
numObs = prod(obsInfo.Dimension);
numAct = prod(actInfo.Dimension);
Fix the random generator seed for reproducibility.
rng(0)
Create TD3 Agent
To create the actor, first create a deep neural network with the observation input and the action output. For more information, see rlContinuousDeterministicActor.
You can model a PI controller as a neural network with one fully connected layer that takes the error and the integrated error as observations:

$$u = \left[\,\int e\,dt \;\;\; e\,\right]\begin{bmatrix} K_i \\ K_p \end{bmatrix}$$

Here:

u is the output of the actor neural network.
Ki and Kp are the absolute values of the neural network weights.
$e = r - H$, where $H$ is the height of the water in the tank and $r$ is the reference height.
Gradient descent optimization can drive the weights to negative values. To avoid negative weights, replace the ordinary fullyConnectedLayer with a fullyConnectedPILayer. This layer ensures that the weights are positive by implementing the function $Y = \mathrm{abs}(\mathrm{Weights})\,X$. This layer is defined in fullyConnectedPILayer.m. For more information on defining custom layers, see Define Custom Deep Learning Layers (Deep Learning Toolbox).
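The fullyConnectedPILayer.m file shipped with the example is not reproduced here. As a rough sketch only, and assuming just the behavior described above, such a custom layer could look like the following; the actual implementation may differ in its details.

classdef fullyConnectedPILayer < nnet.layer.Layer
    % Sketch of a fully connected layer whose effective weights are
    % forced to be positive by taking their absolute value.
    properties (Learnable)
        Weights
    end
    methods
        function layer = fullyConnectedPILayer(weights,name)
            layer.Name = name;
            layer.Description = 'Fully connected PI layer with positive weights';
            layer.Weights = weights;
        end
        function Y = predict(layer,X)
            % Y = abs(Weights)*X keeps the learned PI gains positive.
            Y = abs(layer.Weights)*X;
        end
    end
end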
initialGain = single([1e-3 2]);
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(initialGain,'ActOutLyr')
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
The agent in this example is a twin-delayed deep deterministic policy gradient (TD3) agent. TD3 agents rely on actor and critic approximator objects to learn the optimal policy.
A TD3 agent approximates the long-term reward given observations and actions using two critic value-function representations. To create the critics, first create a deep neural network with two inputs, the observation and action, and one output.
To create the critics, use the localCreateCriticNetwork function defined at the end of this example. Use the same network structure for both critic representations.
criticNet = localCreateCriticNetwork(numObs,numAct);
Create the critic objects using the specified neural network and the environment action and observation specifications. As additional arguments, also pass the names of the network layers to be connected with the observation and action channels.
critic1 = rlQValueFunction(dlnetwork(criticNet), ...
    obsInfo,actInfo, ...
    ObservationInputNames='stateInLyr', ...
    ActionInputNames='actionInLyr');
critic2 = rlQValueFunction(dlnetwork(criticNet), ...
    obsInfo,actInfo, ...
    ObservationInputNames='stateInLyr', ...
    ActionInputNames='actionInLyr');
critic = [critic1 critic2];
Configure the agent using the following options.
Set the agent to use the controller sample time Ts.
Set the mini-batch size to 128 experience samples.
Set the experience buffer length to 1e6.
Set the exploration model and target policy smoothing model to use Gaussian noise with variance of 0.1.
Specify training options for the actor and critic.
actorOpts = rlOptimizerOptions( ...
    LearnRate=1e-3, ...
    GradientThreshold=1);
criticOpts = rlOptimizerOptions( ...
    LearnRate=1e-3, ...
    GradientThreshold=1);
Specify the TD3 agent options using rlTD3AgentOptions. Include the training options for the actor and critic.
agentOpts = rlTD3AgentOptions( ...
    SampleTime=Ts, ...
    MiniBatchSize=128, ...
    ExperienceBufferLength=1e6, ...
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);
You can also set or modify the agent options using dot notation.
agentOpts.TargetPolicySmoothModel.StandardDeviation = sqrt(0.1);
Create the TD3 agent using the specified actor representation, critic representation, and agent options. For more information, see rlTD3Agent.
agent = rlTD3Agent(actor,critic,agentOpts);
Train Agent
To train the agent, first specify the following training options.
Run each training for at most 1000 episodes, with each episode lasting at most 100 time steps.
Display the training progress in the Episode Manager (set the Plots option) and disable the command-line display (set the Verbose option).
Stop training when the agent receives an average cumulative reward greater than -355 over 100 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
maxEpisodes = 1000;
maxSteps = ceil(Tf/Ts);
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=maxEpisodes, ...
    MaxStepsPerEpisode=maxSteps, ...
    ScoreAveragingWindowLength=100, ...
    Verbose=false, ...
    Plots="training-progress", ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=-355);
Train the agent using the train function. Training this agent is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load pretrained agent for the example.
    load("WaterTankPIDtd3.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation.
simOpts = rlSimulationOptions(MaxSteps=maxSteps);
experiences = sim(env,agent,simOpts);
The integral and proportional gains of the PI controller are the absolute weights of the actor representation. To obtain the weights, first extract the learnable parameters from the actor.
actor = getActor(agent);
parameters = getLearnableParameters(actor);
Obtain the controller gains.
Ki = abs(parameters{1}(1))
Ki = single
    0.3958
Kp = abs(parameters{1}(2))
Kp = single
    8.0822
Apply the gains obtained from the RL agent to the original PI controller block and run a step-response simulation.
mdlTest = 'watertankLQG';
open_system(mdlTest)
set_param([mdlTest '/PID Controller'],'P',num2str(Kp))
set_param([mdlTest '/PID Controller'],'I',num2str(Ki))
sim(mdlTest)
Extract the step response information, LQG cost, and stability margin for the simulation. To compute the stability margin, use the localStabilityAnalysis function defined at the end of this example.
rlStep = simout;
rlCost = cost;
rlStabilityMargin = localStabilityAnalysis(mdlTest);
Apply the gains obtained using Control System Tuner to the original PI controller block and run a step-response simulation.
set_param([mdlTest '/PID Controller'],'P',num2str(Kp_CST))
set_param([mdlTest '/PID Controller'],'I',num2str(Ki_CST))
sim(mdlTest)
cstStep = simout;
cstCost = cost;
cstStabilityMargin = localStabilityAnalysis(mdlTest);
Compare Controller Performance
Plot the step response for each system.
figure
plot(cstStep)
hold on
plot(rlStep)
grid on
legend('Control System Tuner','RL',Location="southeast")
title('Step Response')
Analyze the step response for both simulations.
rlStepInfo = stepinfo(rlStep.Data,rlStep.Time);
cstStepInfo = stepinfo(cstStep.Data,cstStep.Time);
stepInfoTable = struct2table([cstStepInfo rlStepInfo]);
stepInfoTable = removevars(stepInfoTable,{'SettlingMin', ...
    'TransientTime','SettlingMax','Undershoot','PeakTime'});
stepInfoTable.Properties.RowNames = {'CST','RL'};
stepInfoTable
stepInfoTable=2×4 table
           RiseTime    SettlingTime    Overshoot     Peak 
           ________    ____________    _________    ______
    CST    0.77737        1.3278        0.33125     9.9023
    RL     0.98024        1.7073        0.40451     10.077
Analyze the stability for both simulations.
stabilityMarginTable = struct2table( ...
    [cstStabilityMargin rlStabilityMargin]);
stabilityMarginTable = removevars(stabilityMarginTable,{ ...
    'GMFrequency','PMFrequency','DelayMargin','DMFrequency'});
stabilityMarginTable.Properties.RowNames = {'CST','RL'};
stabilityMarginTable
stabilityMarginTable=2×3 table
           GainMargin    PhaseMargin    Stable
           __________    ___________    ______
    CST      8.1616         84.124       true 
    RL       9.9226         84.241       true 
Compare the cumulative LQG cost for the two controllers. The RL-tuned controller yields a slightly lower LQG cost.
rlCumulativeCost = sum(rlCost.Data)
rlCumulativeCost = -375.9135
cstCumulativeCost = sum(cstCost.Data)
cstCumulativeCost = -376.9373
Both controllers produce stable responses, with the controller tuned using Control System Tuner producing a faster response. However, the RL tuning method produces a higher gain margin and a lower LQG cost.
Local Functions
Function to create the water tank RL environment.
function [env,obsInfo,actInfo] = localCreatePIDEnv(mdl)
% Define the observation specification obsInfo
% and the action specification actInfo.
obsInfo = rlNumericSpec([2 1]);
obsInfo.Name = 'observations';
obsInfo.Description = 'integrated error and error';
actInfo = rlNumericSpec([1 1]);
actInfo.Name = 'PID output';
% Build the environment interface object.
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);
% Set a custom reset function that randomizes
% the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in,mdl);
end
Function to randomize the reference signal and initial height of the water tank at the beginning of each episode.
function in = localResetFcn(in,mdl)
% Randomize the reference signal.
blk = sprintf([mdl '/Desired \nWater Level']);
hRef = 10 + 4*(rand-0.5);
in = setBlockParameter(in,blk,'Value',num2str(hRef));
% Set the initial height.
hInit = 0;
blk = [mdl '/Water-Tank System/H'];
in = setBlockParameter(in,blk,'InitialCondition',num2str(hInit));
end
Function to linearize and compute the stability margins of the SISO water tank system.
function margin = localStabilityAnalysis(mdl)
io(1) = linio([mdl '/Sum1'],1,'input');
io(2) = linio([mdl '/Water-Tank System'],1,'openoutput');
op = operpoint(mdl);
op.Time = 5;
linsys = linearize(mdl,io,op);
margin = allmargin(linsys);
end
Function to create the critic network.
function criticNet = localCreateCriticNetwork(numObs,numAct)
statePath = [
    featureInputLayer(numObs,Name='stateInLyr')
    fullyConnectedLayer(32,Name='fc1')];
actionPath = [
    featureInputLayer(numAct,Name='actionInLyr')
    fullyConnectedLayer(32,Name='fc2')];
commonPath = [
    concatenationLayer(1,2,Name='concat')
    reluLayer
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(1,Name='QValOutLyr')];

criticNet = layerGraph();
criticNet = addLayers(criticNet,statePath);
criticNet = addLayers(criticNet,actionPath);
criticNet = addLayers(criticNet,commonPath);

criticNet = connectLayers(criticNet,'fc1','concat/in1');
criticNet = connectLayers(criticNet,'fc2','concat/in2');
end
See Also
Functions
train | sim | rlSimulinkEnv