
Create Simulink Environment and Train Agent

This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Compare DDPG Agent to LQR Controller.

Water Tank Model

The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model (Simulink Control Design).

Modify the original model by making the following changes:

  1. Delete the PID Controller.

  2. Insert the RL Agent block.

  3. Connect the observation vector [∫e dt  e  h]ᵀ, where h is the height of the water in the tank, e = r - h, and r is the reference height.

  4. Set up the reward reward = 10(|e| < 0.1) - 1(|e| ≥ 0.1) - 100(h ≤ 0 || h ≥ 20). (A MATLAB sketch of this computation follows after this list.)

  5. Configure the termination signal such that the simulation stops if h ≤ 0 or h ≥ 20.

The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Custom Simulink Environments.
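For reference, the reward in step 4 can be written as a small MATLAB function (a sketch only; the helper name computeReward is hypothetical, since in the model the reward is built from Simulink blocks):

function r = computeReward(e,h)
% Reward logic from step 4: +10 when the error is small,
% -1 otherwise, and a -100 penalty when the water level
% leaves the valid range.
r = 10*(abs(e) < 0.1) - 1*(abs(e) >= 0.1) - 100*(h <= 0 || h >= 20);
end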

open_system("rlwatertank")

Create the Environment

Creating an environment model includes defining the observation and action signals that the agent uses to interact with the environment.

Define the observation specification obsInfo and the action specification actInfo.

% Observation info
obsInfo = rlNumericSpec([3 1],...
    LowerLimit=[-inf -inf 0  ]',...
    UpperLimit=[ inf  inf inf]');

% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";

% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";

Create the environment object.

env = rlSimulinkEnv("rlwatertank","rlwatertank/RL Agent",...
    obsInfo,actInfo);

Set a custom reset function that randomizes the reference values for the model.

env.ResetFcn = @(in)localResetFcn(in);

Specify the simulation time Tf and the agent sample time Ts in seconds.

Ts = 1.0;
Tf = 200;

Fix the random generator seed for reproducibility.

rng(0)

Create the Critic

DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward obtained when the agent takes the given action starting from the state corresponding to the current observation, and follows the policy thereafter).

To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).

Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimensions of the observation and action spaces from the obsInfo and actInfo specifications.

% Observation path
obsPath = [
    featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(25,Name="obsPathOutLyr")
    ];

% Action path
actPath = [
    featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
    fullyConnectedLayer(25,Name="actPathOutLyr")
    ];

% Common path
commonPath = [
    additionLayer(2,Name="add")
    reluLayer
    fullyConnectedLayer(1,Name="QValue")
    ];

criticNetwork = layerGraph();
criticNetwork = addLayers(criticNetwork,obsPath);
criticNetwork = addLayers(criticNetwork,actPath);
criticNetwork = addLayers(criticNetwork,commonPath);

criticNetwork = connectLayers(criticNetwork, ...
    "obsPathOutLyr","add/in1");
criticNetwork = connectLayers(criticNetwork, ...
    "actPathOutLyr","add/in2");

View the critic network configuration.

figure
plot(criticNetwork)

The figure shows a plot of the critic network layer graph.

Convert the network to a dlnetwork object and summarize its properties.

criticNetwork = dlnetwork(criticNetwork);
summary(criticNetwork)
   Initialized: true
   Number of learnables: 1.5k
   Inputs:
      1   'obsInLyr'   3 features
      2   'actInLyr'   1 features

Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.

critic = rlQValueFunction(criticNetwork, ...
    obsInfo,actInfo, ...
    ObservationInputNames="obsInLyr", ...
    ActionInputNames="actInLyr");

For more information on Q-value function objects, see rlQValueFunction.

Check the critic with a random input observation and action.

getValue(critic, ...
    {rand(obsInfo.Dimension)}, ...
    {rand(actInfo.Dimension)})
ans = single
    -0.1631

For more information on creating critics, see Create Policies and Value Functions.

Create the Actor

DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.

A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.

To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).

Define the network as an array of layer objects.

actorNetwork = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(3)
    tanhLayer
    fullyConnectedLayer(actInfo.Dimension(1))
    ];

Convert the network to a dlnetwork object and summarize its properties.

actorNetwork = dlnetwork(actorNetwork);
summary(actorNetwork)
   Initialized: true
   Number of learnables: 16
   Inputs:
      1   'input'   3 features

Create the actor approximator object using the specified deep neural network and the environment specification objects.

actor = rlContinuousDeterministicActor(actorNetwork,obsInfo,actInfo);

For more information, see rlContinuousDeterministicActor.

Check the actor with a random input observation.

getAction(actor,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-0.3408]}

For more information on creating actors, see Create Policies and Value Functions.

Create the DDPG Agent

Create the DDPG agent using the specified actor and critic approximator objects.

agent = rlDDPGAgent(actor,critic);

For more information, see rlDDPGAgent.

Specify options for the agent, the actor, and the critic using dot notation.

agent.AgentOptions.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;

Alternatively, you can specify the agent options using an rlDDPGAgentOptions object, as sketched below.
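The following sketch configures an rlDDPGAgentOptions object with the same values as the dot-notation settings above and passes it to rlDDPGAgent when creating the agent:

% Equivalent configuration using an options object (sketch)
agentOpts = rlDDPGAgentOptions( ...
    SampleTime=Ts, ...
    TargetSmoothFactor=1e-3, ...
    DiscountFactor=1.0, ...
    MiniBatchSize=64, ...
    ExperienceBufferLength=1e6);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;
agentOpts.CriticOptimizerOptions = ...
    rlOptimizerOptions(LearnRate=1e-3,GradientThreshold=1);
agentOpts.ActorOptimizerOptions = ...
    rlOptimizerOptions(LearnRate=1e-4,GradientThreshold=1);
agent = rlDDPGAgent(actor,critic,agentOpts);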

Check the agent with a random input observation.

getAction(agent,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-0.7926]}

Train Agent

To train the agent, first specify the training options. For this example, use the following options:

  • Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is, 200) time steps.

  • Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command-line display (set the Verbose option to false).

  • Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At that point, the agent can control the level of water in the tank.

For more information, see rlTrainingOptions.

trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/Ts), ...
    ScoreAveragingWindowLength=20, ...
    Verbose=false, ...
    Plots="training-progress",...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=800);

Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load("WaterTankDDPG.mat","agent")
end
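If you train the agent yourself, you can save the trained agent for later reuse, for example (the file name below is only an illustration):

% Save the trained agent to a MAT-file (sketch)
save("myWaterTankDDPG.mat","agent")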

Validate Trained Agent

Validate the learned agent against the model by simulation. Because the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.

rng(1)

Simulate the agent within the environment, and return the experiences as output.

simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
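To get a quick sense of the result, you can, for example, compute the total reward accumulated during the validation episode from the logged experiences (a sketch based on the default sim output structure):

% Cumulative reward over the validation episode (sketch)
totalReward = sum(experiences.Reward.Data)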

Local Reset Function

function in = localResetFcn(in)

% Randomize the reference signal
blk = sprintf("rlwatertank/Desired \nWater Level");
h = 3*randn + 10;
while h <= 0 || h >= 20
    h = 3*randn + 10;
end
in = setBlockParameter(in,blk,Value=num2str(h));

% Randomize the initial height
h = 3*randn + 10;
while h <= 0 || h >= 20
    h = 3*randn + 10;
end
blk = "rlwatertank/Water-Tank System/H";
in = setBlockParameter(in,blk,InitialCondition=num2str(h));

end
