Train Reinforcement Learning Policy Using Custom Training Loop
This example shows how to define a custom training loop for a reinforcement learning policy. You can use this workflow to train reinforcement learning policies with your own custom training algorithms rather than using one of the built-in agents from the Reinforcement Learning Toolbox™ software.
Using this workflow, you can train policies that use any of the following policy and value function approximator objects.
- rlValueFunction — State value function approximator
- rlQValueFunction — State-action value function approximator with scalar output
- rlVectorQValueFunction — State-action value function approximator with vector output
- rlContinuousDeterministicActor — Continuous deterministic actor
- rlDiscreteCategoricalActor — Discrete stochastic actor
- rlContinuousGaussianActor — Continuous Gaussian actor (stochastic)
In this example, a stochastic actor policy with a discrete action space is trained using the REINFORCE algorithm (with no baseline). For more information on the REINFORCE algorithm, see the policy gradient (PG) agent documentation.
Fix the random generator seed for reproducibility.
rng(0)
For more information on the functions you can use for custom training, see Functions for Custom Training.
Environment
For this example, a reinforcement learning policy is trained in a discrete cart-pole environment. The objective in this environment is to balance the pole by applying forces (actions) on the cart. Create the environment using the rlPredefinedEnv function.
env = rlPredefinedEnv("CartPole-Discrete");
Extract the observation and action specifications from the environment.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Obtain the dimension of the observation space (numObs) and of the action space (numAct).
numObs = obsInfo.Dimension(1);
numAct = actInfo.Dimension(1);
For more information on this environment, see Load Predefined Control System Environments.
Policy
The reinforcement learning policy in this example is a discrete-action stochastic policy. It is modeled by a deep neural network that contains fullyConnectedLayer, reluLayer, and softmaxLayer layers. This network outputs probabilities for each discrete action given the current observations. The softmaxLayer ensures that the actor outputs probability values in the range [0,1] and that all probabilities sum to 1.
Create the deep neural network for the actor.
actorNetwork = [
    featureInputLayer(numObs)
    fullyConnectedLayer(24)
    reluLayer
    fullyConnectedLayer(24)
    reluLayer
    fullyConnectedLayer(2)
    softmaxLayer
    ];
Convert the layer array to a dlnetwork object.
actorNetwork = dlnetwork(actorNetwork);
Create the actor using an rlDiscreteCategoricalActor object.
actor = rlDiscreteCategoricalActor(actorNetwork,obsInfo,actInfo);
Accelerate the gradient computation of the actor.
actor = accelerate(actor,true);
Evaluate the policy with a random observation as input.
policyEvalOutCell = evaluate(actor,{rand(obsInfo.Dimension)});
policyEvalOut = policyEvalOutCell{1}
policyEvalOut = 2x1 single column vector
    0.4682
    0.5318
Create the optimizer using the rlOptimizer and rlOptimizerOptions functions.
actorOpts = rlOptimizerOptions(LearnRate=1e-2);
actorOptimizer = rlOptimizer(actorOpts);
Training Setup
Configure the training to use the following options:
- Set up the training to last at most 5000 episodes, with each episode lasting at most 250 steps.
- To calculate the discounted reward, choose a discount factor of 0.995.
- Terminate the training after the maximum number of episodes is reached or when the average reward across 100 episodes reaches the value of 220.
numEpisodes = 5000;
maxStepsPerEpisode = 250;
discountFactor = 0.995;
avgWindowSize = 100;
trainingTerminationValue = 220;
Create a vector to store the cumulative reward for each training episode.
episodeCumulativeRewardVector = [];
Create a figure for training visualization using the hBuildFigure helper function.
[trainingPlot,lineReward,lineAveReward] = hBuildFigure;
Custom Training Loop
The algorithm for the custom training loop is as follows. For each episode:
- Reset the environment.
- Create buffers for storing experience information: observations, actions, and rewards.
- Generate experiences until a terminal condition occurs. To do so, evaluate the policy to get actions, apply those actions to the environment, and obtain the resulting observations and rewards. Store the actions, observations, and rewards in buffers.
- Collect the training data as a batch of experiences.
- Compute the episode Monte Carlo return, which is the discounted future reward (see the formula after this list).
- Compute the gradient of the loss function with respect to the policy parameters.
- Update the policy using the computed gradients.
- Update the training visualization.
- Terminate training if the policy is sufficiently trained.
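For reference, the Monte Carlo return accumulated by the nested loop in the code below is the discounted sum of rewards from each step t to the end of the episode:

G_t = \sum_{k=t}^{T} \gamma^{k-t} r_k

where \gamma is the discount factor (discountFactor), r_k is the reward obtained at step k, and T is the number of steps in the episode.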
% Enable the training visualization plot.
set(trainingPlot,Visible="on");

% Train the policy for the maximum number of episodes
% or until the average reward indicates that the policy
% is sufficiently trained.
for episodeCt = 1:numEpisodes

    % 1. Reset the environment at the start of the episode.
    obs = reset(env);

    episodeReward = zeros(maxStepsPerEpisode,1);

    % 2. Create buffers to store experiences.
    %    The dimensions for each buffer must be as follows.
    %
    %    For the observation buffer:
    %        numberOfObservations x numberOfObservationChannels x batchSize
    %
    %    For the action buffer:
    %        numberOfActions x numberOfActionChannels x batchSize
    %
    %    For the reward buffer:
    %        1 x batchSize
    observationBuffer = zeros(numObs,1,maxStepsPerEpisode);
    actionBuffer = zeros(numAct,1,maxStepsPerEpisode);
    rewardBuffer = zeros(1,maxStepsPerEpisode);

    % 3. Generate experiences for the maximum number of steps per
    %    episode or until a terminal condition is reached.
    for stepCt = 1:maxStepsPerEpisode

        % Compute an action using the policy
        % based on the current observation.
        action = getAction(actor,{obs});

        % Apply the action to the environment
        % and obtain the resulting observation and reward.
        [nextObs,reward,isdone] = step(env,action{1});

        % Store the action, observation,
        % and reward experiences in their buffers.
        observationBuffer(:,:,stepCt) = obs;
        actionBuffer(:,:,stepCt) = action{1};
        rewardBuffer(:,stepCt) = reward;
        episodeReward(stepCt) = reward;
        obs = nextObs;

        % Stop if a terminal condition is reached.
        if isdone
            break;
        end

    end

    % 4. Create training data. Training is performed using batch data.
    %    The batch size cannot exceed the length of the episode.
    batchSize = min(stepCt,maxStepsPerEpisode);
    observationBatch = observationBuffer(:,:,1:batchSize);
    actionBatch = actionBuffer(:,:,1:batchSize);
    rewardBatch = rewardBuffer(:,1:batchSize);

    % Compute the discounted future reward.
    discountedReturn = zeros(1,batchSize);
    for t = 1:batchSize
        G = 0;
        for k = t:batchSize
            G = G + discountFactor^(k-t) * rewardBatch(k);
        end
        discountedReturn(t) = G;
    end

    % 5. Organize data to pass to the loss function.
    lossData.batchSize = batchSize;
    lossData.actInfo = actInfo;
    lossData.actionBatch = actionBatch;
    lossData.discountedReturn = discountedReturn;

    % 6. Compute the gradient of the loss
    %    with respect to the policy parameters.
    actorGradient = gradient(actor,@actorLossFunction, ...
        {observationBatch},lossData);

    % 7. Update the actor network using the computed gradients.
    %    For more information, at the command line, type:
    %    help rl.optimizer.AbstractOptimizer/update
    [actor,actorOptimizer] = update( ...
        actorOptimizer, ...
        actor, ...
        actorGradient);

    % 8. Update the training visualization.
    episodeCumulativeReward = sum(episodeReward);
    episodeCumulativeRewardVector = cat(2, ...
        episodeCumulativeRewardVector,episodeCumulativeReward);
    movingAvgReward = movmean(episodeCumulativeRewardVector, ...
        avgWindowSize,2);
    addpoints(lineReward,episodeCt,episodeCumulativeReward);
    addpoints(lineAveReward,episodeCt,movingAvgReward(end));
    drawnow;

    % 9. Terminate training if the network is sufficiently trained.
    if max(movingAvgReward) > trainingTerminationValue
        break
    end

end
Simulation
After training, simulate the trained policy.
Before simulation, reset the environment.
obs = reset(env);
Enable the environment visualization, which is updated each time the environment step function is called.
plot(env)
For each simulation step, perform the following actions.
- Get the action by sampling from the policy using the getAction function.
- Step the environment using the obtained action value.
- Terminate if a terminal condition is reached.
for stepCt = 1:maxStepsPerEpisode

    % Select an action according to the trained policy.
    action = getAction(actor,{obs});

    % Step the environment.
    [nextObs,reward,isdone] = step(env,action{1});

    % Check for a terminal condition.
    if isdone
        break
    end

    obs = nextObs;

end
Functions for Custom Training
To obtain actions and value functions for given observations from Reinforcement Learning Toolbox policy and value function approximators, you can use the following functions.
- getValue — Obtain the estimated state value or state-action value function.
- getAction — Obtain the action from an actor based on the current observation.
- getMaxQValue — Obtain the estimated maximum state-action value function for a discrete Q-value approximator.
If your policy or value function approximator is a recurrent neural network, that is, a neural network with at least one layer that has hidden state information, the preceding functions can return the current network state. You can use the following function syntaxes to get and set the state of your approximator.
- state = getState(critic) — Obtain the state of approximator critic.
- newCritic = setState(oldCritic,state) — Set the state of approximator oldCritic, and return the result in newCritic.
- newCritic = resetState(oldCritic) — Reset all state values of oldCritic to zero and return the result in newCritic.
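As a minimal sketch, assuming a hypothetical recurrent actor built for the same cart-pole specifications (recurrentNet, recurrentActor, and the layer sizes are illustrative and not part of this example):
% Recurrent actor containing an lstmLayer (hypothetical).
recurrentNet = dlnetwork([
    sequenceInputLayer(numObs)
    lstmLayer(8)
    fullyConnectedLayer(numel(actInfo.Elements))
    softmaxLayer
    ]);
recurrentActor = rlDiscreteCategoricalActor(recurrentNet,obsInfo,actInfo);

% Get the current hidden state, reset it to zero, and restore it.
state = getState(recurrentActor);
recurrentActor = resetState(recurrentActor);
recurrentActor = setState(recurrentActor,state);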
You can get and set the learnable parameters of your approximator using the getLearnableParameters and setLearnableParameters functions, respectively.
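For example, using the actor from this example:
% Get the learnable parameters (returned as a cell array of values).
params = getLearnableParameters(actor);

% Write (possibly modified) parameters back into the approximator.
actor = setLearnableParameters(actor,params);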
In addition to these functions, you can use the gradient, optimize, and syncParameters functions to set parameters and compute gradients for your policy and value function approximators.
gradient
The gradient function computes the gradients of the approximator loss function. You can compute several different gradients. For example, to compute the gradient of the sum of the approximator outputs with respect to its inputs, use the following syntax.
grad = gradient(actor,"output-input",inputData)
Here:
- actor is a policy or value function approximator object.
- inputData contains values for the input channels to the approximator (for example, an observation).
- grad contains the computed gradients.
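For example, the following line (illustrative only) computes this gradient for the actor in this example, using a random observation as input.
grad = gradient(actor,"output-input",{rand(obsInfo.Dimension)});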
For more information, see the gradient function reference page.
syncParameters
The syncParameters function updates the learnable parameters of one policy or value function approximator based on those of another approximator. This function is useful for updating a target actor or critic approximator, as is done for DDPG agents. To synchronize parameter values between two approximators, use the following syntax.
newTargetApproximator = syncParameters( ...
    oldTargetApproximator, ...
    sourceApproximator, ...
    smoothFactor)
Here:
- oldTargetApproximator is a policy or value function approximator object with parameters θ_old.
- sourceApproximator is a policy or value function approximator object with the same structure as oldTargetApproximator, but with parameters θ_source.
- smoothFactor is a smoothing factor τ (0 < τ ≤ 1) for the update.
- newTargetApproximator has the same structure as oldTargetApproximator, but its parameters are θ_new = τ·θ_source + (1 − τ)·θ_old.
For more information, at the MATLAB command line, type help rl.function.AbstractFunction.syncParameters.
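As a minimal usage sketch, the following lines (illustrative only; targetActor is not part of this example) create a copy of the actor and softly update its parameters toward the actor with a smoothing factor of 0.05.
% Hypothetical target approximator with the same structure as the actor.
targetActor = actor;

% Soft update: targetActor parameters move toward the actor parameters.
targetActor = syncParameters(targetActor,actor,0.05);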
Loss Function
The loss function in the REINFORCE algorithm is the product between the discounted reward and the logarithm of the probability of the taken action (coming from the policy evaluation for a given observation), summed across all time steps. Because the optimizer minimizes the loss, the negative of this sum is used. The discounted reward calculated in the custom training loop must be resized so that it can be multiplied elementwise with the logarithm of the action probabilities.
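In symbols (restating the computation performed by actorLossFunction below, with π(a_t | o_t) denoting the probability that the policy assigns to the action taken at step t):

loss = -\sum_{t=1}^{T} G_t \log \pi(a_t \mid o_t)

where G_t is the discounted return at step t and T is the number of steps in the episode.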
The first input parameter of the loss function must be a cell array like the one returned from the evaluation of a function approximator object. For more information, see the description of outData in evaluate. The second, optional, input argument contains additional data that might be needed by the loss calculation function. For more information, see gradient.
function loss = actorLossFunction(actProbCell,lossFcnStruct)

    % Extract the matrix resulting from the policy evaluation.
    actProb = actProbCell{1};

    % Create the action indication matrix.
    batchSize = lossFcnStruct.batchSize;
    Z = repmat(lossFcnStruct.actInfo.Elements',1,batchSize);
    actionIndicationMatrix = (lossFcnStruct.actionBatch(:,:)==Z);

    % Resize the discounted return to the size of actProb.
    G = actionIndicationMatrix .* lossFcnStruct.discountedReturn;
    G = reshape(G,size(actProb));

    % Round any action probability values less than eps to eps.
    actProb(actProb < eps) = eps;

    % Compute the loss.
    loss = -sum(G .* log(actProb),"all");

end
Helper Function
The following helper function creates a figure for training visualization.
function [trainingPlt, lineRewd, lineAvgRwd] = hBuildFigure()
    plotRatio = 16/9;
    trainingPlt = figure( ...
        Visible="off", ...
        HandleVisibility="off", ...
        NumberTitle="off", ...
        Name="Cart Pole Custom Training");
    trainingPlt.Position(3) = ...
        plotRatio * trainingPlt.Position(4);

    ax = gca(trainingPlt);

    lineRewd = animatedline(ax);
    lineAvgRwd = animatedline(ax,Color="r",LineWidth=3);
    xlabel(ax,"Episode");
    ylabel(ax,"Reward");
    legend(ax,"Cumulative Reward","Average Reward", ...
        Location="northwest")
    title(ax,"Training Progress");
end
See Also

Functions

accelerate | evaluate | rlOptimizer
Related Examples

- Custom Training Loop with Simulink Action Noise
- Create and Train Custom LQR Agent
- Create and Train Custom PG Agent