
rlacagent

actor-critic (ac) reinforcement learning agent

since r2019a

description

actor-critic (ac) agents implement actor-critic algorithms such as a2c and a3c, which are model-free, online, on-policy reinforcement learning methods. the actor-critic agent optimizes the policy (actor) directly and uses a critic to estimate the expected discounted cumulative long-term reward. the action space can be either discrete or continuous. for continuous action spaces, this agent does not enforce constraints set in the action specification; therefore, if you need to enforce action constraints, you must do so within the environment.

for more information, see actor-critic (ac) agents. for more information on the different types of reinforcement learning agents, see reinforcement learning agents.

creation

description

create agent from observation and action specifications

example

agent = rlacagent(observationinfo,actioninfo) creates an actor-critic agent for an environment with the given observation and action specifications, using default initialization options. the actor and critic in the agent use default deep neural networks built from the observation specification observationinfo and the action specification actioninfo. the observationinfo and actioninfo properties of agent are set to the observationinfo and actioninfo input arguments, respectively.

example

agent = rlacagent(observationinfo,actioninfo,initopts) creates an actor-critic agent for an environment with the given observation and action specifications. the agent uses default networks in which each hidden fully connected layer has the number of units specified in the initopts object. for more information on the initialization options, see rlagentinitializationoptions.
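
for example, a minimal sketch (assuming obsinfo and actinfo are specification objects already obtained from the environment) that requests 64 units in each hidden fully connected layer:

initopts = rlagentinitializationoptions(numhiddenunit=64);
agent = rlacagent(obsinfo,actinfo,initopts);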

create agent from actor and critic

example

agent = rlacagent(actor,critic) creates an actor-critic agent with the specified actor and critic, using the default options for the agent.

specify agent options

example

agent = rlacagent(___,agentoptions) creates an actor-critic agent and sets the agentoptions property to the agentoptions input argument. use this syntax after any of the input arguments in the previous syntaxes.
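
for example, a minimal sketch (assuming obsinfo and actinfo are specification objects already obtained from the environment) that passes an options object at creation time:

agentopts = rlacagentoptions(numstepstolookahead=32);
agent = rlacagent(obsinfo,actinfo,agentopts);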

input arguments

agent initialization options, specified as an rlagentinitializationoptions object.

actor that implements the policy, specified as an rldiscretecategoricalactor or rlcontinuousgaussianactor function approximator object. for more information on creating actor approximators, see create policies and value functions.

critic that estimates the discounted long-term reward, specified as an rlvaluefunction object. for more information on creating critic approximators, see create policies and value functions.

properties

observation specifications, specified as an rlfinitesetspec or rlnumericspec object or an array containing a mix of such objects. each element in the array defines the properties of an environment observation channel, such as its dimensions, data type, and name.

if you create the agent by specifying an actor and critic, the value of observationinfo matches the value specified in the actor and critic objects.

you can extract observationinfo from an existing environment or agent using getobservationinfo. you can also construct the specifications manually using rlfinitesetspec or rlnumericspec.

action specifications, specified either as an rlfinitesetspec (for discrete action spaces) or rlnumericspec (for continuous action spaces) object. this object defines the properties of the environment action channel, such as its dimensions, data type, and name.

note

only one action channel is allowed.

if you create the agent by specifying an actor and critic, the value of actioninfo matches the value specified in the actor and critic objects.

you can extract actioninfo from an existing environment or agent using getactioninfo. you can also construct the specification manually using rlfinitesetspec or rlnumericspec.

agent options, specified as an rlacagentoptions object.

option to use exploration policy when selecting actions during simulation or after deployment, specified as one of the following logical values.

  • true — use the base agent exploration policy when selecting actions in sim and generatepolicyfunction. specifically, in this case the agent uses the policy with the usemaxlikelihoodaction property set to false. since the agent selects its actions by sampling its probability distribution, the policy is stochastic and the agent explores its action and observation spaces.

  • false — force the agent to use the base agent greedy policy (the action with maximum likelihood) when selecting actions in sim and generatepolicyfunction. specifically, in this case the agent uses the policy with the usemaxlikelihoodaction property set to true. since the agent selects its actions greedily, the policy behaves deterministically and the agent does not explore its action and observation spaces.

note

this option affects only simulation and deployment; it does not affect training. when you train an agent using train, the agent always uses its exploration policy independently of the value of this property.
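
for example, a minimal sketch (assuming agent is an existing rlacagent object) that switches between greedy and exploratory action selection for simulation:

agent.useexplorationpolicy = false;  % sim selects maximum-likelihood (greedy) actions
agent.useexplorationpolicy = true;   % sim samples actions from the policy distribution (default)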

sample time of agent, specified as a positive scalar or as -1. setting this parameter to -1 allows for event-based simulations.

within a simulink® environment, the rl agent block in which the agent is specified executes every sampletime seconds of simulation time. if sampletime is -1, the block inherits the sample time from its parent subsystem.

within a matlab® environment, the agent is executed every time the environment advances. in this case, sampletime is the time interval between consecutive elements in the output experience returned by sim or train. if sampletime is -1, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.

example: sampletime=-1
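
for example, a minimal sketch (assuming agent is an existing rlacagent object; the value 0.1 is illustrative) that sets the sample time through the agent options using dot notation:

agent.agentoptions.sampletime = 0.1;  % execute the agent every 0.1 seconds of simulation time
agent.agentoptions.sampletime = -1;   % event-based execution (in simulink, inherit the sample time)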

object functions

train - train reinforcement learning agents within a specified environment
sim - simulate trained reinforcement learning agents within a specified environment
getaction - obtain action from agent, actor, or policy object given environment observations
getactor - extract actor from reinforcement learning agent
setactor - set actor of reinforcement learning agent
getcritic - extract critic from reinforcement learning agent
setcritic - set critic of reinforcement learning agent
generatepolicyfunction - generate matlab function that evaluates policy of an agent or policy object

examples

create an environment with a discrete action space, and obtain its observation and action specifications. for this example, load the environment used in the example create dqn agent using deep network designer and train using image observations. this environment has two observations: a 50-by-50 grayscale image and a scalar (the angular velocity of the pendulum). the action is a scalar with five possible elements (a torque of -2, -1, 0, 1, or 2 nm applied to a swinging pole).

env = rlpredefinedenv("simplependulumwithimage-discrete");

obtain observation and action specifications.

obsinfo = getobservationinfo(env);
actinfo = getactioninfo(env);

the agent creation function initializes the actor and critic networks randomly. ensure reproducibility by fixing the seed of the random generator.

rng(0)

create an actor-critic agent from the environment observation and action specifications.

agent = rlacagent(obsinfo,actinfo);

to check your agent, use getaction to return the action from random observations.

getaction(agent,{rand(obsinfo(1).dimension),rand(obsinfo(2).dimension)})
ans = 1x1 cell array
    {[-2]}

you can now test and train the agent within the environment. you can also use getactor and getcritic to extract the actor and critic, respectively, and getmodel to extract the approximator model (by default a deep neural network) from the actor or critic.
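
for example, a minimal sketch of that extraction workflow for this agent:

actor = getactor(agent);
critic = getcritic(agent);
actornet = getmodel(actor);     % deep neural network used by the actor
criticnet = getmodel(critic);   % deep neural network used by the critic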

create an environment with a continuous action space and obtain its observation and action specifications. for this example, load the environment used in the example train ddpg agent to swing up and balance pendulum with image observation. this environment has two observations: a 50-by-50 grayscale image and a scalar (the angular velocity of the pendulum). the action is a scalar representing a torque ranging continuously from -2 to 2 nm.

% load predefined environment
env = rlpredefinedenv("simplependulumwithimage-continuous");
% obtain observation and action specifications
obsinfo = getobservationinfo(env);
actinfo = getactioninfo(env);

create an agent initialization option object, specifying that each hidden fully connected layer in the network must have 128 neurons (instead of the default number, 256).

initopts = rlagentinitializationoptions(numhiddenunit=128);

the agent creation function initializes the actor and critic networks randomly. you can ensure reproducibility by fixing the seed of the random generator.

rng(0)

create an actor-critic agent from the environment observation and action specifications.

agent = rlacagent(obsinfo,actinfo,initopts);

extract the deep neural networks from both the agent actor and critic.

actornet = getmodel(getactor(agent));
criticnet = getmodel(getcritic(agent));

to verify that each hidden fully connected layer has 128 neurons, you can display the layers on the matlab® command window,

criticnet.layers

or visualize the structure interactively using analyzenetwork.

analyzenetwork(criticnet)

plot actor and critic networks

plot(layergraph(actornet))

figure contains an axes object. the axes object contains an object of type graphplot.

plot(layergraph(criticnet))

figure contains an axes object. the axes object contains an object of type graphplot.

to check your agent, use getaction to return the action from a random observation.

getaction(agent,{rand(obsinfo(1).dimension),rand(obsinfo(2).dimension)})
ans = 1x1 cell array
    {[0.9228]}

you can now test and train the agent within the environment.

create an environment with a discrete action space and obtain its observation and action specifications. for this example, load the environment used in the example train dqn agent to balance cart-pole system. this environment has a four-dimensional observation vector (cart position and velocity, pole angle, and pole angle derivative), and a scalar action with two possible elements (a force of either -10 or 10 n applied on the cart).

env = rlpredefinedenv("cartpole-discrete");

obtain observation and action specifications.

obsinfo = getobservationinfo(env)
obsinfo = 
  rlnumericspec with properties:
     lowerlimit: -inf
     upperlimit: inf
           name: "cartpole states"
    description: "x, dx, theta, dtheta"
      dimension: [4 1]
       datatype: "double"
actinfo = getactioninfo(env)
actinfo = 
  rlfinitesetspec with properties:
       elements: [-10 10]
           name: "cartpole action"
    description: [0x0 string]
      dimension: [1 1]
       datatype: "double"

the agent creation function initializes the actor and critic networks randomly. you can ensure reproducibility by fixing the seed of the random generator.

rng(0)

actor-critic agents use a parametrized value function approximator to estimate the value of the policy. a value-function critic takes the current observation as input and returns a single scalar as output (the estimated discounted cumulative long-term reward for following the policy from the state corresponding to the current observation).

to model the parametrized value function within the critic, use a neural network with one input layer (which receives the content of the observation channel, as specified by obsinfo) and one output layer (which returns the scalar value). note that prod(obsinfo.dimension) returns the total number of dimensions of the observation space regardless of whether the observation space is a column vector, row vector, or matrix.

define the network as an array of layer objects, and get the dimension of the observation space from the environment specification objects.

criticnet = [
    featureinputlayer(prod(obsinfo.dimension))
    fullyconnectedlayer(50)
    relulayer
    fullyconnectedlayer(1)
    ];

convert the network to a dlnetwork object, and display the number of weights.

criticnet = dlnetwork(criticnet);
summary(criticnet)
   initialized: true
   number of learnables: 301
   inputs:
      1   'input'   4 features

create the critic approximator object using criticnet and the observation specification. for more information on value function approximators, see rlvaluefunction.

critic = rlvaluefunction(criticnet,obsinfo);

check your critic with a random observation input.

getvalue(critic,{rand(obsinfo.dimension)})
ans = single
    -0.1411

actor-critic agents use a parametrized stochastic policy, which for discrete action spaces is implemented by a discrete categorical actor. this actor takes an observation as input and returns as output a random action sampled (among the finite number of possible actions) from a categorical probability distribution.

to model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the observation channel, as specified by obsinfo) and one output layer. the output layer must return a vector of probabilities for each possible action, as specified by actinfo. note that numel(actinfo.elements) returns the number of possible actions in the discrete action space.

define the network as an array of layer objects.

actornet = [
    featureinputlayer(prod(obsinfo.dimension))
    fullyconnectedlayer(16)
    relulayer
    fullyconnectedlayer(16)
    relulayer
    fullyconnectedlayer(numel(actinfo.elements))
    ];

convert the network to a dlnetwork object, and display the number of weights.

actornet = dlnetwork(actornet);
summary(actornet)
   initialized: true
   number of learnables: 386
   inputs:
      1   'input'   4 features

create the actor using actornet and the observation and action specifications. for more information on discrete categorical actors, see rldiscretecategoricalactor.

actor = rldiscretecategoricalactor(actornet,obsinfo,actinfo);

check your actor with a random observation input.

getaction(actor,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[10]}

create the ac agent using the actor and the critic.

agent = rlacagent(actor,critic)
agent = 
  rlacagent with properties:
            agentoptions: [1x1 rl.option.rlacagentoptions]
    useexplorationpolicy: 1
         observationinfo: [1x1 rl.util.rlnumericspec]
              actioninfo: [1x1 rl.util.rlfinitesetspec]
              sampletime: 1

specify some options for the agent, including training options for the actor and critic.

agent.agentoptions.numstepstolookahead=32;
agent.agentoptions.discountfactor=0.99;
agent.agentoptions.criticoptimizeroptions.learnrate=8e-3;
agent.agentoptions.criticoptimizeroptions.gradientthreshold=1;
agent.agentoptions.actoroptimizeroptions.learnrate=8e-3;
agent.agentoptions.actoroptimizeroptions.gradientthreshold=1;

check your agent with a random observation.

getaction(agent,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[-10]}

you can now test and train the agent within the environment.
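
for example, a minimal training sketch; the episode counts and stopping criterion used here are illustrative, not recommended values.

trainopts = rltrainingoptions( ...
    maxepisodes=1000, ...
    maxstepsperepisode=500, ...
    stoptrainingcriteria="averagereward", ...
    stoptrainingvalue=480);
trainstats = train(agent,env,trainopts);
% after training, simulate the trained agent in the environment
experience = sim(env,agent,rlsimulationoptions(maxsteps=500));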

create an environment with a continuous action space, and obtain its observation and action specifications. for this example, load the double integrator continuous action space environment used in the example compare ddpg agent to lqr controller.

env = rlpredefinedenv("doubleintegrator-continuous");

obtain observation and action specifications.

obsinfo = getobservationinfo(env)
obsinfo = 
  rlnumericspec with properties:
     lowerlimit: -inf
     upperlimit: inf
           name: "states"
    description: "x, dx"
      dimension: [2 1]
       datatype: "double"
actinfo = getactioninfo(env)
actinfo = 
  rlnumericspec with properties:
     lowerlimit: -inf
     upperlimit: inf
           name: "force"
    description: [0x0 string]
      dimension: [1 1]
       datatype: "double"

in this example, the action is a scalar value representing a force ranging continuously from -2 to 2 newtons. to make sure that the output from the agent falls within this range, you perform an appropriate scaling operation. store these limits in the action specification so you can easily access them later.

% make sure action space upper and lower limits are finite
actinfo.lowerlimit=-2;
actinfo.upperlimit=2;

the actor and critic networks are initialized randomly. you can ensure reproducibility by fixing the seed of the random generator.

rng(0)

actor-critic agents use a parametrized value function approximator to estimate the value of the policy. a value-function critic takes the current observation as input and returns a single scalar as output (the estimated discounted cumulative long-term reward for following the policy from the state corresponding to the current observation).

to model the parametrized value function within the critic, use a neural network with one input layer (which receives the content of the observation channel, as specified by obsinfo) and one output layer (which returns the scalar value). note that prod(obsinfo.dimension) returns the total number of dimensions of the observation space regardless of whether the observation space is a column vector, row vector, or matrix.

define the network as an array of layer objects.

criticnet = [
    featureinputlayer(prod(obsinfo.dimension))
    fullyconnectedlayer(50)
    relulayer
    fullyconnectedlayer(1)
    ];

convert the network to a dlnetwork object and display the number of weights.

criticnet = dlnetwork(criticnet);
summary(criticnet)
   initialized: true
   number of learnables: 201
   inputs:
      1   'input'   2 features

create the critic approximator object using criticnet and the observation specification. for more information on value function approximators, see rlvaluefunction.

critic = rlvaluefunction(criticnet,obsinfo);

check your critic with a random input observation.

getvalue(critic,{rand(obsinfo.dimension)})
ans = single
    -0.0969

actor-critic agents use a parametrized stochastic policy, which for continuous action spaces is implemented by a continuous gaussian actor. this actor takes an observation as input and returns as output a random action sampled from a gaussian probability distribution.

to approximate the mean values and standard deviations of the gaussian distribution, you must use a neural network with two output layers, each having as many elements as the dimension of the action space. one output layer must return a vector containing the mean values for each action dimension. the other must return a vector containing the standard deviation for each action dimension.

note that standard deviations must be nonnegative and mean values must fall within the range of the action. therefore the output layer that returns the standard deviations must be a softplus or relu layer, to enforce nonnegativity, while the output layer that returns the mean values must be a scaling layer, to scale the mean values to the output range.

for this example the environment has only one observation channel and therefore the network has only one input layer.

define each network path as an array of layer objects, and assign names to the input and output layers of each path. these names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel.

% input path
inpath = [ 
    featureinputlayer(prod(obsinfo.dimension),name="netobsin")
    fullyconnectedlayer(prod(actinfo.dimension),name="infc") 
    ];
% mean value path
meanpath = [ 
    tanhlayer(name="tanhmean");
    fullyconnectedlayer(50)
    relulayer
    fullyconnectedlayer(prod(actinfo.dimension));
    scalinglayer( ...
    name="netmout", ...
    scale=actinfo.upperlimit)  % scale to range
    ];
% standard deviation path
sdevpath = [ 
    tanhlayer(name="tanhstdv");
    fullyconnectedlayer(50)
    relulayer
    fullyconnectedlayer(prod(actinfo.dimension));
    softpluslayer(name="netsdout")  % nonnegative
    ];
% add layers to network object
actornet = layergraph;
actornet = addlayers(actornet,inpath);
actornet = addlayers(actornet,meanpath);
actornet = addlayers(actornet,sdevpath);
% connect layers
actornet = connectlayers(actornet,"infc","tanhmean/in");
actornet = connectlayers(actornet,"infc","tanhstdv/in");
% plot network
plot(actornet)

figure contains an axes object. the axes object contains an object of type graphplot.

convert the network to a dlnetwork object and display the number of learnable parameters (weights).

actornet = dlnetwork(actornet);
summary(actornet)
   initialized: true
   number of learnables: 305
   inputs:
      1   'netobsin'   2 features

create the actor approximator object using actornet and the environment specifications. for more information on continuous gaussian actors, see rlcontinuousgaussianactor.

actor = rlcontinuousgaussianactor(actornet, obsinfo, actinfo, ...
    actionmeanoutputnames="netmout",...
    actionstandarddeviationoutputnames="netsdout",...
    observationinputnames="netobsin");

check your actor with a random input observation.

getaction(actor,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[-1.2332]}

create the ac agent using the actor and the critic.

agent = rlacagent(actor,critic)
agent = 
  rlacagent with properties:
            agentoptions: [1x1 rl.option.rlacagentoptions]
    useexplorationpolicy: 1
         observationinfo: [1x1 rl.util.rlnumericspec]
              actioninfo: [1x1 rl.util.rlnumericspec]
              sampletime: 1

specify agent options, including training options for its actor and critic.

agent.agentoptions.numstepstolookahead = 32;
agent.agentoptions.discountfactor=0.99;
agent.agentoptions.criticoptimizeroptions.learnrate=8e-3;
agent.agentoptions.criticoptimizeroptions.gradientthreshold=1;
agent.agentoptions.actoroptimizeroptions.learnrate=8e-3;
agent.agentoptions.actoroptimizeroptions.gradientthreshold=1;

check your agent using a random input observation.

getaction(agent,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[-1.5401]}

you can now test and train the agent within the environment.

for this example, load the predefined environment used for the train dqn agent to balance cart-pole system example. this environment has a four-dimensional observation vector (cart position and velocity, pole angle, and pole angle derivative), and a scalar action with two possible elements (a force of either -10 or 10 n applied on the cart).

env = rlpredefinedenv("cartpole-discrete");

get observation and action information.

obsinfo = getobservationinfo(env);
actinfo = getactioninfo(env);

the agent creation function initializes the actor and critic networks randomly. ensure reproducibility by fixing the seed of the random generator.

rng(0)

to model the parametrized value function within the critic, use a recurrent neural network, which must have one input layer (which receives the content of the observation channel, as specified by obsinfo) and one output layer (which returns the scalar value).

define the network as an array of layer objects. to create a recurrent network, use a sequenceinputlayer as the input layer and include at least one lstmlayer.

criticnet = [
    sequenceinputlayer(prod(obsinfo.dimension))
    lstmlayer(10)
    relulayer
    fullyconnectedlayer(1)
    ];

convert the network to a dlnetwork object and display the number of learnable parameters (weights).

criticnet = dlnetwork(criticnet);
summary(criticnet)
   initialized: true
   number of learnables: 611
   inputs:
      1   'sequenceinput'   sequence input with 4 dimensions

create the critic approximator object using criticnet and the observation specification. for more information on value function approximators, see rlvaluefunction.

critic = rlvaluefunction(criticnet,obsinfo);

check the critic with a random input observation.

getvalue(critic,{rand(obsinfo.dimension)})
ans = single
    -0.0344

since the critic has a recurrent network, the (discrete categorical) actor must also use a recurrent network. the network must have one input layer (which receives the content of the environment observation channel, as specified by obsinfo) and one output layer (which must return a vector of probabilities for each possible action, as specified by actinfo).

define the recurrent network as an array of layer objects.

actornet = [
    sequenceinputlayer(prod(obsinfo.dimension))
    lstmlayer(20)
    relulayer
    fullyconnectedlayer(numel(actinfo.elements))
    ];

convert the network to a dlnetwork object and display the number of weights.

actornet = dlnetwork(actornet);
summary(actornet)
   initialized: true
   number of learnables: 2k
   inputs:
      1   'sequenceinput'   sequence input with 4 dimensions

create the actor using actornet and the observation and action specifications. for more information on discrete categorical actors, see rldiscretecategoricalactor.

actor = rldiscretecategoricalactor(actornet,obsinfo,actinfo);

check the actor with a random input observation.

getaction(actor,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[10]}

set some training options for the critic.

criticopts = rloptimizeroptions( ...
    learnrate=8e-3,gradientthreshold=1);

set some training options for the actor.

actoropts = rloptimizeroptions( ...
    learnrate=8e-3,gradientthreshold=1);

specify training options for the agent, and include actoropts and criticopts. since the agent uses recurrent neural networks, numstepstolookahead is treated as the training trajectory length.

agentopts = rlacagentoptions( ...
    numstepstolookahead=32, ...
    discountfactor=0.99, ...
    criticoptimizeroptions=criticopts, ...
    actoroptimizeroptions=actoropts);

create an ac agent using the actor, the critic, and the agent options object.

agent = rlacagent(actor,critic,agentopts)
agent = 
  rlacagent with properties:
            agentoptions: [1x1 rl.option.rlacagentoptions]
    useexplorationpolicy: 1
         observationinfo: [1x1 rl.util.rlnumericspec]
              actioninfo: [1x1 rl.util.rlfinitesetspec]
              sampletime: 1

to check your agent, return the action from a random observation.

getaction(agent,{rand(obsinfo.dimension)})
ans = 1x1 cell array
    {[10]}

to evaluate the agent using sequential observations, use the sequence length (time) dimension. for example, obtain actions for a sequence of 9 observations.

[action,state] = getaction(agent, ...
    {rand([obsinfo.dimension 1 9])});

display the action corresponding to the seventh element of the observation sequence.

action = action{1};
action(1,1,1,7)
ans = -10

you can now test and train the agent within the environment.

to train an agent using the asynchronous advantage actor-critic (a3c) method, you must set the agent and parallel training options appropriately.

when creating the ac agent, set the numstepstolookahead value to be greater than 1. common values are 64 and 128.

agentopts = rlacagentoptions(numstepstolookahead=64);

use agentopts when creating your agent. alternatively, create your agent first and then modify its options, including the actor and critic options, later using dot notation.
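
for example, a minimal sketch of the dot-notation alternative (assuming obsinfo and actinfo are the environment specifications; the learning rate value is illustrative):

agent = rlacagent(obsinfo,actinfo);
agent.agentoptions.numstepstolookahead = 64;
agent.agentoptions.actoroptimizeroptions.learnrate = 1e-3;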

configure the training algorithm to use asynchronous parallel training.

trainopts = rltrainingoptions(useparallel=true);
trainopts.parallelizationoptions.mode = "async";

you can now use trainopts to train your ac agent using the a3c method.

for an example on asynchronous advantage actor-critic agent training, see train ac agent to balance cart-pole system using parallel computing.

tips

  • for continuous action spaces, the rlacagent object does not enforce the constraints set by the action specification, so you must enforce action space constraints within the environment, for example by saturating the action inside the environment step function, as sketched below.
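
a minimal sketch of that saturation (the variables action and actinfo are hypothetical and stand for the received action and the action specification available inside the environment):

% inside the environment step function, before applying the action to the dynamics
action = min(max(action,actinfo.lowerlimit),actinfo.upperlimit);  % saturate to the specified range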

version history

introduced in r2019a
