Reinforcement Learning Agents
The goal of reinforcement learning is to train an agent to complete a task within an uncertain environment. At each time interval, the agent receives observations and a reward from the environment and sends an action to the environment. The reward is an immediate measure of how successful the previous action (taken from the previous state) was with respect to completing the task goal.
The agent and the environment interact at each of a sequence of discrete time steps, as described in Reinforcement Learning Environments.
By convention, the observation can be divided into one or more channels, each of which carries a group of elements all belonging either to a numeric (infinite and continuous) set or to a finite (discrete) set. Each group can be organized according to any number of dimensions (for example, as a vector or a matrix). Note that only one channel is allowed for the action, while the reward must be a numeric scalar. For more information on specification objects for actions and observations, see rlFiniteSetSpec and rlNumericSpec.
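For example, a minimal sketch of creating specification objects directly (the channel dimensions and the action set values are illustrative assumptions):

```matlab
% Observation: one continuous numeric channel with four elements.
obsInfo = rlNumericSpec([4 1]);
obsInfo.Name = "observations";

% Action: one finite (discrete) channel with two possible values.
actInfo = rlFiniteSetSpec([-1 1]);
actInfo.Name = "action";
```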
The agent contains two components: a policy and a learning algorithm.
The policy is a mapping from the current environment observation to a probability distribution over the actions to take. Within an agent, the policy is implemented by a function approximator with tunable parameters and a specific approximation model, such as a deep neural network.
The learning algorithm continuously updates the policy parameters based on the actions, observations, and rewards. The goal of the learning algorithm is to find an optimal policy that maximizes the expected cumulative long-term reward received during the task.
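In the notation of [1], with discount factor γ (between 0 and 1) and R_{t+k+1} denoting the reward received k+1 steps after time t, this objective can be written as maximizing the expected discounted return:

$$G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad \pi^{*} = \arg\max_{\pi}\, \mathbb{E}_{\pi}\left[G_t\right].$$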
Depending on the agent, the learning algorithm operates on one or more parameterized function approximators that learn the policy. Approximators can be used in two ways:
Critics — For a given observation and action, a critic returns an approximation of the policy value (that is, the discounted expected cumulative long-term reward obtained when following the policy).
Actors — For a given observation, an actor returns the action that (often) maximizes the policy value.
Agents that use only critics to select their actions rely on an indirect policy representation. These agents are also referred to as value-based, and they use an approximator to represent a value function (value as a function of the observation) or a Q-value function (value as a function of observation and action). In general, these agents work better with discrete action spaces but can become computationally expensive for continuous action spaces.
Agents that use only actors to select their actions rely on a direct policy representation. These agents are also referred to as policy-based. The policy can be either deterministic or stochastic. In general, these agents are simpler and can handle continuous action spaces, though the training algorithm can be sensitive to noisy measurements and can converge on local minima.
Agents that use both an actor and a critic are referred to as actor-critic agents. In these agents, during training, the actor learns the best action to take using feedback from the critic (instead of using the reward directly). At the same time, the critic learns the value function from the rewards so that it can properly criticize the actor. In general, these agents can handle both discrete and continuous action spaces.
Agent Objects
Reinforcement Learning Toolbox™ represents agents with MATLAB® objects. Such objects interact with environments using object functions (methods) such as getAction, which returns an action as output given an environment observation.
After you create an agent object for a given environment in the MATLAB workspace, you can then use both the environment and agent variables as arguments for the built-in functions train and sim, which train or simulate the agent within the environment, respectively.
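For instance, a minimal sketch using the predefined cart-pole environment (the environment name, agent type, and training option values here are illustrative choices, not requirements):

```matlab
% Create a predefined environment and retrieve its specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Create a DQN agent with default networks.
agent = rlDQNAgent(obsInfo,actInfo);

% Query the agent for an action given a (random) observation.
act = getAction(agent,{rand(obsInfo.Dimension)});

% Train the agent, then simulate it within the environment.
trainOpts = rlTrainingOptions("MaxEpisodes",200,"MaxStepsPerEpisode",500);
trainStats = train(agent,env,trainOpts);
simResults = sim(env,agent);
```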
The software provides different types of built-in agents. For each of these agents, you can manually configure its approximation objects (such as actors and critics) and models (such as neural networks or custom basis functions). For most built-in agents, you can also use a default network configuration. Alternatively, you can create your own custom agent object.
Built-In Agents
The following tables summarize the types, action spaces, and approximators used for all the built-in agents provided with Reinforcement Learning Toolbox software.
On-policy agents attempt to evaluate or improve the policy that they are using to make decisions, whereas off-policy agents evaluate or improve a policy that can be different from the one that they are using to make decisions (or from the one that has been used to generate data). For each agent, the observation space can be discrete, continuous, or mixed. For more information, see [1].
On-Policy Built-In Agents: Type and Action Space

Agent | Type | Action Space |
---|---|---|
SARSA Agents | Value-based | Discrete |
Policy Gradient (PG) Agents | Policy-based | Discrete or continuous |
Actor-Critic (AC) Agents | Actor-critic | Discrete or continuous |
Trust Region Policy Optimization (TRPO) Agents | Actor-critic | Discrete or continuous |
Proximal Policy Optimization (PPO) Agents | Actor-critic | Discrete or continuous |
Off-Policy Built-In Agents: Type and Action Space

Agent | Type | Action Space |
---|---|---|
Q-Learning Agents (Q) | Value-based | Discrete |
Deep Q-Network (DQN) Agents | Value-based | Discrete |
Deep Deterministic Policy Gradient (DDPG) Agents | Actor-critic | Continuous |
Twin-Delayed Deep Deterministic (TD3) Policy Gradient Agents | Actor-critic | Continuous |
Soft Actor-Critic (SAC) Agents | Actor-critic | Continuous |
Model-Based Policy Optimization (MBPO) Agents | Actor-critic | Discrete or continuous |
Built-In Agents: Approximators Used by Each Agent

Approximator | Q, DQN, SARSA | PG | AC, PPO, TRPO | SAC | DDPG, TD3 |
---|---|---|---|---|---|
Value function critic V(S), which you can create using rlValueFunction | | X (if baseline is used) | X | | |
Q-value function critic Q(S,A), which you can create using rlQValueFunction | X | | | X | X |
Multi-output Q-value function critic Q(S), for discrete action spaces, which you can create using rlVectorQValueFunction | X | | | | |
Deterministic policy actor π(S), which you can create using rlContinuousDeterministicActor | | | | | X |
Stochastic (Multinoulli) policy actor π(S), for discrete action spaces, which you can create using rlDiscreteCategoricalActor | | X | X | | |
Stochastic (Gaussian) policy actor π(S), for continuous action spaces, which you can create using rlContinuousGaussianActor | | X | X | X | |
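As an illustration of building one of these approximators by hand, the following is a minimal sketch of a value function critic (the observation dimensions and layer sizes are illustrative assumptions; the layers require Deep Learning Toolbox):

```matlab
% Observation specification for a hypothetical 4-element observation channel.
obsInfo = rlNumericSpec([4 1]);

% Small fully connected network mapping the observation to a scalar value.
net = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(1)
    ];

% Value function critic V(S) backed by the network.
critic = rlValueFunction(net,obsInfo);

% Evaluate the critic for a random observation.
val = getValue(critic,{rand(obsInfo.Dimension)});
```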
Agents with default networks — All agents except Q-learning and SARSA agents support default networks for actors and critics. You can create an agent with a default actor and critic based on the observation and action specifications from the environment. To do so, at the MATLAB command line, perform the following steps (a sketch of these steps appears after the list).

1. Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.
3. If needed, specify the number of neurons in each learnable layer or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.
4. If needed, specify agent options by creating an options object for the specific agent. This options object in turn includes rlOptimizerOptions objects that specify optimizer options for the agent actor or critic.
5. Create the agent using the corresponding agent creation function. The resulting agent contains the appropriate actor and critics listed in the table above. The actor and critic use default agent-specific deep neural networks as internal approximators.
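For example, a minimal sketch of these steps for a DDPG agent (the specification dimensions, neuron count, and learning rates are illustrative assumptions):

```matlab
% Steps 1-2: observation and action specifications. With an existing
% environment, you would call getObservationInfo and getActionInfo instead.
obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([1 1],"LowerLimit",-2,"UpperLimit",2);

% Step 3 (optional): initialization options for the default networks.
initOpts = rlAgentInitializationOptions("NumHiddenUnit",64);

% Step 4 (optional): agent options, including optimizer options for the
% actor and critic.
agentOpts = rlDDPGAgentOptions;
agentOpts.CriticOptimizerOptions.LearnRate = 1e-3;
agentOpts.ActorOptimizerOptions.LearnRate = 1e-4;

% Step 5: create the agent with default actor and critic networks.
agent = rlDDPGAgent(obsInfo,actInfo,initOpts,agentOpts);
```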
For more information on creating actor and critic function approximators, see Create Policies and Value Functions.
You can use the Reinforcement Learning Designer app to import an existing environment and interactively design DQN, DDPG, PPO, or TD3 agents. The app allows you to train and simulate the agent within your environment, analyze the simulation results, refine the agent parameters, and export the agent to the MATLAB workspace for further use and deployment. For more information, see Reinforcement Learning Designer.
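For example, assuming the app command reinforcementLearningDesigner, you can open the app from the command line:

```matlab
% Open the Reinforcement Learning Designer app.
reinforcementLearningDesigner
```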
Choosing Agent Type

When choosing an agent, a best practice is to start with a simpler (and faster to train) algorithm that is compatible with your action and observation spaces. You can then try progressively more complicated algorithms if the simpler ones do not perform as desired.

Discrete action space — For environments with a discrete action space, the Q-learning and SARSA agents are the simplest compatible agents, followed by DQN, PPO, and TRPO. If your observation space is continuous, you cannot use a table as the approximation model.

Continuous action space — For environments with a continuous action space, DDPG is the simplest compatible agent, followed by TD3, PPO, and SAC, which are then followed by TRPO. For such environments, try DDPG first. In general:

TD3 is an improved, more complex version of DDPG.
PPO has more stable updates but requires more training.
SAC is an improved, more complex version of DDPG that generates stochastic policies.
TRPO is a more complex version of PPO that is more robust for deterministic environments with fewer observations.
Model-Based Policy Optimization
If you are using an off-policy agent (DQN, DDPG, TD3, or SAC), you can consider using a model-based policy optimization (MBPO) agent to improve your training sample efficiency. An MBPO agent contains an internal model of the environment, which it uses to generate additional experiences without interacting with the environment.
During training, the MBPO agent generates real experiences by interacting with the environment. These experiences are used to train the internal environment model, which in turn is used to generate additional experiences. The training algorithm then uses both the real and generated experiences to update the agent policy.
An MBPO agent can be more sample efficient than model-free agents because the model can generate large sets of diverse experiences. However, MBPO agents require much more computational time than model-free agents, because they must train the environment model and generate samples in addition to training the base agent.
For more information, see Model-Based Policy Optimization (MBPO) Agent.
Extract Policy Objects from Agents
You can extract a policy object from an agent and then use the extracted policy object to generate deterministic or stochastic actions from the policy, given an input observation. Working with policy objects can be useful for application deployment or custom training purposes. For more information, see Create Policies and Value Functions.
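For example, a minimal sketch, assuming a trained agent variable in the workspace and the getGreedyPolicy object function (agents also provide getExplorationPolicy for stochastic exploration behavior):

```matlab
% Extract a deterministic (greedy) policy object from a trained agent.
policy = getGreedyPolicy(agent);

% Generate an action for a given observation (passed as a cell array).
obs = {rand(obsInfo.Dimension)};
act = getAction(policy,obs);
```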
Custom Agents
You can also train policies using other learning algorithms by creating a custom agent. Creating a custom agent allows you to use the built-in functions train and sim, which can train or simulate your agent. To do so, you create a subclass of a custom agent class and define the agent behavior using a set of required and optional methods. For more information, see Create Custom Reinforcement Learning Agents.
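For illustration, a minimal sketch of such a subclass, assuming the rl.agent.CustomAgent base class and its getActionImpl, getActionWithExplorationImpl, and learnImpl methods (the class name, the continuous-action assumption, and the placeholder logic are illustrative only):

```matlab
classdef MyCustomAgent < rl.agent.CustomAgent
    methods
        function obj = MyCustomAgent(obsInfo,actInfo)
            % Store the observation and action specifications.
            obj = obj@rl.agent.CustomAgent();
            obj.ObservationInfo = obsInfo;
            obj.ActionInfo = actInfo;
        end
    end
    methods (Access = protected)
        function action = getActionImpl(obj,obs)
            % Return the action given by the current (greedy) policy.
            action = {zeros(obj.ActionInfo.Dimension)}; % placeholder (assumes continuous action)
        end
        function action = getActionWithExplorationImpl(obj,obs)
            % Return an action that includes exploration.
            action = {zeros(obj.ActionInfo.Dimension)}; % placeholder (assumes continuous action)
        end
        function action = learnImpl(obj,exp)
            % Update the policy parameters from the experience exp and
            % return the action to take at the next step.
            action = {zeros(obj.ActionInfo.Dimension)}; % placeholder (no learning shown)
        end
    end
end
```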
Alternatively, to implement a custom learning algorithm that does not rely on train or sim, you can create a custom training loop. For more information about custom training loops, see Train Reinforcement Learning Policy Using Custom Training Loop.
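A minimal sketch of such a loop for a MATLAB environment, assuming the environment supports reset and step and that a policy object supporting getAction is available (the learning update itself is left as a placeholder):

```matlab
% Illustrative outer structure of a custom training loop.
numEpisodes = 100;
maxSteps = 500;
for episode = 1:numEpisodes
    obs = reset(env);                               % initial observation
    for stepCount = 1:maxSteps
        act = getAction(policy,{obs});              % query the policy
        [nextObs,reward,isDone] = step(env,act{1}); % advance the environment
        % Placeholder: store (obs, act, reward, nextObs, isDone) and update
        % the policy parameters here using your own learning algorithm.
        obs = nextObs;
        if isDone
            break
        end
    end
end
```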
References

[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2018.
See Also

Objects
rlTD3Agent | rlACAgent