Train Agents Using Parallel Computing and GPUs
If you have Parallel Computing Toolbox™ software, you can run parallel simulations on multicore processors or GPUs. If you additionally have MATLAB® Parallel Server™ software, you can run parallel simulations on computer clusters or cloud resources.
Note that parallel training and simulation are not supported for agents that use recurrent neural networks or for agents within multi-agent environments.
Regardless of which devices you use to simulate or train the agent, once the agent has been trained you can generate code to deploy the optimal policy on a CPU or GPU. This is explained in more detail in Deploy Trained Reinforcement Learning Policies.
Using Multiple Processes
When you train agents using parallel computing, the parallel pool client (the MATLAB process that starts the training) sends copies of both its agent and its environment to each parallel worker. Each worker simulates the agent within the environment and sends its simulation data back to the client. The client agent learns from the data sent by the workers and sends the updated policy parameters back to the workers.
To create a parallel pool of N workers, use the following syntax.
pool = parpool(N);
If you do not create a parallel pool using parpool (Parallel Computing Toolbox), the train function automatically creates one using your default parallel pool preferences. For more information on specifying these preferences, see Specify Your Parallel Preferences (Parallel Computing Toolbox). Note that using a parallel pool of thread workers, such as pool = parpool("threads"), is not supported.
To train an agent using multiple processes, you must pass to the train function an rlTrainingOptions object in which the UseParallel property is set to true.
For more information on configuring your training to use parallel computing, see the UseParallel and ParallelizationOptions options in rlTrainingOptions.
For an example showing how to configure options for asynchronous advantage actor-critic (A3C) agent training, see the last example in rlTrainingOptions.
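As a minimal sketch, assuming you have already created an agent and an environment (here named agent and env), you can enable parallel training as follows. The stopping criteria values are placeholders.
trainOpts = rlTrainingOptions( ...
    UseParallel=true, ...              % distribute simulations across parallel workers
    MaxEpisodes=1000, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=480);
% train starts a parallel pool automatically if one is not already open
trainingStats = train(agent,env,trainOpts);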
For an example that trains an agent using parallel computing in MATLAB, see Train AC Agent to Balance Cart-Pole System Using Parallel Computing. For examples that train agents using parallel computing in Simulink®, see Train DQN Agent for Lane Keeping Assist Using Parallel Computing and Train Biped Robot to Walk Using Reinforcement Learning Agents.
Agent-Specific Parallel Training Considerations
Reinforcement learning agents can be trained in parallel in two main ways: experience-based parallelization, in which the workers only collect experiences, and gradient-based parallelization, in which the workers also compute the gradients that allow the agent approximators to learn.
Experience-Based Parallelization (DQN, DDPG, TD3, SAC, PPO, TRPO)
When training a DQN, DDPG, TD3, SAC, PPO, or TRPO agent in parallel, the environment simulation is done by the workers and the gradient computation is done by the client. Specifically, the workers simulate (their copy of) the agent within (their copy of) the environment, and send experience data (observation, action, reward, next observation, and a termination signal) to the client. The client then computes the gradients from the experiences, updates the agent parameters, and sends the updated parameters back to the workers, which then continue to perform simulations using their copy of the updated agent.
This type of parallel training is also known as experience-based parallelization, and it can run using asynchronous training (that is, the Mode property of the rlTrainingOptions object that you pass to the train function can be set to "async").
With asynchronous training, the client agent calculates the gradients and updates the agent parameters from the received experiences, without waiting to receive experiences from all the workers. The client then sends the updated parameters back to the worker that provided the experiences. Then, while the other workers are still running, that worker updates its copy of the agent and continues to generate experiences using its copy of the environment.
Experience-based parallelization can also run using synchronous training (that is, the Mode property of the rlTrainingOptions object that you pass to the train function must be set to "sync").
With synchronous training, the client agent waits to receive experiences from all of the workers and then calculates the gradients from all these experiences. The client updates the agent parameters and sends the updated parameters to all the workers at the same time. Then, all workers use a single updated agent copy, together with their copy of the environment, to generate experiences. Since each worker must pause execution until all the workers are finished, synchronous training only advances as fast as the slowest worker allows.
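As a minimal sketch, assuming agent and env already exist, you can select the parallelization mode through the ParallelizationOptions property of the training options.
trainOpts = rlTrainingOptions(UseParallel=true);
% Asynchronous training: the client updates the parameters as soon as any
% worker returns experiences.
trainOpts.ParallelizationOptions.Mode = "async";
% Alternatively, use synchronous training, in which the client waits for
% experiences from all workers before each update.
% trainOpts.ParallelizationOptions.Mode = "sync";
trainingStats = train(agent,env,trainOpts);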
With either synchronous or asynchronous training, experience-based parallelization can reduce training time only when the computational cost of simulating the environment is high compared to the cost of optimizing the network parameters. Otherwise, when the environment simulation is fast enough, some workers might lie idle waiting for the client to learn and send back the updated parameters.
In other words, experience-based parallelization can improve sample efficiency (that is, the number of samples an agent can process within a given time) only when the ratio R between the environment step complexity and the learning complexity is large. If both environment simulation and gradient computation (that is, learning) are similarly computationally expensive, experience-based parallelization is unlikely to improve sample efficiency. In this case, for the off-policy agents that are supported in parallel (DQN, DDPG, TD3, and SAC), you can reduce the mini-batch size to make R larger, thereby improving sample efficiency.
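For example, as a minimal sketch (assuming obsInfo and actInfo come from your environment), you can create a default DQN agent with a smaller mini-batch size, so that each learning step on the client is cheaper relative to an environment step on the workers. The value 32 is only a placeholder.
agentOpts = rlDQNAgentOptions(MiniBatchSize=32);   % placeholder value, smaller than the default
agent = rlDQNAgent(obsInfo,actInfo,agentOpts);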
Note
For experience-based parallelization, do not use all of your processor cores for parallel training. For example, if your CPU has six cores, train with four workers. Doing so provides more resources for the parallel pool client to compute gradients based on the experiences sent back from the workers.
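For instance, a minimal sketch for a six-core CPU:
pool = parpool(4);   % leave two cores free for the client process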
For an example of experience-based parallel training, see Train DQN Agent for Lane Keeping Assist Using Parallel Computing.
Gradient-Based Parallelization (AC and PG)
When training an AC or PG agent in parallel, both the environment simulation and the gradient computations are done by the workers. Specifically, the workers simulate (their copy of) the agent within (their copy of) the environment, obtain the experiences, compute the gradients from the experiences, and send the gradients to the client. The client averages the gradients, updates the agent parameters, and sends the updated parameters back to the workers so they can continue to perform simulations using an updated copy of the agent.
For PG agents, gradient-based parallelization requires synchronous training. For AC agents, you can choose either synchronous or asynchronous training. The algorithm used to train an AC agent in asynchronous mode is also referred to as asynchronous advantage actor-critic (A3C).
Synchronous gradient-based parallelization allows you to achieve, in principle, a speed improvement that is nearly linear in the number of workers. However, since each worker must pause execution until all workers are finished, synchronous training only advances as fast as the slowest worker allows.
In general, limiting the number of workers in order to leave some processor cores for the client is not necessary when using gradient-based parallelization, because the gradients are not computed on the client. Therefore, for gradient-based parallelization, it might be beneficial to use all your processor cores for parallel training.
For an example of gradient-based parallel training, see Train AC Agent to Balance Cart-Pole System Using Parallel Computing.
Using GPUs
You can speed up training by performing actor and critic operations (such as gradient computation and prediction) on a local GPU rather than a CPU. To do so, when creating a critic or actor, set its UseDevice option to "gpu" instead of "cpu".
the "gpu"
option requires both parallel computing toolbox software and a cuda® enabled nvidia® gpu. for more information on supported gpus see gpu computing requirements (parallel computing toolbox).
You can use gpuDevice (Parallel Computing Toolbox) to query or select a local GPU device to be used with MATLAB.
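As a minimal sketch, assuming you have a suitable neural network net and the environment specifications obsInfo and actInfo, you can select the first local GPU and create a critic that computes on it.
gpuDevice(1);   % query and select local GPU device 1
critic = rlQValueFunction(net,obsInfo,actInfo,UseDevice="gpu");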
Using GPUs is likely to be beneficial when the actor or critic contains a deep neural network that uses large batch sizes or performs operations such as multiple convolutional layers on input images.
For an example showing how to train an agent using the GPU, see Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation.
Using Both Multiple Processes and GPUs
You can also train agents using both multiple processes and a local GPU (previously selected using gpuDevice (Parallel Computing Toolbox)) at the same time. To do so, first create a critic or actor approximator object in which the UseDevice option is set to "gpu". You can then use the critic and actor to create an agent, and then train the agent using multiple processes. This is done by creating an rlTrainingOptions object in which UseParallel is set to true and passing it to the train function.
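As a minimal sketch, assuming the actor and critic were created with UseDevice set to "gpu" and that env already exists (a DDPG agent is used here only as an illustration):
agent = rlDDPGAgent(actor,critic);                 % agent built from GPU-enabled approximators
trainOpts = rlTrainingOptions(UseParallel=true);   % enable multi-process training
trainingStats = train(agent,env,trainOpts);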
For gradient-based parallelization (which must run in synchronous mode), the environment simulation is done by the workers, which also use their local GPU to calculate the gradients and perform a prediction step. The gradients are then sent back to the parallel pool client process, which averages them, updates the network parameters, and sends them back to the workers so they can continue to simulate the agent, with the new parameters, against the environment.
For experience-based parallelization (which can run in asynchronous mode), the workers simulate the agent against the environment and send experience data back to the parallel pool client. The client then uses its local GPU to compute the gradients from the experiences, updates the network parameters, and sends the updated parameters back to the workers, which continue to simulate the agent, with the new parameters, against the environment.
Note that when using both parallel processing and a GPU to train PPO agents, the workers use their local GPU to compute the advantages, and then send processed experience trajectories (which include advantages, targets, and action probabilities) back to the client.
See Also
rlTrainingOptions | gpuDevice (Parallel Computing Toolbox)
Related Examples
- Train AC Agent to Balance Cart-Pole System Using Parallel Computing
- Train DQN Agent for Lane Keeping Assist Using Parallel Computing
- Train Biped Robot to Walk Using Reinforcement Learning Agents
- Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation
- Train Reinforcement Learning Agents