Train reinforcement learning agents within a specified environment
Since R2019a
Syntax
trainStats = train(env,agents)
trainStats = train(agents,env)
trainStats = train(___,trainOpts)
trainStats = train(___,prevTrainStats)
trainStats = train(___,Name=Value)
Description
trainStats = train(env,agents) trains one or more reinforcement learning agents within the environment env, using default training options, and returns training results in trainStats. Although agents is an input argument, after each training episode, train updates the parameters of each agent specified in agents to maximize their expected long-term reward from the environment. This is possible because each agent is a handle object. When training terminates, agents reflects the state of each agent at the end of the final training episode.
Note
To train an off-policy agent offline using existing data, use trainFromData.
trainStats = train(agents,env) performs the same training as the previous syntax.
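For example, the following sketch trains a default DQN agent in a predefined cart-pole environment using default training options (the environment and agent type here are illustrative choices, not requirements of this syntax):

% Create a predefined environment and a default agent from its specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
agent = rlDQNAgent(obsInfo,actInfo);   % agent with default options

% Train with default training options. Because agent is a handle object,
% train updates it in place as training progresses.
trainStats = train(env,agent);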
trainStats = train(___,trainOpts) trains agents within env, using the training options object trainOpts. Use training options to specify training parameters such as the criteria for terminating training, when to save agents, the maximum number of episodes to train, and the maximum number of steps per episode.
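For example, a sketch using a training options object (the specific option values shown are illustrative):

% Specify episode limits and a stopping criterion, then train.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=1000, ...
    MaxStepsPerEpisode=500, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=480);

trainStats = train(agent,env,trainOpts);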
trainStats = train(___,prevTrainStats) resumes training from the last values of the agent parameters and training results contained in prevTrainStats, which is returned by a previous call to train.
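For example, assuming the same agent, environment, and options as before, a sketch of resuming training from a previous result:

% First training run.
trainStats = train(agent,env,trainOpts);

% Resume later from the previous results. The agent parameters and the
% training statistics continue from where the first call stopped.
trainStats = train(agent,env,trainStats);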
trainStats = train(___,Name=Value) trains agents with additional name-value arguments. Use this syntax to specify a logger or evaluator object to be used in training. Logger and evaluator objects allow you to periodically log results to disk and to evaluate agents, respectively.
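For example, a sketch using a data logger and a periodic evaluator (the Logger and Evaluator argument names and the evaluator settings shown are assumptions used to illustrate this syntax; see rlDataLogger and the evaluator object documentation for the exact interface):

% Log training data to disk and evaluate the agent periodically.
logger = rlDataLogger();
evaluator = rlEvaluator(NumEpisodes=5,EvaluationFrequency=25);  % illustrative settings

trainStats = train(agent,env,trainOpts, ...
    Logger=logger, ...
    Evaluator=evaluator);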
Examples
Input Arguments
Output Arguments
Tips
train updates the agents as training progresses. To preserve the original agent parameters for later use, save the agents to a MAT-file.

By default, calling train opens the Reinforcement Learning Episode Manager, which lets you visualize the progress of the training. The Episode Manager plot shows the reward for each episode, a running average reward value, and the critic estimate Q0 (for agents that have critics). The Episode Manager also displays various episode and training statistics. To turn off the Reinforcement Learning Episode Manager, set the Plots option of trainOpts to "none".

If you use a predefined environment for which there is a visualization, you can use plot(env) to visualize the environment. If you call plot(env) before training, then the visualization updates during training to allow you to visualize the progress of each episode. (For custom environments, you must implement your own plot method.)

Training terminates when the conditions specified in trainOpts are satisfied. To terminate training in progress, in the Reinforcement Learning Episode Manager, click Stop Training. Because train updates the agent at each episode, you can resume training by calling train(agent,env,trainOpts) again, without losing the trained parameters learned during the first call to train.

During training, you can save candidate agents that meet conditions you specify with trainOpts. For instance, you can save any agent whose episode reward exceeds a certain value, even if the overall condition for terminating training is not yet satisfied. train stores saved agents in a MAT-file in the folder you specify with trainOpts. Saved agents can be useful, for instance, to allow you to test candidate agents generated during a long-running training process. For details about saving criteria and saving location, see rlTrainingOptions.
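For example, a sketch that preserves the initial agent and saves promising candidate agents during training (the reward threshold and folder name are illustrative):

% Keep a copy of the untrained agent, since train modifies agent in place.
save("initialAgent.mat","agent");

% Save any agent whose episode reward exceeds 100 to the savedAgents folder.
trainOpts = rlTrainingOptions( ...
    SaveAgentCriteria="EpisodeReward", ...
    SaveAgentValue=100, ...
    SaveAgentDirectory="savedAgents");

plot(env)    % optional: visualize episodes as training runs (predefined environments)
trainStats = train(agent,env,trainOpts);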
Algorithms
In general, train performs the following iterative steps:

1. Initialize agent.
2. For each episode:
   a. Reset the environment.
   b. Get the initial observation s0 from the environment.
   c. Compute the initial action a0 = μ(s0).
   d. Set the current action to the initial action (a←a0) and set the current observation to the initial observation (s←s0).
   e. While the episode is not finished or terminated:
      i.   Step the environment with action a to obtain the next observation s' and the reward r.
      ii.  Learn from the experience set (s,a,r,s').
      iii. Compute the next action a' = μ(s').
      iv.  Update the current action with the next action (a←a') and update the current observation with the next observation (s←s').
      v.   Break if the episode termination conditions defined in the environment are met.
3. If the training termination condition defined by trainOpts is met, terminate training. Otherwise, begin the next episode.
The specifics of how train performs these computations depend on your configuration of the agent and environment. For instance, resetting the environment at the start of each episode can include randomizing initial state values, if you configure your environment to do so.
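The following MATLAB-style sketch of the per-episode loop is for illustration only; it is not the actual implementation of train, and in particular the learning update is internal to the agent rather than a separate public call:

% Conceptual sketch of one episode for a single agent (env and agent assumed).
maxStepsPerEpisode = 500;                 % e.g., trainOpts.MaxStepsPerEpisode
obs = reset(env);                         % get initial observation s0
action = getAction(agent,{obs});          % compute initial action a0 = mu(s0)
for stepCt = 1:maxStepsPerEpisode
    [nextObs,reward,isDone] = step(env,action{1});   % obtain s' and r
    % ... the agent learns from the experience (s,a,r,s') internally ...
    action = getAction(agent,{nextObs});  % compute next action a' = mu(s')
    obs = nextObs;                        % s <- s'
    if isDone
        break                             % environment termination condition met
    end
end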
Extended Capabilities
Version History
Introduced in R2019a
See Also
Functions
trainWithEvolutionStrategy | trainFromData | inspectTrainingResult | rlDataLogger | rlDataViewer | sim