Specify Simulation Options in Reinforcement Learning Designer
To configure the simulation of an agent in the Reinforcement Learning Designer app, specify simulation options on the Simulate tab.
Specify Basic Options
On the Simulate tab, you can specify the following basic simulation options.
Option | Description |
---|---|
Number of episodes | Number of episodes to simulate the agent, specified as a positive integer. At the start of each simulation episode, the app resets the environment. |
Max episode length | Maximum number of steps to run per simulation episode, specified as a positive integer. In general, you define episode termination conditions in the environment. This value is the maximum number of steps to run in an episode if those termination conditions are not met. |
Stop on error | Select this option to stop the simulation when an error occurs during an episode. |
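If you prefer to script the same configuration, the rlSimulationOptions function exposes comparable properties at the command line. The following sketch is a rough mapping, assuming an environment env and an agent agent already exist in the workspace; the property values shown are illustrative, not app defaults.

```matlab
% Command-line counterpart of the basic simulation options in the app.
% Assumes "env" and "agent" already exist in the MATLAB workspace.
simOpts = rlSimulationOptions( ...
    "NumSimulations", 10, ...   % Number of episodes
    "MaxSteps", 500, ...        % Max episode length
    "StopOnError", "on");       % Stop on error

% Simulate the agent in the environment with these options.
experience = sim(env, agent, simOpts);
```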
Specify Parallel Simulation Options
To simulate your agent using parallel computing, on the Simulate tab, click Use Parallel. Simulating agents using parallel computing requires Parallel Computing Toolbox™ software. For more information, see Train Agents Using Parallel Computing and GPUs.
To specify options for parallel simulation, select Use Parallel > Parallel training options.
In the Parallel Simulation Options dialog box, you can specify the following options.
Option | Description |
---|---|
Transfer workspace variables to workers | Select this option to send model and workspace variables to parallel workers. When you select this option, the parallel pool client (the process that starts the simulation) sends variables used in models and defined in the MATLAB® workspace to the workers. |
Random seed for workers | Randomizer initialization for the parallel workers. |
Files to attach to parallel pool | Additional files to attach to the parallel pool. Specify the names of files in the current working directory, with one name on each line. |
Worker setup function | Function to run before simulation starts, specified as the name of a function having no input arguments. This function runs once per worker before simulation begins. Write this function to perform any processing that you need before simulation. |
Worker cleanup function | Function to run after simulation ends, specified as the name of a function having no input arguments. You can write this function to clean up the workspace or perform other processing after simulation terminates. |
The following figure shows an example parallel simulation configuration that uses the following files and functions.
- Data file attached to the parallel pool — workerdata.mat
- Worker setup function — mysetup.m
- Worker cleanup function — mycleanup.m
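At the command line, a comparable parallel configuration can be sketched with rlSimulationOptions. The ParallelizationOptions property names below (TransferBaseWorkspaceVariables, WorkerRandomSeeds, AttachedFiles, SetupFcn, CleanupFcn) are assumptions based on the command-line parallel options; verify them against your Reinforcement Learning Toolbox release before relying on them.

```matlab
% Rough command-line sketch of the parallel simulation setup shown above.
% Property names under ParallelizationOptions are assumptions; verify against
% your toolbox release.
simOpts = rlSimulationOptions("UseParallel", true, "NumSimulations", 10);

% Mirror the dialog box settings and the example files listed above.
simOpts.ParallelizationOptions.TransferBaseWorkspaceVariables = "on";
simOpts.ParallelizationOptions.WorkerRandomSeeds = -1;           % example seed setting
simOpts.ParallelizationOptions.AttachedFiles = "workerdata.mat"; % data file sent to each worker
simOpts.ParallelizationOptions.SetupFcn = @mysetup;              % runs once per worker before simulation
simOpts.ParallelizationOptions.CleanupFcn = @mycleanup;          % runs once per worker after simulation

% Requires Parallel Computing Toolbox; assumes "env" and "agent" exist in the workspace.
experience = sim(env, agent, simOpts);
```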
See Also
Related Examples
- Design and Train Agent Using Reinforcement Learning Designer
- Specify Training Options in Reinforcement Learning Designer