cleanup

Clean up reinforcement learning environment or data logger object

Since R2022a

Description

When you define a custom training loop for reinforcement learning, you can simulate an agent or policy against an environment using the runEpisode function. Use the cleanup function to clean up the environment after running simulations using multiple calls to runEpisode. To clean up the environment after each simulation instead, you can configure runEpisode to automatically call the cleanup function at the end of each episode.
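
The overall pattern is to set up the environment once, run episodes in a loop, and clean up at the end. The following lines are a minimal sketch condensed from the first example below; myPolicy stands in for whatever policy or agent object you have created.

setup(env)                                        % prepare env for multiple simulations
for i = 1:10
    out = runEpisode(env,myPolicy,MaxSteps=300);  % simulate one episode
    % process out.AgentData.Experiences as needed
end
cleanup(env)                                      % release resources held by env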

You can also use cleanup to perform cleanup tasks for a FileLogger or MonitorLogger object after logging data within a custom training loop.
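
The logger workflow follows the same shape. This minimal sketch condenses the second example below; lgr is a FileLogger or MonitorLogger object and "myData" is only an illustrative variable name.

setup(lgr)                          % initialize the logger, for example create the logging folder
for iter = 1:10
    store(lgr,"myData",rand,iter);  % keep data for this iteration in the logger memory
    write(lgr);                     % transfer stored data to the logging target
end
cleanup(lgr)                        % write any data still in memory and release the logger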

Environment Objects

cleanup(env) cleans up the specified reinforcement learning environment after running multiple simulations using runEpisode.

Data Logger Objects

cleanup(lgr) cleans up the specified data logger object after logging data within a custom training loop. This task might involve, for example, transferring any remaining data from the lgr internal memory to a logging target (either a MAT-file or a TrainingProgressMonitor object).

Examples

Create a reinforcement learning environment and extract its observation and action specifications.

env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

To approximate the policy within the actor, use a neural network. Create the network as an array of layer objects.

net = [...
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(24)
    reluLayer
    fullyConnectedLayer(24)
    reluLayer
    fullyConnectedLayer(2)
    softmaxLayer];

Convert the network to a dlnetwork object and display the number of learnable parameters (weights).

net = dlnetwork(net);
summary(net)
   Initialized: true
   Number of learnables: 770
   Inputs:
      1   'input'   4 features

Create a discrete categorical actor using the network.

actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo);

Check your actor with a random observation.

act = getAction(actor,{rand(obsInfo.Dimension)})
act = 1x1 cell array
    {[-10]}

Create a policy object from the actor.

policy = rlStochasticActorPolicy(actor);

Create an experience buffer.

buffer = rlReplayMemory(obsInfo,actInfo);

Set up the environment for running multiple simulations. For this example, configure the training to log any errors rather than send them to the command window.

setup(env,StopOnError="off")

Simulate multiple episodes using the environment and policy. After each episode, append the experiences to the buffer. For this example, run 100 episodes.

for i = 1:100
    output = runEpisode(env,policy,MaxSteps=300);
    append(buffer,output.AgentData.Experiences)
end

Clean up the environment.

cleanup(env)

Sample a mini-batch of experiences from the buffer. For this example, sample 10 experiences.

batch = sample(buffer,10);

You can then learn from the sampled experiences and update the policy and actor.
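
As a minimal sketch, you can inspect the sampled batch as follows. This assumes each element of batch is an experience structure with Observation, Action, Reward, NextObservation, and IsDone fields, the same format that runEpisode produces; a full training step would compute a loss from these fields and update the actor parameters with an optimizer.

rewards = [batch.Reward];      % rewards of the 10 sampled experiences (assumed field name)
doneFlags = [batch.IsDone];    % termination flags of the sampled steps (assumed field name)
avgReward = mean(rewards)      % for example, inspect the average sampled reward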

Log Data to Disk in a Custom Training Loop

This example shows how to log data to disk when training an agent using a custom training loop.

Create a FileLogger object using rlDataLogger.

flgr = rlDataLogger();

Set up the logger object. This operation initializes the object, performing setup tasks such as creating the directory to save the data files.

setup(flgr);

Within a custom training loop, you can now store data to the logger object memory and write data to file.

For this example, store random numbers to the file logger object, grouping them in the variables context1 and context2. When you issue a write command, a MAT-file corresponding to an iteration and containing both variables is saved with the name specified by flgr.LoggingOptions.FileNameRule, in the folder specified by flgr.LoggingOptions.LoggingDirectory.

for iter = 1:10
    % Store three random numbers in memory
    % as elements of the variable "context1"
    for ct = 1:3
        store(flgr, "context1", rand, iter);
    end
    % Store a random number in memory
    % as the variable "context2"
    store(flgr, "context2", rand, iter);
    % Write data to file every 4 iterations
    if mod(iter,4)==0
        write(flgr);
    end
end

Clean up the logger object. This operation performs cleanup tasks such as writing to file any data still in memory.

cleanup(flgr);

Input Arguments

env — Reinforcement learning environment, specified as an environment object, such as a predefined environment created using rlPredefinedEnv or a SimulinkEnvWithAgent object.

If env is a SimulinkEnvWithAgent object and the associated Simulink model is configured to use fast restart, then cleanup terminates the model compilation.

lgr — Data logger object, specified as either a FileLogger or a MonitorLogger object.

Version History

Introduced in R2022a
