rlFunctionEnv

Create custom reinforcement learning environment using your reset and step functions

Since R2019a

Description

Use rlFunctionEnv to create a custom reinforcement learning environment by supplying your own reset and step MATLAB® functions. This object is useful when you want to create an environment different from the built-in ones available with rlPredefinedEnv. To verify the operation of your environment, rlFunctionEnv automatically calls validateEnvironment after creating the environment.

Creation

Description


env = rlFunctionEnv(observationInfo,actionInfo,stepFcn,resetFcn) creates a reinforcement learning environment using the provided observation and action specifications, observationInfo and actionInfo, respectively. The stepFcn and resetFcn arguments are the names of your step and reset MATLAB functions, respectively, and they are used to set the StepFcn and ResetFcn properties of env.

Input Arguments

Observation specifications, specified as an rlFiniteSetSpec or rlNumericSpec object, or an array containing a mix of such objects. Each element in the array defines the properties of an environment observation channel, such as its dimensions, data type, and name.

Action specifications, specified as either an rlFiniteSetSpec (for discrete action spaces) or rlNumericSpec (for continuous action spaces) object. This object defines the properties of the environment action channel, such as its dimensions, data type, and name.

Note

Only one action channel is allowed.

Properties

Environment step function, specified as a function name, function handle, or handle to an anonymous function. The sim and train functions call StepFcn to update the environment at every simulation or training step.

This function must have two inputs and four outputs, as illustrated by the following signature.

[nextObservation,reward,isDone,updatedInfo] = myStepFunction(action,info)

For a given action input, the step function returns the values of the next observation and reward, a logical value indicating whether the episode is terminated, and an updated environment information variable.

Specifically, the required input and output arguments are described as follows.

  • action — Current action from the agent, which must match the dimensions and data type specified in actionInfo.

  • info — Any data that you want to pass from one step to the next. This can be the environment state or a structure containing state and parameters. The simulation or training functions (train or sim) handle this variable by:

    1. Initializing info using the second output argument returned by ResetFcn, at the beginning of the episode

    2. Passing info as the second input argument to StepFcn at each training or simulation step

    3. Updating info using the fourth output argument returned by StepFcn, updatedInfo

  • nextObservation — Next observation. This is the observation generated by the transition, caused by action, from the current state to the next one. The returned value must match the dimensions and data types specified in observationInfo.

  • reward — Reward generated by the transition, caused by action, from the current state to the next one. The returned value must be a scalar.

  • isDone — Logical value indicating whether to end the simulation or training episode.

To use additional input arguments beyond the allowed two, define your additional arguments in the MATLAB workspace, then specify StepFcn as an anonymous function that in turn calls your custom function with the additional arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.

Example: StepFcn="myStepFcn"
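As an illustration, the following is a minimal sketch of a step function for a hypothetical scalar integrator environment. The state variable, dynamics, reward, and termination condition here are all invented for this example; a real environment would substitute its own.

```matlab
function [nextObs,reward,isDone,updatedInfo] = myIntegratorStep(action,info)
% Hypothetical example: a scalar state nudged by the action each step.
state = info.State + 0.1*action;   % simple Euler update of the state
nextObs = state;                   % observation is the state itself
reward  = -state^2;                % penalize distance from the origin
isDone  = abs(state) > 10;         % end the episode if the state diverges
updatedInfo = info;                % carry the updated state to the next step
updatedInfo.State = state;
end
```

Here info carries the scalar state between steps, so the corresponding observation specification would be rlNumericSpec([1 1]).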

Environment reset function, specified as a function name, function handle, or handle to an anonymous function. The sim function calls your reset function to reset the environment at the start of each simulation, and the train function calls it at the start of each training episode.

The reset function that you provide must have no inputs and two outputs, as illustrated by the following signature.

[initialObservation,info] = myResetFunction

The reset function sets the environment to an initial state and computes the initial value of the observation. For example, you can create a reset function that randomizes certain state values, such that each training episode begins from different initial conditions. The initialObservation output must match the dimensions and data type of observationInfo.

The info output of ResetFcn initializes the Info property of your environment and contains any data that you want to pass from one step to the next. This can be the environment state or a structure containing state and parameters. The simulation or training function (train or sim) supplies the current value of info as the second input argument of StepFcn, then uses the fourth output argument returned by StepFcn to update the value of info.

To use input arguments beyond the allowed zero, define your additional arguments in the MATLAB workspace, then specify ResetFcn as an anonymous function that in turn calls your custom function with the additional arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.

Example: ResetFcn="myResetFcn"
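As an illustration, a minimal reset function for a hypothetical scalar-state environment might randomize the initial condition and seed the info structure. The state variable and the randomization range here are invented for this example.

```matlab
function [initialObs,info] = myIntegratorReset()
% Hypothetical example: randomize the initial scalar state so that
% each training episode starts from slightly different conditions.
state = 0.05*(2*rand - 1);   % small random value in [-0.05, 0.05]
initialObs = state;          % initial observation is the state itself
info.State = state;          % seed the data passed to the step function
end
```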

Information to pass to the next step. This can be the environment state or a structure containing state and parameters. When ResetFcn is called, whatever you define as the info output of ResetFcn initializes this property. When a step occurs, the simulation or training function (train or sim) uses the current value of Info as the second input argument for StepFcn. Once StepFcn completes, the simulation or training function then updates the current value of Info using the fourth output argument returned by StepFcn.

Example: Info=[-1 0 2.2]

Object Functions

getActionInfo — Obtain action data specifications from reinforcement learning environment, agent, or experience buffer
getObservationInfo — Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer
train — Train reinforcement learning agents within a specified environment
sim — Simulate trained reinforcement learning agents within specified environment
validateEnvironment — Validate custom reinforcement learning environment

Examples

Create a reinforcement learning environment by supplying custom dynamic functions in MATLAB®. Using rlFunctionEnv, you can create a MATLAB reinforcement learning environment from an observation specification, action specification, and step and reset functions that you define.

For this example, create an environment that represents a system for balancing a cart on a pole. The observations from the environment are the cart position, cart velocity, pendulum angle, and pendulum angular velocity. For additional details about this environment, see Create Custom Environment Using Step and Reset Functions. Create an observation specification for these signals.

obsInfo = rlNumericSpec([4 1]);
obsInfo.Name = "CartPole States";
obsInfo.Description = 'x, dx, theta, dtheta';

The environment has a discrete action space where the agent can apply one of two possible force values to the cart, –10 N or 10 N. Create the action specification for these actions.

actInfo = rlFiniteSetSpec([-10 10]);
actInfo.Name = "CartPole Action";

Next, specify your step and reset functions. For this example, use the supplied functions myResetFunction.m and myStepFunction.m. For details about these functions and how they are constructed, see Create Custom Environment Using Step and Reset Functions.

While the custom reset and step functions that you must pass to rlFunctionEnv must have exactly zero and two arguments, respectively, you can work around this limitation by using anonymous functions. Specifically, you define the reset and step functions that you pass to rlFunctionEnv as anonymous functions (with zero and two arguments, respectively) that in turn call your custom functions that have additional arguments. For more details on how to do this, see Create Custom Environment Using Step and Reset Functions.
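For instance, if a step function needed an extra physical parameter, one way to wrap it looks like the following sketch. The function name myStepWithMass and the cartMass parameter are hypothetical, invented for this illustration.

```matlab
% Hypothetical step function that takes a third argument:
%   [obs,rwd,isDone,info] = myStepWithMass(action,info,cartMass)
cartMass = 1.0;   % extra argument defined in the MATLAB workspace

% The anonymous function has exactly the two required inputs and
% captures cartMass from the workspace when the handle is created.
stepHandle = @(action,info) myStepWithMass(action,info,cartMass);

% stepHandle can then be passed to rlFunctionEnv in place of a
% step function name.
```

Because the anonymous function captures cartMass at creation time, later changes to the workspace variable do not affect the environment.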

Create the custom environment using the defined observation specification, action specification, and function names.

env = rlFunctionEnv(obsInfo,actInfo,"myStepFunction","myResetFunction")
env = 
  rlFunctionEnv with properties:

     StepFcn: "myStepFunction"
    ResetFcn: "myResetFunction"
        Info: [4x1 double]

You can now create agents for env and train or simulate them as you would for any other environment.
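For example, one possible next step is sketched below, assuming a default PPO agent built from the environment's specifications; the training option values are arbitrary placeholders, not recommendations.

```matlab
% Build a default agent from the environment's specifications.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
agent = rlPPOAgent(obsInfo,actInfo);   % default actor and critic networks

% Train, stopping once the average reward is high enough.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=500, ...
    MaxStepsPerEpisode=500, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=480);
trainStats = train(agent,env,trainOpts);

% Simulate the trained agent for one episode.
simOpts = rlSimulationOptions(MaxSteps=500);
experience = sim(env,agent,simOpts);
```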

Version History

Introduced in R2019a