rlMDPEnv

Create Markov decision process environment for reinforcement learning

Since R2019a

Description

A Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of the decision maker. MDPs are useful for studying optimization problems solved using reinforcement learning. Use rlMDPEnv to create a Markov decision process environment for reinforcement learning in MATLAB®.

Creation

Description

env = rlMDPEnv(MDP) creates a reinforcement learning environment env with the specified MDP model.
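
For instance, the following sketch builds a small generic MDP model with the createMDP function and wraps it in an environment (the number of states and the action names here are illustrative):

MDP = createMDP(8,["up";"down"]);   % generic MDP with 8 states and 2 actions
env = rlMDPEnv(MDP);                % wrap the MDP model in an environment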

Input Arguments

MDP - Markov decision process model, specified as one of the following:

  1. GridWorld object created using the createGridWorld function.

  2. GenericMDP object created using the createMDP function.

Properties

Model - Markov decision process model, specified as a GridWorld object or GenericMDP object.

ResetFcn - Reset function, specified as a function handle.
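
For example, for a grid world environment you can force each episode to start from a fixed state by returning its state index (a sketch; index 2 assumes the 5-by-5 grid world from the example below, where states are numbered column-wise so that cell [2,1] is state 2):

env.ResetFcn = @() 2;   % always reset to cell [2,1] (state 2)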

Object Functions

getActionInfo - Obtain action data specifications from reinforcement learning environment, agent, or experience buffer
getObservationInfo - Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer
sim - Simulate trained reinforcement learning agents within specified environment
train - Train reinforcement learning agents within a specified environment
validateEnvironment - Validate custom reinforcement learning environment
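
For instance, you can query the data specifications of the grid world environment created in the example below (a minimal sketch):

obsInfo = getObservationInfo(env)   % finite-set spec over the 25 grid states
actInfo = getActionInfo(env)        % finite-set spec over the 4 actions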

Examples

For this example, consider a 5-by-5 grid world with the following rules:

  1. A 5-by-5 grid world bounded by borders, with 4 possible actions (North = 1, South = 2, East = 3, West = 4).

  2. The agent begins from cell [2,1] (second row, first column).

  3. The agent receives a reward of +10 if it reaches the terminal state at cell [5,5] (shown in blue in the grid world plot).

  4. The environment contains a special jump from cell [2,4] to cell [4,4] with a reward of +5.

  5. The agent is blocked by obstacles in cells [3,3], [3,4], [3,5], and [4,3] (black cells).

  6. All other actions result in a reward of -1.

First, create a GridWorld object using the createGridWorld function.

GW = createGridWorld(5,5)
GW = 
  GridWorld with properties:

                GridSize: [5 5]
            CurrentState: "[1,1]"
                  States: [25x1 string]
                 Actions: [4x1 string]
                       T: [25x25x4 double]
                       R: [25x25x4 double]
          ObstacleStates: [0x1 string]
          TerminalStates: [0x1 string]
    ProbabilityTolerance: 8.8818e-16

Now, set the initial, terminal, and obstacle states.

GW.CurrentState = '[2,1]';
GW.TerminalStates = '[5,5]';
GW.ObstacleStates = ["[3,3]";"[3,4]";"[3,5]";"[4,3]"];

Update the state transition matrix for the obstacle states and set the jump rule over the obstacle states.

updateStateTranstionForObstacles(GW)
GW.T(state2idx(GW,"[2,4]"),:,:) = 0;
GW.T(state2idx(GW,"[2,4]"),state2idx(GW,"[4,4]"),:) = 1;

Next, define the rewards in the reward transition matrix.

nS = numel(GW.States);
nA = numel(GW.Actions);
GW.R = -1*ones(nS,nS,nA);
GW.R(state2idx(GW,"[2,4]"),state2idx(GW,"[4,4]"),:) = 5;
GW.R(:,state2idx(GW,GW.TerminalStates),:) = 10;
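
As an optional check, you can confirm that the matrices now encode the jump: from cell [2,4], every action leads to cell [4,4] with probability 1 and a reward of 5.

idxFrom = state2idx(GW,"[2,4]");
idxTo = state2idx(GW,"[4,4]");
squeeze(GW.T(idxFrom,idxTo,:))'   % expected: 1 1 1 1 (one entry per action)
squeeze(GW.R(idxFrom,idxTo,:))'   % expected: 5 5 5 5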

Now, use rlMDPEnv to create a grid world environment using the GridWorld object GW.

env = rlMDPEnv(GW)
env = 
  rlMDPEnv with properties:

       Model: [1x1 rl.env.GridWorld]
    ResetFcn: []

You can visualize the grid world environment using the plot function.

plot(env)
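
From here, you can validate the environment and train an agent on it. The sketch below uses a tabular Q-learning agent; rlTable, rlQValueFunction, rlQAgent, and rlTrainingOptions are standard Reinforcement Learning Toolbox functions, but the option values are illustrative rather than tuned.

validateEnvironment(env)   % confirm the environment is well-formed

% Build a tabular Q-value function over the environment's state and
% action specifications, and create a Q-learning agent from it.
qTable = rlTable(getObservationInfo(env),getActionInfo(env));
qFcn = rlQValueFunction(qTable,getObservationInfo(env),getActionInfo(env));
agent = rlQAgent(qFcn);

% Train for a limited number of episodes (illustrative settings).
trainOpts = rlTrainingOptions("MaxEpisodes",200,"MaxStepsPerEpisode",50);
trainStats = train(agent,env,trainOpts);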

Version History

Introduced in R2019a
