
Create Custom Grid World Environments

A grid world is a two-dimensional, cell-based environment in which the agent starts from one cell and moves toward a terminal cell while collecting as much reward as possible. Grid world environments are useful for applying reinforcement learning algorithms to discover optimal paths and policies that take the agent to the terminal goal in the fewest moves.

Basic five-by-five grid world with the agent (indicated by a red circle) positioned in the top left corner, the terminal location (indicated by a light blue square) in the bottom right corner, and four obstacle cells, in black, in the middle.

Reinforcement Learning Toolbox™ lets you create custom MATLAB® grid world environments for your own applications. To create a custom grid world environment:

  1. Create the grid world model.

  2. Configure the grid world model.

  3. Use the grid world model to create your own grid world environment.

Grid World Models

You can create your own grid world model using the createGridWorld function. Specify the grid size when you create the GridWorld object.
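For example, a minimal sketch (the 5-by-5 size here is an arbitrary choice):

gw = createGridWorld(5,5);   % 5-by-5 grid with the default 'Standard' moves
gw.GridSize                  % returns [5 5]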

The GridWorld object has the following properties. Each entry notes whether the property is read-only.

GridSize (read-only: yes)

Dimensions of the grid world, displayed as an m-by-n array. Here, m represents the number of grid rows and n is the number of grid columns.

CurrentState (read-only: no)

Name of the current state of the agent, specified as a string. You can use this property to set the initial state of the agent. By default, the agent starts from cell [1,1].

The agent starts from the CurrentState once you use the reset function on the rlMDPEnv environment object.
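For instance, a short sketch (the starting cell is an arbitrary choice):

gw.CurrentState = "[2,3]";   % the agent now starts from cell [2,3] after a reset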

States (read-only: yes)

A string vector containing the state names of the grid world. For instance, for a 2-by-2 grid world model gw, the States property contains the following:

gw.States = ["[1,1]";
             "[2,1]";
             "[1,2]";
             "[2,2]"];
Actions (read-only: yes)

A string vector containing the list of possible actions that the agent can use. You can set the actions when you create the grid world model by using the moves argument:

gw = createGridWorld(m,n,moves)

Specify moves as either 'Standard' or 'Kings'.

moves         gw.Actions
'Standard'    ['N';'S';'E';'W']
'Kings'       ['N';'S';'E';'W';'NE';'NW';'SE';'SW']
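For instance, a short sketch using the 'Kings' move set (the grid size is arbitrary):

gw = createGridWorld(5,5,'Kings');
gw.Actions   % includes the four diagonal moves in addition to N, S, E, and W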
T (read-only: no)

State transition matrix, specified as a 3-D array. T is a probability matrix that indicates the likelihood of the agent moving from the current state s to any possible next state s' by performing action a.

T can be denoted as

T(s,s',a) = probability(s'|s,a).

For instance, consider a 5-by-5 deterministic grid world object gw with the agent in cell [3,1]. View the state transition matrix for the north direction.

northStateTransition = gw.T(:,:,1)

Basic five-by-five grid world showing the agent moving north.

In the figure above, the value of northStateTransition(3,2) is 1 because the agent moves from cell [3,1] (state index 3) to cell [2,1] (state index 2) with action 'N'. A probability of 1 indicates that, from a given state, the agent has a 100% chance of moving one cell north on the grid when it goes north. For an example showing how to set up the state transition matrix, see Train Reinforcement Learning Agent in Basic Grid World.
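As a sketch of editing T directly, the following makes the move north from cell [3,1] stochastic; the 0.8/0.2 split is an arbitrary example value, and state2idx converts state names to linear state indices:

s  = state2idx(gw,"[3,1]");   % index of the current state
sN = state2idx(gw,"[2,1]");   % index of the cell to the north
sE = state2idx(gw,"[3,2]");   % index of the cell to the east
gw.T(s,:,1)  = 0;             % clear the 'N' row for this state
gw.T(s,sN,1) = 0.8;           % 80% chance of moving north
gw.T(s,sE,1) = 0.2;           % 20% chance of slipping east

Each row of T(:,:,a) must sum to 1 so that it remains a valid probability distribution.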

R (read-only: no)

Reward transition matrix, specified as a 3-D array. R determines how much reward the agent receives after performing an action in the environment. R has the same shape and size as the state transition matrix T.

The reward transition matrix R can be denoted as

r = R(s,s',a).

Set up R such that the agent receives a reward after every action. For instance, you can set up a positive reward if the agent transitions over obstacle states and when it reaches the terminal state. You can also set up a default reward of -1 for all actions the agent takes, independent of the current state and next state. For an example showing how to set up the reward transition matrix, see Train Reinforcement Learning Agent in Basic Grid World.
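As a sketch, assuming the terminal state is already set to "[5,5]" (see TerminalStates below) and using the arbitrary example values of -1 per move and +10 at the goal:

nS = numel(gw.States);                            % number of states
nA = numel(gw.Actions);                           % number of actions
gw.R = -1*ones(nS,nS,nA);                         % default reward for every move
gw.R(:,state2idx(gw,gw.TerminalStates),:) = 10;   % bonus for reaching the terminal cell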

ObstacleStates (read-only: no)

ObstacleStates are states that cannot be reached in the grid world, specified as a string vector. Consider the following 5-by-5 grid world model gw.

Basic five-by-five grid world with an arrow pointing to the terminal state location (indicated by a light blue square) in the bottom right corner.

The black cells are obstacle states, and you can specify them using the following syntax:

gw.ObstacleStates = ["[3,3]";"[3,4]";"[3,5]";"[4,3]"];

For a workflow example, see Train Reinforcement Learning Agent in Basic Grid World.
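If setting ObstacleStates does not update T in your workflow, you can also patch T manually so that a move into an obstacle leaves the agent where it is. A minimal sketch for a deterministic grid (this loop is an illustration, not a built-in function):

obsIdx = state2idx(gw,gw.ObstacleStates);   % linear indices of the obstacle cells
gw.T(:,obsIdx,:) = 0;                       % no move may land on an obstacle
for a = 1:numel(gw.Actions)
    for s = 1:numel(gw.States)
        if sum(gw.T(s,:,a)) == 0            % this move previously led into an obstacle
            gw.T(s,s,a) = 1;                % the agent bounces back and stays put
        end
    end
end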

TerminalStates (read-only: no)

TerminalStates are the final states in the grid world, specified as a string vector. Consider the previous 5-by-5 grid world model gw. The blue cell is the terminal state, and you can specify it by:

gw.TerminalStates = "[5,5]";

For a workflow example, see Train Reinforcement Learning Agent in Basic Grid World.

Grid World Environments

You must create a Markov decision process (MDP) environment from the grid world model of the previous steps by using rlMDPEnv. An MDP is a discrete-time stochastic control process: it provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of the decision maker. The agent uses the grid world environment object rlMDPEnv to interact with the grid world model object GridWorld.
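A minimal sketch that wraps a configured model in an environment; the reset and step calls follow the standard MATLAB environment interface, and the action index 1 corresponds to 'N' for standard moves:

env = rlMDPEnv(gw);                  % MDP environment built around the model
obsInfo = getObservationInfo(env);   % discrete observation: the state index
actInfo = getActionInfo(env);        % discrete action: the action index
obs0 = reset(env);                   % the agent returns to gw.CurrentState
[obs,reward,isDone] = step(env,1);   % take one step with action 1 ('N')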

For more information, see rlMDPEnv and Train Reinforcement Learning Agent in Basic Grid World.
