rlTurnBasedFunctionEnv

Create custom turn-based multiagent reinforcement learning environment

Since R2023b
Description
Use rlTurnBasedFunctionEnv to create a custom turn-based multiagent reinforcement learning environment in which agents execute in turns. To create your custom environment, you supply the observation and action specifications as well as your own reset and step MATLAB® functions. To verify the operation of your environment, rlTurnBasedFunctionEnv automatically calls validateEnvironment after creating the environment.
Creation

Description
env = rlTurnBasedFunctionEnv(observationInfo,actionInfo,stepFcn,resetFcn) creates a turn-based multiagent environment using observation and action specifications and custom step and reset functions. The cell arrays observationInfo and actionInfo must contain the observation and action specifications, respectively, for each agent. The stepFcn and resetFcn arguments are the names of your step and reset MATLAB functions, respectively, and they are used to set the StepFcn and ResetFcn properties of env.
Input Arguments

observationInfo — Observation specifications
cell array

Observation specifications, specified as a cell array with as many elements as the number of agents. Each element of the cell array must contain the observation specifications for the corresponding agent. The observation specification for an agent must be an rlFiniteSetSpec or rlNumericSpec object, or a vector containing a mix of such objects (in which case each element of the vector defines the properties of a specific observation channel for the agent).
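For instance, a hypothetical sketch of a two-agent observation specification cell array, in which the second agent has two observation channels, might look like this (the dimensions and set values here are illustrative and are not part of the example later on this page):

% Sketch (assumed dimensions): agent 1 has a single continuous channel;
% agent 2 has two channels, a 3-element continuous vector and a binary flag.
obsInfoSketch = { rlNumericSpec([4 1]), ...
                  [rlNumericSpec([3 1]), rlFiniteSetSpec([0 1])] };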
actionInfo — Action specifications
cell array

Action specifications, specified as a cell array with as many elements as the number of agents. Each element of the cell array must contain the action specification for the corresponding agent. The action specification for an agent must be an rlFiniteSetSpec (for discrete action spaces) or rlNumericSpec (for continuous action spaces) object. This object defines the properties of the action channel for the agent.

Note

Only one action channel per agent is allowed.
Properties

StepFcn — Environment step function
function name | function handle | anonymous function handle

Environment step function, specified as a function name, function handle, or handle to an anonymous function. The sim and train functions call StepFcn to update the environment at every simulation or training step.
This function must have two inputs and four outputs, as illustrated by the following signature.

[nextObservation,reward,isDone,updatedInfo] = myStepFunction(action,info)

For a given action input, the step function returns the values of the next observation and reward, a logical value indicating whether the episode is terminated, and an updated environment information variable.
Specifically, the required input and output arguments are:

- action — Cell array containing the current actions from the agents that are currently executing. It must contain as many elements as the number of agents that are executing at the current step. Each element of the action cell array must match the dimensions and data type specified in the corresponding element of the actionInfo(activeAgentIndex) cell array.
- info and updatedInfo — Structure containing the field activeAgentIndex, which is a scalar or vector of indices indicating the agents that are active in the current step. The environment step function can modify this value to control the execution of agents in the next step. Other optional fields can contain the environment state and parameters or any data that you want to pass from one step to the next. The simulation or training functions (train or sim) handle this variable by:
  - initializing info using the second output argument returned by resetFcn, at the beginning of the episode
  - passing info as the second input argument to stepFcn at each training or simulation step
  - updating info using the fourth output argument returned by stepFcn, updatedInfo
- nextObservation — Cell array containing the next observations for all the agents. These are the observations related to the next state (the transition to the next state is caused by the current actions contained in action). Therefore, nextObservation must contain as many elements as the number of agents, and each element must match the dimensions and data types specified in the corresponding element of the observationInfo cell array.
- reward — Vector containing the rewards for all the agents. These are the rewards generated by the transition from the current state to the next one. Each element of the vector must be a numeric scalar.
- isDone — Logical value indicating whether to end the simulation or training episode.
To use additional input arguments beyond the allowed two, define your additional arguments in the MATLAB workspace, then specify stepFcn as an anonymous function that in turn calls your custom function with the additional arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.

Example: StepFcn="myStepFcn"
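As a rough sketch of that pattern (the function myStepWithParams, the function myResetFcn, and the workspace variable envConstants are hypothetical names introduced here for illustration, not part of this page):

% Hypothetical sketch: envConstants is defined in the workspace and captured
% by the anonymous function; myStepWithParams is a custom step function with
% signature [nextObs,reward,isDone,info] = myStepWithParams(action,info,c).
envConstants.gravity = 9.81;
stepHandle = @(action,info) myStepWithParams(action,info,envConstants);
env = rlTurnBasedFunctionEnv(obsInfo,actInfo,stepHandle,@myResetFcn);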
ResetFcn — Environment reset function
function name | function handle | anonymous function handle

Environment reset function, specified as a function name, function handle, or handle to an anonymous function. The sim function calls your reset function to reset the environment at the start of each simulation, and the train function calls it at the start of each training episode.

The reset function that you provide must have no inputs and two outputs, as illustrated by the following signature.

[initialObservation,info] = myResetFunction()
The reset function sets the environment to an initial state and computes the initial value of the observation. For example, you can create a reset function that randomizes certain state values such that each training episode begins from different initial conditions. initialObservation must be a cell array containing the initial observations for all the agents. Therefore, initialObservation must contain as many elements as the number of agents, and each element must match the dimensions and data types specified in the corresponding element of the observationInfo cell array.
The info output of resetFcn initializes the Info property of your environment and contains any data that you want to pass from one step to the next. This can be the environment state or a structure containing state and parameters. The simulation or training function (train or sim) supplies the current value of info as the second input argument of stepFcn, then uses the fourth output argument returned by stepFcn to update the value of info.
To use input arguments in your reset function, define your arguments in the MATLAB workspace, then specify resetFcn as an anonymous function that in turn calls your custom function with the arguments defined in the workspace, as shown in the example Create Custom Environment Using Step and Reset Functions.

Example: ResetFcn="myResetFcn"
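A similar hypothetical sketch for a parameterized reset function (myResetWithParams, myStepFcn, and envConstants are illustrative names, not part of this page):

% Hypothetical sketch: the anonymous function takes no inputs, matching the
% required reset signature, and captures envConstants from the workspace.
resetHandle = @() myResetWithParams(envConstants);
env = rlTurnBasedFunctionEnv(obsInfo,actInfo,@myStepFcn,resetHandle);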
Info — Information to pass to next step
structure

Information to pass to the next step, specified as a structure containing the field activeAgentIndex, which is a scalar or vector of indices indicating the agents that are active in the current step. The environment step function can modify this field to control the execution of agents in the next step. Other optional fields can contain the environment state and parameters or any data that you want to pass from one step to the next.
When resetFcn is called, whatever you define as the info output of resetFcn initializes this property. When a step occurs, the simulation or training function (train or sim) uses the current value of Info as the second input argument for stepFcn. Once stepFcn completes, the simulation or training function then updates the current value of Info using the fourth output argument returned by stepFcn.

Example: Info.activeAgentIndex=[2 3]
Object Functions

getActionInfo | Obtain action data specifications from reinforcement learning environment, agent, or experience buffer
getObservationInfo | Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer
train | Train reinforcement learning agents within a specified environment
sim | Simulate trained reinforcement learning agents within a specified environment
validateEnvironment | Validate custom reinforcement learning environment
Examples

Create Custom Turn-Based Multiagent Function Environment

Create a custom turn-based multiagent environment by supplying custom MATLAB® functions. Using rlTurnBasedFunctionEnv, you can create a custom MATLAB reinforcement learning environment in which agents execute in turns. To create your custom turn-based environment, you must define observation specifications, action specifications, and step and reset functions.

For this example, consider an environment containing four agents, all of them having a continuous observation space, and receiving observation vectors of four, two, five, and three elements, respectively.
Define the agent observation spaces using a cell array.

obsInfo = { rlNumericSpec([4 1]), ...
            rlNumericSpec([2 1]), ...
            rlNumericSpec([5 1]), ...
            rlNumericSpec([3 1]) };
For this example, the first and fourth agents have finite action sets containing two and four elements, respectively, while the second and the third have continuous action spaces consisting of a scalar and a two-dimensional vector, respectively. Define the agent action sets and spaces using a cell array.

actInfo = { rlFiniteSetSpec([1 2]), ...
            rlNumericSpec([1 1]), ...
            rlNumericSpec([2 1]), ...
            rlFiniteSetSpec([1 2 3 4]) };
Next, specify your step and reset functions. For this example, use the functions resetFcn and stepFcn defined at the end of the example.

Note that while the custom reset and step functions that you pass to rlTurnBasedFunctionEnv must have exactly zero and two input arguments, respectively, you can avoid this limitation by using anonymous functions. For an example of how to do this, see Create Custom Environment Using Step and Reset Functions.

To create the custom turn-based multiagent function environment, use rlTurnBasedFunctionEnv.
env = rlTurnBasedFunctionEnv( ...
    obsInfo,actInfo, ...
    @stepFcn,@resetFcn)

env = 
  rlTurnBasedFunctionEnv with properties:

     StepFcn: @stepFcn
    ResetFcn: @resetFcn
        Info: [1x1 struct]
You can now create agents for env and train or simulate them as you would for any other environment.
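For instance, a minimal sketch (not part of this example) that builds default PPO agents from the specifications above and runs a short simulation might look like the following; the agent type and option values are illustrative choices.

% Sketch (assumed setup): one default PPO agent per agent index, created
% from the observation and action specifications, then a short simulation.
agents = [rlPPOAgent(obsInfo{1},actInfo{1}), ...
          rlPPOAgent(obsInfo{2},actInfo{2}), ...
          rlPPOAgent(obsInfo{3},actInfo{3}), ...
          rlPPOAgent(obsInfo{4},actInfo{4})];
experience = sim(env,agents,rlSimulationOptions(MaxSteps=10));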
Environment Functions

Environment reset function.

function [initialObs, info] = resetFcn()
% resetFcn sets the default state of the environment.
%
% - initialObs is a 1xN cell array (N is the total number of agents).
% - info is a structure with the field:
%   - activeAgentIndex: a scalar or vector of agent indices that
%     are active in the current step. The value can be modified to
%     control the execution of agents in the next step.
%   - Other fields of any MATLAB data type can be used to pass
%     information from one step to the next.

% For this example, initialize the agent observations randomly.
initialObs = { rand([4 1]), ...
               rand([2 1]), ...
               rand([5 1]), ...
               rand([3 1]) };

% Initialize the info structure.
info.environmentState = initialObs;
info.executionOrder = {1, [2,3], 4};
info.turnCount = 1;
info.activeAgentIndex = 1;
end
Environment step function.

function [nextObs, reward, isDone, info] = stepFcn(action, info)
% stepFcn specifies how the environment advances to the next state
% given the actions from the currently active agents.
%
% - nextObs is a 1xN cell array (N is the total number of agents).
% - action is a 1xP cell array (P is the number of active agents).
% - reward is a 1xN numeric array.
% - isDone is a logical or numeric scalar.
% - info is a structure with the field:
%   - activeAgentIndex: a scalar or vector of agent indices that
%     are active in the current step. The value can be modified to
%     control the execution of agents in the next step.
%   - Other fields of any MATLAB data type can be used to pass
%     information from one step to the next.

% For this example, just return to each agent a random observation.
nextObs = { rand([4 1]), ...
            rand([2 1]), ...
            rand([5 1]), ...
            rand([3 1]) };

% Return a random reward vector multiplied by the norm of the action of
% the first of the currently executing agents.
reward = rand(4,1)*norm(action{1});

% Return a false is-done value.
isDone = false;

% Extract the execution order and turn count.
ord = info.executionOrder;
tc = info.turnCount;

% Reset the turn count to zero when it reaches the number of turns.
if mod(tc, numel(ord)) == 0
    tc = 0;
end

% Set the activeAgentIndex and turnCount fields.
info.activeAgentIndex = ord{tc+1};
info.turnCount = tc+1;

% Set the environmentState field.
info.environmentState = nextObs;
end
Version History

Introduced in R2023b

See Also

Functions

rlPredefinedEnv | rlCreateEnvTemplate | validateEnvironment | rlSimulinkEnv | getObservationInfo | getActionInfo