evaluate
Evaluate function approximator object given observation (or observation-action) input data
Since R2022a
Description
[outData,state] = evaluate(fcnAppx,inData) evaluates the function approximator object fcnAppx (an actor, critic, or environment function approximator) given the input data inData, and returns the resulting output data together with the updated state of the object.
Examples
Evaluate a Function Approximator Object
This example shows how to evaluate a function approximator object (that is, an actor or a critic). For this example, the function approximator object is a discrete categorical actor, and you evaluate it given some observation data, obtaining in return the action probability distribution and the updated network state.
Create the predefined cart-pole environment and obtain the observation and action specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env)
obsInfo = 
  rlNumericSpec with properties:
     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "CartPole States"
    Description: "x, dx, theta, dtheta"
      Dimension: [4 1]
       DataType: "double"
actInfo = getActionInfo(env)
actInfo = 
  rlFiniteSetSpec with properties:
       Elements: [-10 10]
           Name: "CartPole Action"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"
To approximate the policy within the actor, use a recurrent deep neural network. Define the network as an array of layer objects. Get the dimensions of the observation space and the number of possible actions directly from the environment specification objects.
net = [
    sequenceInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(8)
    reluLayer
    lstmLayer(8,OutputMode="sequence")
    fullyConnectedLayer(numel(actInfo.Elements)) ];
Convert the network to a dlnetwork object and display the number of weights.
net = dlnetwork(net);
summary(net)
   Initialized: true

   Number of learnables: 602

   Inputs:
      1   'sequenceinput'   Sequence input with 4 dimensions
Create a stochastic actor representation for the network.
actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo);
Use evaluate to return the probability of each of the two possible actions. Note that the type of the returned numbers is single, not double.
[prob,state] = evaluate(actor,{rand(obsInfo.Dimension)});
prob{1}
ans = 2x1 single column vector
0.4847
0.5153
Since a recurrent neural network is used for the actor, the second output argument, representing the updated state of the neural network, is not empty. In this case, it contains the updated (cell and hidden) states for the eight units of the LSTM layer used in the network.
state{:}
ans = 8x1 single column vector
-0.0833
0.0619
-0.0066
-0.0651
0.0714
-0.0957
0.0614
-0.0326
ans = 8x1 single column vector
-0.1367
0.1142
-0.0158
-0.1820
0.1305
-0.1779
0.0947
-0.0833
You can use getState and setState to extract and set the current state of the actor.
getState(actor)
ans=2×1 cell array
{8x1 single}
{8x1 single}
actor = setState(actor, ...
    {-0.01*single(rand(8,1)), ...
    0.01*single(rand(8,1))});
You can obtain action probabilities and updated states for a batch of observations. For example, use a batch of five independent observations.
obsBatch = reshape(1:20,4,1,5,1);
[prob,state] = evaluate(actor,{obsBatch})
prob = 1x1 cell array
{2x5 single}
state=2×1 cell array
{8x5 single}
{8x5 single}
The output arguments contain the action probabilities and updated states for each observation in the batch.
Note that the actor treats observation data along the batch length dimension independently, not sequentially.
prob{1}
ans = 2x5 single matrix
0.5187 0.5869 0.6048 0.6124 0.6155
0.4813 0.4131 0.3952 0.3876 0.3845
prob = evaluate(actor,{obsBatch(:,:,[5 4 3 1 2])});
prob{1}
ans = 2x5 single matrix
0.6155 0.6124 0.6048 0.5187 0.5869
0.3845 0.3876 0.3952 0.4813 0.4131
To evaluate the actor using sequential observations, use the sequence length (time) dimension. For example, obtain action probabilities for five independent sequences, each one made of nine sequential observations.
[prob,state] = evaluate(actor, ...
    {rand([obsInfo.Dimension 5 9])})
prob = 1x1 cell array
{2x5x9 single}
state=2×1 cell array
{8x5 single}
{8x5 single}
The first output argument contains a vector of two probabilities (first dimension) for each element of the observation batch (second dimension) and for each time element of the sequence length (third dimension).
The second output argument contains two vectors of final states for each observation batch (that is, the network maintains a separate state history for each observation batch).
Display the probability of the second action, after the seventh sequential observation in the fourth independent batch.
prob{1}(2,4,7)
ans = single
0.5675
For more information on input and output formats for recurrent neural networks, see the Algorithms section of .
Input Arguments

fcnAppx — Function approximator object
function approximator object
Function approximator object, specified as one of the following:

rlValueFunction object — Value function critic

rlQValueFunction object — Q-value function critic

rlVectorQValueFunction object — Multi-output Q-value function critic with a discrete action space

rlContinuousDeterministicActor object — Deterministic policy actor with a continuous action space

rlDiscreteCategoricalActor object — Stochastic policy actor with a discrete action space

rlContinuousGaussianActor object — Stochastic policy actor with a continuous action space

rlContinuousDeterministicTransitionFunction object — Continuous deterministic transition function for a model-based agent

rlContinuousGaussianTransitionFunction object — Continuous Gaussian transition function for a model-based agent

rlContinuousDeterministicRewardFunction object — Continuous deterministic reward function for a model-based agent

rlContinuousGaussianRewardFunction object — Continuous Gaussian reward function for a model-based agent

rlIsDoneFunction object — Is-done function for a model-based agent
inData — Input data for function approximator
cell array
Input data for the function approximator, specified as a cell array with as many elements as the number of input channels of fcnAppx. In the following list, the number of observation channels is indicated by NO.
If fcnAppx is an rlQValueFunction, an rlContinuousDeterministicTransitionFunction, or an rlContinuousGaussianTransitionFunction object, then each of the first NO elements of inData must be a matrix representing the current observation from the corresponding observation channel. They must be followed by a final matrix representing the action (see the sketch after this argument description).

If fcnAppx is a function approximator object representing an actor or critic (but not an rlQValueFunction object), inData must contain NO elements, each one a matrix representing the current observation from the corresponding observation channel.

If fcnAppx is an rlContinuousDeterministicRewardFunction, an rlContinuousGaussianRewardFunction, or an rlIsDoneFunction object, then each of the first NO elements of inData must be a matrix representing the current observation from the corresponding observation channel. They must be followed by a matrix representing the action, and finally by NO elements, each one being a matrix representing the next observation from the corresponding observation channel.
Each element of inData must be a matrix of dimension MC-by-LB-by-LS, where:

MC corresponds to the dimensions of the associated input channel.

LB is the batch size. To specify a single observation, set LB = 1. To specify a batch of (independent) inputs, specify LB > 1. If inData has multiple elements, then LB must be the same for all elements of inData.

LS specifies the sequence length (length of the sequence of inputs along the time dimension) for a recurrent neural network. If fcnAppx does not use a recurrent neural network (which is the case for environment function approximators, as they do not support recurrent neural networks), then LS = 1. If inData has multiple elements, then LS must be the same for all elements of inData.
For more information on input and output formats for recurrent neural networks, see the Algorithms section of .
Example: {rand(8,3,64,1),rand(4,1,64,1),rand(2,1,64,1)}
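For instance, the following is a minimal sketch, not part of the documented example, showing how inData is ordered for an rlQValueFunction critic: observation channels first, then the action. It reuses the cart-pole obsInfo and actInfo created above, and the two-input network (with the arbitrary layer names "obsIn" and "actIn") is purely hypothetical.
% Hypothetical two-input network: one path for the observation, one for the action.
obsPath = featureInputLayer(prod(obsInfo.Dimension),Name="obsIn");
actPath = featureInputLayer(prod(actInfo.Dimension),Name="actIn");
commonPath = [
    concatenationLayer(1,2,Name="concat")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(1)];

qGraph = layerGraph(obsPath);
qGraph = addLayers(qGraph,actPath);
qGraph = addLayers(qGraph,commonPath);
qGraph = connectLayers(qGraph,"obsIn","concat/in1");
qGraph = connectLayers(qGraph,"actIn","concat/in2");
qNet = dlnetwork(qGraph);

% Single-output Q-value critic over the observation-action pair.
qCritic = rlQValueFunction(qNet,obsInfo,actInfo, ...
    ObservationInputNames="obsIn",ActionInputNames="actIn");

% inData has one cell per input channel: the observation first, then the action.
qValue = evaluate(qCritic,{rand(obsInfo.Dimension),-10});
qValue{1}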
Output Arguments

outData — Output data from evaluation of function approximator object
cell array
Output data from the evaluation of the function approximator object, returned as a cell array. The size and contents of outData depend on the type of object you use for fcnAppx, and are shown in the following list. Here, NO is the number of observation channels.
rlContinuousDeterministicTransitionFunction - NO matrices, each one representing the predicted observation from the corresponding observation channel.

rlContinuousGaussianTransitionFunction - NO matrices representing the mean value of the predicted observation for the corresponding observation channel, followed by NO matrices representing the standard deviation of the predicted observation for the corresponding observation channel.

rlContinuousGaussianActor - Two matrices representing the mean value and standard deviation of the action, respectively.

rlDiscreteCategoricalActor - A matrix with the probabilities of each action.

rlContinuousDeterministicActor - A matrix with the action.

rlVectorQValueFunction - A matrix with the values of each possible action (see the sketch after this list).

rlQValueFunction - A matrix with the value of the action.

rlValueFunction - A matrix with the value of the current observation.

rlContinuousDeterministicRewardFunction - A matrix with the predicted reward as a function of current observation, action, and next observation following the action.

rlContinuousGaussianRewardFunction - Two matrices representing the mean value and standard deviation, respectively, of the predicted reward as a function of current observation, action, and next observation following the action.

rlIsDoneFunction - A vector with the probabilities of the predicted termination status. Termination probabilities range from 0 (no termination predicted) to 1 (termination predicted), and depend (in the most general case) on the values of observation, action, and next observation following the action.
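As an illustration of the multi-output case, the following is a minimal sketch, not part of the documented example, of evaluating an rlVectorQValueFunction critic, which returns one value per possible action. It reuses the cart-pole obsInfo and actInfo created above, and the single-input network is hypothetical.
% Hypothetical network: one observation input, one output neuron per possible action.
vNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))];
vNet = dlnetwork(vNet);

% Multi-output Q-value critic over the discrete action set.
vCritic = rlVectorQValueFunction(vNet,obsInfo,actInfo);

% evaluate returns a matrix with one value per element of actInfo.Elements.
qValues = evaluate(vCritic,{rand(obsInfo.Dimension)});
qValues{1}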
Each element of outData is a matrix of dimension D-by-LB-by-LS (a size check is sketched after this list), where:

D is the vector of dimensions of the corresponding output channel of fcnAppx. Depending on the type of function approximator, this channel can carry a predicted observation (or its mean value or standard deviation), an action (or its mean value or standard deviation), the value (or values) of an observation (or observation-action couple), a predicted reward, or a predicted termination status.

LB is the batch size (length of a batch of independent inputs).

LS is the sequence length (length of the sequence of inputs along the time dimension) for a recurrent neural network. If fcnAppx does not use a recurrent neural network (which is the case for environment function approximators, as they do not support recurrent neural networks), then LS = 1.
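As a quick check of this layout, the following sketch reuses the recurrent actor and obsInfo from the example above. For a batch of five sequences of nine observations each, the first output element has size 2-by-5-by-9 (D = 2 action probabilities, LB = 5, LS = 9).
% Evaluate on 5 independent sequences of 9 observations each.
outData = evaluate(actor,{rand([obsInfo.Dimension 5 9])});
size(outData{1})    % [2 5 9], that is, D-by-LB-by-LS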
Note

If fcnAppx is a critic, then evaluate behaves identically to getValue except that it returns results inside a single-cell array. If fcnAppx is an rlContinuousDeterministicActor actor, then evaluate behaves identically to getAction. If fcnAppx is a stochastic actor such as an rlDiscreteCategoricalActor or rlContinuousGaussianActor, then evaluate returns the action probability distribution, while getAction returns a sample action. Specifically, for an rlDiscreteCategoricalActor actor object, evaluate returns the probability of each possible action. For an rlContinuousGaussianActor actor object, evaluate returns the mean and standard deviation of the Gaussian distribution. For these kinds of actors, see also the note in getAction regarding the enforcement of constraints set by the action specification.
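A brief sketch of this difference, assuming the discrete categorical actor and obsInfo from the example above:
obs = {rand(obsInfo.Dimension)};

% evaluate returns the probability of each possible action.
prob = evaluate(actor,obs);
prob{1}

% getAction instead returns an action sampled from that distribution,
% constrained to the values listed in the action specification.
act = getAction(actor,obs);
act{1}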
Note

If fcnAppx is an rlContinuousDeterministicRewardFunction object, then evaluate behaves identically to predict except that it returns results inside a single-cell array. If fcnAppx is an rlContinuousDeterministicTransitionFunction object, then evaluate behaves identically to predict. If fcnAppx is an rlContinuousGaussianTransitionFunction object, then evaluate returns the mean value and standard deviation of the observation probability distribution, while predict returns an observation sampled from this distribution. Similarly, for an rlContinuousGaussianRewardFunction object, evaluate returns the mean value and standard deviation of the reward probability distribution, while predict returns a reward sampled from this distribution. Finally, if fcnAppx is an rlIsDoneFunction object, then evaluate returns the probabilities of the termination status being false or true, respectively, while predict returns a predicted termination status sampled with these probabilities.
state — Updated state of function approximator object
cell array

Next state of the function approximator object, returned as a cell array. If fcnAppx does not use a recurrent neural network (which is the case for environment function approximators), then state is an empty cell array.
You can set the state of the approximator to state using the setState function. For example:
critic = setState(critic,state);
Version History
Introduced in R2022a