
rlContinuousGaussianRewardFunction

Stochastic Gaussian reward function approximator object for neural network-based environment

Since R2022a

Description

When creating a neural network-based environment using rlNeuralNetworkEnvironment, you can specify the reward function approximator using an rlContinuousGaussianRewardFunction object. Do so when you do not know a ground-truth reward signal for your environment and you expect the reward signal to be stochastic.

The reward function object uses a deep neural network as an internal approximation model to predict the reward signal for the environment given one of the following input combinations.

  • Observations, actions, and next observations

  • Observations and actions

  • Actions and next observations

  • Next observations

To specify a deterministic reward function approximator, use an rlContinuousDeterministicRewardFunction object.

Creation

Description


rwdFcnAppx = rlContinuousGaussianRewardFunction(net,observationInfo,actionInfo,Name=Value) creates a stochastic reward function using the deep neural network net and sets the ObservationInfo and ActionInfo properties.

When creating a reward function, you must specify the names of the deep neural network inputs using the ObservationInputNames, ActionInputNames, and NextObservationInputNames name-value arguments, in a combination that corresponds to one of the input combinations listed above.

You must also specify the names of the deep neural network outputs using the RewardMeanOutputNames and RewardStandardDeviationOutputNames name-value arguments.

You can also specify the UseDevice property using an optional name-value argument. For example, to use a GPU for prediction, specify UseDevice="gpu".
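
As an illustration, the following is a minimal sketch of creating an approximator whose reward depends only on the next observation. The specification sizes and the layer and variable names (nextObs, rwdMean, rwdStd, and so on) are assumptions chosen for this sketch, not required values.

% Minimal sketch (assumed names and sizes): the reward depends only on the
% next observation, and the network outputs a reward mean and a reward
% standard deviation.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);

trunk = [featureInputLayer(obsInfo.Dimension(1),Name="nextObs")
    fullyConnectedLayer(16,Name="fc")
    reluLayer(Name="relu")];
meanHead = fullyConnectedLayer(1,Name="rwdMean");
stdHead = [fullyConnectedLayer(1,Name="rwdStdFc")
    softplusLayer(Name="rwdStd")];   % keeps the standard deviation positive

lg = layerGraph(trunk);
lg = addLayers(lg,meanHead);
lg = addLayers(lg,stdHead);
lg = connectLayers(lg,"relu","rwdMean");
lg = connectLayers(lg,"relu","rwdStdFc");
net = dlnetwork(lg);

rwdFcnAppx = rlContinuousGaussianRewardFunction(net,obsInfo,actInfo, ...
    NextObservationInputNames="nextObs", ...
    RewardMeanOutputNames="rwdMean", ...
    RewardStandardDeviationOutputNames="rwdStd");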

Input Arguments

Deep neural network, specified as a dlnetwork object. The network must have two output layers that each return a scalar value, one for the reward mean and one for the reward standard deviation.

The input layer names for this network must match the input names specified using the ObservationInputNames, ActionInputNames, and NextObservationInputNames name-value arguments. The dimensions of the input layers must match the dimensions of the corresponding observation and action specifications in ObservationInfo and ActionInfo, respectively.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: ObservationInputNames="velocity"

Observation input layer names, specified as a string or string array. Specify ObservationInputNames when you expect the reward signal to depend on the current environment observation.

The number of observation input names must match the length of ObservationInfo, and the order of the names must match the order of the specifications in ObservationInfo.

Action input layer names, specified as a string or string array. Specify ActionInputNames when you expect the reward signal to depend on the current action value.

The number of action input names must match the length of ActionInfo, and the order of the names must match the order of the specifications in ActionInfo.

Next observation input layer names, specified as a string or string array. Specify NextObservationInputNames when you expect the reward signal to depend on the next environment observation.

The number of next observation input names must match the length of ObservationInfo, and the order of the names must match the order of the specifications in ObservationInfo.

Reward mean output layer name, specified as a string.

Reward standard deviation output layer name, specified as a string.

Properties

This property is read-only.

Observation specifications, specified as an rlNumericSpec object or an array of such objects. Each element in the array defines the properties of an environment observation channel, such as its dimensions, data type, and name.

You can extract the observation specifications from an existing environment or agent using getObservationInfo. You can also construct the specifications manually using rlNumericSpec.

This property is read-only.

Action specifications, specified as an rlFiniteSetSpec or rlNumericSpec object. This object defines the properties of the environment action channel, such as its dimensions, data type, and name.

Note

Only one action channel is allowed.

You can extract the action specifications from an existing environment or agent using getActionInfo. You can also construct the specification manually using rlFiniteSetSpec or rlNumericSpec.

Computation device used to perform operations such as gradient computation, parameter updates, and prediction during training and simulation, specified as either "cpu" or "gpu".

the "gpu" option requires both parallel computing toolbox™ software and a cuda®-enabled nvidia® gpu. for more information on supported gpus see gpu computing requirements (parallel computing toolbox).

You can use gpuDevice (Parallel Computing Toolbox) to query or select a local GPU device to be used with MATLAB®.

Note

Training or simulating a network on a GPU involves device-specific numerical round-off errors. These errors can produce different results compared to performing the same operations using a CPU.
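
For example, the following sketch selects the device at creation time. It reuses the network and the assumed layer names from the sketch in the Creation section, and only requests the GPU when one is actually available.

% Sketch: opt in to GPU prediction only when a supported device exists.
% canUseGPU returns true when Parallel Computing Toolbox and a supported
% GPU are available.
if canUseGPU
    device = "gpu";
else
    device = "cpu";
end
rwdFcnAppx = rlContinuousGaussianRewardFunction(net,obsInfo,actInfo, ...
    NextObservationInputNames="nextObs", ...
    RewardMeanOutputNames="rwdMean", ...
    RewardStandardDeviationOutputNames="rwdStd", ...
    UseDevice=device);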

Object Functions

rlNeuralNetworkEnvironment    Environment model with deep neural network transition models
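
As a rough sketch of where this object fits, the reward approximator is typically passed to rlNeuralNetworkEnvironment together with transition and is-done approximators. The variable names below are assumptions, and txnFcnAppx and isDoneFcnAppx must be created separately.

% Sketch: assemble a neural network environment from approximator objects.
% txnFcnAppx is a transition function approximator and isDoneFcnAppx is an
% is-done function approximator, both assumed to exist already.
env = rlNeuralNetworkEnvironment(obsInfo,actInfo, ...
    txnFcnAppx,rwdFcnAppx,isDoneFcnAppx);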

Examples

Create an environment interface and extract the observation and action specifications. Alternatively, you can create the specifications using rlNumericSpec and rlFiniteSetSpec.

env = rlPredefinedEnv("CartPole-Continuous");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create a deep neural network. The network has three input channels: one for the current observations, one for the current action, and one for the next observations. The two output channels return the mean and standard deviation of the predicted reward.

statePath = featureInputLayer(obsInfo.Dimension(1),Name="obs");
actionPath = featureInputLayer(actInfo.Dimension(1),Name="action");
nextStatePath = featureInputLayer(obsInfo.Dimension(1),Name="nextObs");
commonPath = [concatenationLayer(1,3,Name="concat")
    fullyConnectedLayer(32,Name="fc")
    reluLayer(Name="relu1")
    fullyConnectedLayer(32,Name="fc2")];
meanPath = [reluLayer(Name="rewardMeanRelu")
    fullyConnectedLayer(1,Name="rewardMean")];
stdPath = [reluLayer(Name="rewardStdRelu")
    fullyConnectedLayer(1,Name="rewardStdFc")
    softplusLayer(Name="rewardStd")];
rwdNet = layerGraph(statePath);
rwdNet = addLayers(rwdNet,actionPath);
rwdNet = addLayers(rwdNet,nextStatePath);
rwdNet = addLayers(rwdNet,commonPath);
rwdNet = addLayers(rwdNet,meanPath);
rwdNet = addLayers(rwdNet,stdPath);
rwdNet = connectLayers(rwdNet,"nextObs","concat/in1");
rwdNet = connectLayers(rwdNet,"action","concat/in2");
rwdNet = connectLayers(rwdNet,"obs","concat/in3");
rwdNet = connectLayers(rwdNet,"fc2","rewardMeanRelu");
rwdNet = connectLayers(rwdNet,"fc2","rewardStdRelu");
plot(rwdNet)


Create a dlnetwork object.

rwdNet = dlnetwork(rwdNet);

Create a stochastic reward function object.

rwdFcnAppx = rlContinuousGaussianRewardFunction(...
    rwdNet,obsInfo,actInfo,...
    ObservationInputNames="obs",...
    ActionInputNames="action", ...
    NextObservationInputNames="nextObs", ...
    RewardMeanOutputNames="rewardMean", ...
    RewardStandardDeviationOutputNames="rewardStd");

Using this reward function object, you can predict the next reward value based on the current observation, action, and next observation. For example, predict the reward for a random observation, action, and next observation. The reward value is sampled from a Gaussian distribution with the mean and standard deviation output by the reward network.

obs = rand(obsInfo.Dimension);
act = rand(actInfo.Dimension);
nextObs = rand(obsInfo.Dimension(1),1);
predRwd = predict(rwdFcnAppx,{obs},{act},{nextObs})
predRwd = single
    -0.1308

You can obtain the mean value and standard deviation of the Gaussian distribution for the predicted reward using evaluate.

predRwdDist = evaluate(rwdFcnAppx,{obs,act,nextObs})
predRwdDist=1×2 cell array
    {[-0.0995]}    {[0.6195]}
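
For illustration, the value returned by predict corresponds to drawing one sample from a Gaussian distribution with these two parameters. A hypothetical manual equivalent, using the assumed variable names below, is:

% Sketch: sample a reward manually from the returned mean and standard
% deviation, which is comparable to the value returned by predict.
rwdMean = predRwdDist{1};
rwdStd = predRwdDist{2};
manualRwd = rwdMean + rwdStd*randn;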

Version History

Introduced in R2022a
