Train Reinforcement Learning Agent for Simple Contextual Bandit Problem
This example shows how to solve a contextual bandit problem [1] using reinforcement learning by training DQN and Q-learning agents. For more information on these agents, see Deep Q-Network (DQN) Agents and Q-Learning Agents.
In contextual bandit problems, an agent selects an action given the initial observation (context) and receives a reward, after which the episode terminates. Hence, the agent action does not affect the next observation.
Contextual bandits can be used for various applications such as hyperparameter tuning, recommender systems, medical treatment, and 5G communication.
The following figure shows how multi-armed bandits and contextual bandits are special cases of reinforcement learning.
In bandit problems, the environment has no dynamics, so the reward is influenced only by the current action and (for contextual bandits) by the current observation. In these problems, the observation is also referred to as the context.
Neither rewards nor observations are influenced by any environment state (or by previous actions or observations), so the environment does not evolve along the time dimension and there is no sequential decision making. The problem then becomes one of finding the action that maximizes the current reward (given a context, if one is present). Single-armed bandit problems are special cases of multi-armed bandit problems in which the action is a scalar instead of a vector.
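To make this structure concrete, the following sketch shows what a single contextual bandit episode looks like using the environment and agent interfaces from Reinforcement Learning Toolbox. Here, env and agent are placeholders for the objects created later in this example.

% One contextual bandit episode: observe a context, act once, receive a reward.
% (env and agent are placeholders for objects created later in this example.)
context = reset(env);                        % sample the context (initial observation)
action = getAction(agent, {context});        % select an action given the context
[~,reward,isDone] = step(env, action{1});    % receive the reward; isDone is immediately true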
Environment
The contextual bandit environment in this example is defined as follows:
Observation (discrete): {1, 2}
The context (initial observation) is sampled randomly.
Action (discrete): {1, 2, 3}
Reward:
Rewards in this environment are stochastic. The probability of each observation and action pair is defined as follows.
1. s=1, a=1: r = 5 (with probability 0.3) or r = 2 (with probability 0.7)
2. s=1, a=2: r = 10 (with probability 0.1) or r = 1 (with probability 0.9)
3. s=1, a=3: r = 3.5 (deterministic)
4. s=2, a=1: r = 10 (with probability 0.2) or r = 2 (with probability 0.8)
5. s=2, a=2: r = 3 (deterministic)
6. s=2, a=3: r = 5 (with probability 0.5) or r = 0.5 (with probability 0.5)
Note that the agent does not know these distributions.
Is-done signal: Since this is a contextual bandit problem, each episode has only one step. Hence, the is-done signal is always 1.
Create Environment Interface
Create the contextual bandit environment using ToyContextualBanditEnvironment, located in this example folder.
env = ToyContextualBanditEnvironment;
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
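The ToyContextualBanditEnvironment class is provided with the example, so you do not need to implement it yourself. For reference, the following is a rough sketch of how such an environment could be written as a custom MATLAB environment class. The class name, the pick helper, and the two-point reward distributions are illustrative assumptions based on the expected-reward values used later in this example; the actual class in the example folder may differ.

classdef SketchContextualBanditEnv < rl.env.MATLABEnvironment
    % Illustrative sketch only; not the ToyContextualBanditEnvironment class
    % shipped with this example.
    properties
        Context = 1   % current context (initial observation)
    end
    methods
        function this = SketchContextualBanditEnv()
            % Discrete context {1,2} and discrete action {1,2,3}
            obsInfo = rlFiniteSetSpec([1 2]);
            actInfo = rlFiniteSetSpec([1 2 3]);
            this = this@rl.env.MATLABEnvironment(obsInfo,actInfo);
        end
        function [nextObs,reward,isDone,loggedSignals] = step(this,action)
            loggedSignals = [];
            s = this.Context;
            % Stochastic rewards consistent with the expectedRewards values
            % computed later in this example (assumed two-point distributions).
            if s == 1
                switch action
                    case 1, reward = pick(5,2,0.3);
                    case 2, reward = pick(10,1,0.1);
                    case 3, reward = 3.5;
                end
            else
                switch action
                    case 1, reward = pick(10,2,0.2);
                    case 2, reward = 3.0;
                    case 3, reward = pick(5,0.5,0.5);
                end
            end
            nextObs = s;     % the action does not affect the observation
            isDone = true;   % every episode has exactly one step
        end
        function [obs,loggedSignals] = reset(this)
            loggedSignals = [];
            this.Context = randi(2);   % sample the context uniformly
            obs = this.Context;
        end
    end
end

function r = pick(rHigh,rLow,p)
    % Return rHigh with probability p, otherwise rLow.
    if rand < p
        r = rHigh;
    else
        r = rLow;
    end
end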
Fix the random generator seed for reproducibility.
rng(1)
Create DQN Agent
Create a DQN agent with a default network structure using rlAgentInitializationOptions.
% Configure the DQN agent options.
agentOpts = rlDQNAgentOptions(...
    UseDoubleDQN = false, ...
    TargetSmoothFactor = 1, ...
    TargetUpdateFrequency = 4, ...
    MiniBatchSize = 64);
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 0.0005;

% Create the agent with a default network of 16 hidden units per layer.
initOpts = rlAgentInitializationOptions(NumHiddenUnit = 16);
dqnAgent = rlDQNAgent(obsInfo, actInfo, initOpts, agentOpts);
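You can optionally inspect the default critic network that the agent constructed from the initialization options. This sketch assumes that getModel returns a dlnetwork object, which can vary by toolbox release.

% Inspect the default critic network created by the agent.
% (Assumes getModel returns a dlnetwork object; behavior can vary by release.)
defaultCritic = getCritic(dqnAgent);
criticNet = getModel(defaultCritic);
summary(criticNet)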
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Train for 3000 episodes.
Since this is a contextual bandit problem and each episode has only one step, set MaxStepsPerEpisode to 1.
For more information, see rlTrainingOptions.
Train the agent using the train function. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
maxEpisodes = 3000;
trainOpts = rlTrainingOptions(...
    MaxEpisodes = maxEpisodes, ...
    MaxStepsPerEpisode = 1, ...
    Verbose = false, ...
    Plots = "training-progress", ...
    StopTrainingCriteria = "EpisodeCount", ...
    StopTrainingValue = maxEpisodes);

doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(dqnAgent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load("ToyContextualBanditDQNAgent.mat","dqnAgent")
end
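In addition to the validation below, you can estimate the trained agent's average reward by simulating many one-step episodes. The following is a sketch that assumes the rlSimulationOptions properties MaxSteps and NumSimulations and the standard experience output of sim; exact field names can vary between releases.

% Estimate the average reward over many one-step episodes (sketch).
simOpts = rlSimulationOptions(MaxSteps = 1, NumSimulations = 500);
experiences = sim(env, dqnAgent, simOpts);
episodeRewards = arrayfun(@(e) sum(e.Reward.Data), experiences);
averageReward = mean(episodeRewards)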
Validate DQN Agent
Assume that you know the reward distributions, so you can compute the optimal actions. Validate the agent's performance by comparing these optimal actions with the actions selected by the agent. First, compute the true expected rewards using the true distributions.
1. The expected reward of each action at s=1 is as follows:
E[r | s=1, a=1] = 0.3*5 + 0.7*2 = 2.9
E[r | s=1, a=2] = 0.1*10 + 0.9*1 = 1.9
E[r | s=1, a=3] = 3.5
Hence, the optimal action is 3 when s=1.
2. The expected reward of each action at s=2 is as follows:
E[r | s=2, a=1] = 0.2*10 + 0.8*2 = 3.6
E[r | s=2, a=2] = 3.0
E[r | s=2, a=3] = 0.5*5 + 0.5*0.5 = 2.75
Hence, the optimal action is 1 when s=2.
With enough sampling, the learned Q-values should approach the true expected rewards. Visualize the true expected rewards.
expectedRewards = zeros(2,3);
expectedRewards(1,1) = 0.3*5 + 0.7*2;
expectedRewards(1,2) = 0.1*10 + 0.9*1;
expectedRewards(1,3) = 3.5;
expectedRewards(2,1) = 0.2*10 + 0.8*2;
expectedRewards(2,2) = 3.0;
expectedRewards(2,3) = 0.5*5 + 0.5*0.5;
localPlotQValues(expectedRewards, "Expected Rewards")
Now, validate whether the DQN agent learns the optimal behavior.
If the state is 1, the optimal action is 3.
observation = 1;
getAction(dqnAgent,observation)
ans = 1x1 cell array
{[3]}
The agent selects the optimal action.
If the state is 2, the optimal action is 1.
observation = 2;
getAction(dqnAgent,observation)
ans = 1x1 cell array
{[1]}
The agent selects the optimal action. Thus, the DQN agent has learned the optimal behavior.
Next, compare the Q-value function to the true expected reward when selecting the optimal action.
% Get the critic.
figure(1)
dqnCritic = getCritic(dqnAgent);
qValues = zeros(2,3);
for s = 1:2
    qValues(s,:) = getValue(dqnCritic, {s});
end

% Visualize Q values.
localPlotQValues(qValues, "Q values")
The learned Q-values are close to the true expected rewards computed above.
Create Q-Learning Agent
Next, train a Q-learning agent. To create a Q-learning agent, first create a table using the observation and action specifications from the environment.
rng(1); % For reproducibility

% Create a table-based Q-value critic from the environment specifications.
qTable = rlTable(obsInfo, actInfo);
critic = rlQValueFunction(qTable, obsInfo, actInfo);

% Configure epsilon-greedy exploration and create the Q-learning agent.
opt = rlQAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 1;
opt.EpsilonGreedyExploration.EpsilonDecay = 0.0005;
qAgent = rlQAgent(critic,opt);
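These options imply a gradual decay of the exploration rate during training. As a rough check, the following sketch applies the documented per-step update Epsilon = Epsilon*(1 - EpsilonDecay), assuming the default EpsilonMin of 0.01, to see how much exploration remains after 3000 training steps.

% Approximate epsilon-greedy schedule implied by these options (sketch).
% Assumes the update Epsilon = Epsilon*(1 - EpsilonDecay) per learning step
% and the default EpsilonMin of 0.01.
epsilon = 1;
epsilonDecay = 0.0005;
epsilonMin = 0.01;
epsilonHistory = zeros(1,3000);
for k = 1:3000
    epsilonHistory(k) = epsilon;
    epsilon = max(epsilonMin, epsilon*(1 - epsilonDecay));
end
figure
plot(epsilonHistory)
xlabel("Training step")
ylabel("Epsilon")
title("Approximate exploration schedule")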
Train Q-Learning Agent
To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(qAgent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load("ToyContextualBanditQAgent.mat","qAgent")
end
Validate Q-Learning Agent
When the state is 1, the optimal action is 3.
observation = 1;
getAction(qAgent,observation)
ans = 1x1 cell array
{[3]}
The agent selects the optimal action.
When the state is 2, the optimal action is 1.
observation = 2;
getAction(qAgent,observation)
ans = 1x1 cell array
{[1]}
The agent selects the optimal action. Hence, the Q-learning agent has learned the optimal behavior.
Next, compare the Q-value function to the true expected reward when selecting the optimal action.
% Get the critic.
figure(2)
qCritic = getCritic(qAgent);
qValues = zeros(2,3);
for s = 1:2
    for a = 1:3
        qValues(s,a) = getValue(qCritic, {s}, {a});
    end
end

% Visualize Q values.
localPlotQValues(qValues, "Q values")
Again, the learned Q-values are close to the true expected rewards. The Q-values for the deterministic rewards, Q(s=1, a=3) and Q(s=2, a=2), are the same as the true expected rewards. Note that the corresponding Q-values learned by the DQN network, while close, are not identical to the true values. This happens because the DQN agent uses a neural network instead of a table as its internal function approximator.
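Because the Q-learning agent uses a table-based critic, you can also read the learned values directly from the critic parameters. This sketch assumes that getLearnableParameters returns the table entries as the first element of a cell array, which may differ between releases.

% Read the learned Q-table directly from the critic parameters (sketch).
% (Assumes the table entries are in the first cell; format may vary by release.)
tableParams = getLearnableParameters(qCritic);
learnedTable = tableParams{1}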
Local Function
function localPlotQValues(qValues, titleText)
    % Visualize Q values
    figure;
    imagesc(qValues,[1,4])
    colormap("autumn")
    title(titleText)
    colorbar
    set(gca,"XTick",1:3,"XTickLabel",{"a=1", "a=2", "a=3"})
    set(gca,"YTick",1:2,"YTickLabel",{"s=1", "s=2"})

    % Plot values on the image
    x = repmat(1:size(qValues,2), size(qValues,1), 1);
    y = repmat(1:size(qValues,1), size(qValues,2), 1)';
    qValuesStr = num2cell(qValues);
    qValuesStr = cellfun(@num2str, qValuesStr, UniformOutput=false);
    text(x(:), y(:), qValuesStr, HorizontalAlignment="center")
end
Reference
[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press, 2018.