policies and learning algorithms | reinforcement learning, part 3
from the series: reinforcement learning
brian douglas
this video provides an introduction to the algorithms that reside within the agent. we’ll cover why we use neural networks to represent functions and why you may have to set up two neural networks in a powerful family of methods called actor-critic.
in the last video, we focused mainly on setting up the environment: what the environment includes and how rewards from the environment are used to guide the agent’s behavior. in this video, we’re going to focus on setting up the agent—specifically, the last three steps in the rl workflow.
now, to fully cover these steps is easily several semesters’ worth of material. luckily, that’s not the goal of this 17-minute video. instead, i want to introduce a few topics at a high level that will give you a general understanding of what’s going on so that it makes more sense when you dive in with other, more complete sources. so with that being said, in this video, i will address these two main questions:
one, why use neural networks to represent functions as opposed to something else, like tables or transfer functions? and two, why you may have to set up two neural networks and how they complement each other in a powerful family of methods called actor-critic. so let’s get to it. i’m brian, and welcome to a matlab tech talk.
last time, we ended by explaining how a policy is a function that takes in state observations and outputs actions. and then i briefly introduced the idea of why tables and specifically defined functions aren’t a great solution in many cases. this is because tables are impractical when the state and action space get really large and also because it’s difficult to craft the right function structure for complex environments. using neural networks, however, can address both of these problems.
a neural network is a group of nodes, or artificial neurons, that are connected in a way that allows them to be a universal function approximator. this means that given the right combination of nodes and connections, we can set up the network to mimic any input and output relationship. this is good for us because we can use, say, the hundreds of pixel values in a robotic vision system as the input into this function, and the output could be the actuator commands that drive the robot’s arms and legs. even though the function might be extremely complex, we know that there is a neural network of some kind that can achieve it.
if you’re not familiar with the mathematics of a neural network, i highly recommend the four-part series by 3 blue 1 brown on the topic. he provides a fantastic visualization of what’s going on within the network, so i’ll skip most of that discussion here. i do, however, want to highlight just a few things.
on the left are the input nodes, one for each input to the function, and on the right are the output nodes. in between are columns of nodes called hidden layers. this network has 2 inputs, 2 outputs, and 2 hidden layers of 3 nodes each. with a fully connected network, there is an arrow, or weighted connection, from each input node to each node in the next layer, and then from those nodes to the layer after that, and so on until the output nodes. the value of a node is equal to the sum of each input node value times its respective weighting factor, plus a bias. we can perform this calculation for every node in a layer and write it out in compact matrix form as a system of linear equations.
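for a single layer, that compact matrix form is just a linear system: if a is the vector of node values from the previous layer, w is the matrix of weights, and b is the vector of biases, then the vector of node values z (before the activation step discussed next) is

$$ z = W a + b $$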
now, if we calculated the values of the nodes simply like this, then fed them as inputs into the next layer to perform the same type of linear operations, and again to the output layer, you’d probably have a concern. how in the world could a bunch of linear equations act as a universal function approximator? specifically, how could it represent a nonlinear function? well, that’s because i left out a step, which is possibly one of the most important aspects of an artificial neural network. after the value of a node has been calculated, an activation function is applied that changes the value of the node in some way. two popular activation functions are the sigmoid, which squishes the node value down to between 0 and 1, and the relu function, which basically zeroes out any negative node values. there are a number of different activation functions, but what they all have in common is that they are nonlinear, which is critical to making a network that can approximate any function.
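to make that concrete, here is a minimal sketch in python (using numpy) of the forward pass through a small fully connected network like the 2-input, 2-output, two-hidden-layer one described above. the weights are random placeholders, not a trained network, and the choice of activation for each layer is just for illustration.

```python
import numpy as np

def relu(x):
    # zero out any negative node values
    return np.maximum(0.0, x)

def sigmoid(x):
    # squish node values down to between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

def forward(obs, weights, biases, activation=relu):
    # each layer: a linear operation (weights @ values + biases), then a nonlinear activation
    a = obs
    for W, b in zip(weights[:-1], biases[:-1]):
        a = activation(W @ a + b)
    # leave the output layer linear here; in practice it depends on what the outputs represent
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ a + b_out

# a tiny 2-input, 2-output network with 2 hidden layers of 3 nodes each and random placeholder weights
rng = np.random.default_rng(0)
sizes = [2, 3, 3, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

print(forward(np.array([0.5, -1.0]), weights, biases))                      # relu hidden layers
print(forward(np.array([0.5, -1.0]), weights, biases, activation=sigmoid))  # sigmoid hidden layers
```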
now, as to why nonlinearity makes this possible, i really like the explanations from brendan fortuner and michael nielsen, who show how it can be demonstrated with relu and sigmoid activations, respectively. i’ve linked to their blogs in the description.
okay, so let’s remind ourselves where we are. we want to find a function that can take in a large number of observations and transform them into a set of actions that will control some nonlinear environment. and since the structure of this function is often too complex for us to solve for directly, we want to approximate it with a neural network that learns the function over time. it’s tempting to think that we can just plunk any network in, let loose a reinforcement learning algorithm to find the right combination of weights and biases, and be done. unfortunately, as usual, that’s not the case.
we have to make a few choices about our neural network ahead of time in order to make sure it’s complex enough to approximate the function we’re looking for, but not so complex as to make training impossible or impossibly slow. for example, as we’ve already seen, we need to choose an activation function, the number of hidden layers, and the number of neurons in each layer. but beyond that, we also have control over the internal structure of the network. should it be fully connected like the network i’ve drawn, or should the connections skip layers like in a residual neural network? should they loop back on themselves to create internal memory, as in recurrent neural networks? should groups of neurons work together like in a convolutional neural network? and so on.
we have a lot of choices, but as with other control techniques there isn’t one right approach. a lot of times, it comes down to starting with a network structure that has already worked for the type of problem you’re trying to solve and tweaking it from there.
now i keep saying that we use these neural nets to represent a policy in the agent, but as for exactly what that means, we need to look at a high-level description of a few different classes of reinforcement learning algorithms: policy function based, value function based, and actor-critic. the following will definitely be an oversimplification, but if you’re just trying to get a basic understanding of the ways rl can be approached, i think this will help get you started.
at a high level, i think policy function-based learning algorithms make a lot of sense because we’re trying to train a neural network that takes in the state observations and outputs an action. this is closely related to what a controller is doing in a control system. we’ll call this neural network the actor because it is directly telling the agent how to act.
the structure looks pretty straightforward, so now the question is, how do we approach training this type of network? to get a general feel for how this is done, let’s look at the atari game breakout.
if you’re not familiar, breakout is this game where you are trying to eliminate bricks using a paddle to direct a bouncing ball. this game only has three actions: move the paddle left, right, or not at all, and a near-continuous state space that includes the position of the paddle, the position and velocity of the ball, and the location of the remaining bricks.
for this example, we’ll say the observation is a screen capture of the game, feeding in one frame at a time, and therefore there is one input into our neural net for every pixel. now, with the network set, there are many approaches to training it, but i’m going to highlight one broad approach that has a lot of variations: policy gradient methods. policy gradient methods can work with a stochastic policy, which means that rather than producing a deterministic command, take a left or take a right, the policy would output the probability of taking a left or the probability of taking a right. a stochastic policy takes care of the exploration/exploitation problem because exploring is built into the probabilities. now, when we learn, the agent just needs to update the probabilities. is taking a left a better option than a right? then push the probability of taking a left in this state a little bit higher. over time, the agent will nudge these probabilities around in the direction that produces the most reward.
so how does it know whether the actions were good or not? the idea is this: execute the current policy, collect reward along the way, and update the network to increase the probabilities of actions that increased the reward. if the paddle went left, missed the ball, and caused a negative reward, then change the neural network to increase the probability of moving the paddle right the next time the agent is in that state. essentially, the learning algorithm is taking the derivative of the reward with respect to each weight and bias and adjusting them in the direction that increases reward. in this way, it is moving the weights and biases of the network to ascend the reward slope. this is why the term gradient is used in the name.
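here is a minimal sketch of that loop in python, a reinforce-style policy gradient update for a small softmax policy. the linear policy, the feature and action counts, and the env_reset/env_step callbacks are placeholders for illustration, not the breakout setup or any particular library api.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_actions = 8, 3                 # placeholder sizes; e.g. 3 actions: left, right, no move
theta = np.zeros((n_actions, n_obs))    # a simple linear policy stands in for a neural network
alpha, gamma = 0.01, 0.99               # learning rate and discount factor

def policy(obs):
    # stochastic policy: output a probability for each action (softmax over scores)
    scores = theta @ obs
    p = np.exp(scores - scores.max())
    return p / p.sum()

def run_episode(env_reset, env_step, max_steps=500):
    # execute the current policy, remembering states, actions, and rewards along the way
    obs, trajectory = env_reset(), []
    for _ in range(max_steps):
        probs = policy(obs)
        action = rng.choice(n_actions, p=probs)   # exploration is built into the probabilities
        next_obs, reward, done = env_step(action)
        trajectory.append((obs, action, reward))
        obs = next_obs
        if done:
            break
    return trajectory

def update_policy(trajectory):
    # push up the probability of actions in proportion to the reward that followed them
    global theta
    G = 0.0
    for obs, action, reward in reversed(trajectory):
        G = reward + gamma * G                    # discounted return from this step onward
        probs = policy(obs)
        grad_log = -np.outer(probs, obs)          # gradient of log-probability for a softmax policy
        grad_log[action] += obs
        theta += alpha * G * grad_log             # ascend the reward slope
```

in a real policy gradient agent, the linear policy would be replaced by a neural network and the gradient would come from backpropagation, but the structure of collect-then-nudge is the same.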
the math behind this is more than i want to go into in this video, but i’d encourage you to read up on the policy gradient theorem to see how it’s possible to find the gradient without actually taking a derivative. i’ve left a link in the description.
one of the downsides of policy gradient methods is that the naive approach of just following the direction of steepest ascent can converge on a local maximum rather than the global one. they can also converge slowly due to their sensitivity to noisy measurements, which happens, for example, when it takes a lot of sequential actions to receive a reward and the resulting cumulative reward has high variance between episodes. imagine having to string together hundreds or thousands of sequential actions before the agent ever receives a single reward. you can see how time consuming it would be to train an agent using such an extremely sparse reward system. keep that in mind, and we’ll come back to this problem by the end of this video.
let’s move on to value function-based learning algorithms. for these, let’s start with an example using the popular grid world as our environment. in this environment there are two discrete state variables: the x grid location and the y grid location. there is only one state in grid world that has a positive reward, and the rest have negative rewards. the idea is that we want our agent to collect the most reward, which means getting to that positive reward in the fewest moves possible. the agent can only move one square at a time, either up, down, left, or right. to us powerful humans, it’s easy to see exactly which way to go to get to the reward. however, we have to keep in mind that the agent knows nothing about the environment. it just knows that it can take one of four actions, and it gets its location and reward back from the environment after it takes an action.
with a value function-based agent, a function would take in the state and one of the possible actions from that state, and output the value of taking that action. the value is the sum of the total discounted rewards from that state on, like we talked about in the first video. in this way, the policy would simply be to check the value of every possible action and choose the one with the highest value. we can think of this function as a critic since it’s looking at the possible actions and criticizing the agent’s choices. since there are a finite number of states and actions in grid world, we can use a lookup table to represent this function. this is called the q-table, where there is a value for every state and action pairing.
so how does the agent learn these values? well, at first we can initialize the table to all zeroes, so every action looks the same to the agent. this is where the exploration rate epsilon comes in; it lets the agent occasionally take a random action. after it takes that action, it gets to a new state and collects the reward from the environment. the agent uses that reward, that new information, to update the value of the action it just took. and it does that using the famous bellman equation.
the bellman equation allows the agent to solve the q-table over time by breaking the whole problem up into multiple simpler steps. rather than solving for the true value of a state/action pair in one step, the agent, through dynamic programming, updates the value of a state/action pair each time it’s visited. let me try to describe this equation in words so that hopefully it makes some sense.
after the agent has taken an action, it receives a reward. value is more than the instant reward from an action; it’s the maximum expected return into the future. therefore, the value of the state/action pair is the reward that the agent just received, plus how much reward the agent expects to collect going forward. and we discount the future rewards by gamma so that, as we talked about, the agent doesn’t rely too much on rewards far in the future. this is now the new value of the state/action pair, s, a. and so we compare this value to what was in the q-table to get the error, or how far off the agent’s prediction was. the error is multiplied by a learning rate and the resulting delta value is added to the old estimate.
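in symbols, that is the q-learning form of the bellman update, with learning rate $\alpha$ and discount factor $\gamma$:

$$ Q(s,a) \leftarrow Q(s,a) + \alpha \big[\, r + \gamma \max_{a'} Q(s',a') - Q(s,a) \,\big] $$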
when the agent finds itself back in the same state a second time, it’s going to have this updated value, and it will tweak it again when it chooses the same action. it’ll keep doing this over and over until the true value of every state/action pair is sufficiently known to exploit the optimal path.
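here is a minimal sketch of that loop in python for a grid world like the one described. the grid size, the reward values, and the hyperparameters are placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rows, n_cols, n_actions = 5, 5, 4        # placeholder grid size; actions: up, down, left, right
Q = np.zeros((n_rows, n_cols, n_actions))  # the q-table, initialized to all zeroes
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate

def step(state, action):
    # placeholder environment: move one square, reward +10 at the goal, -1 everywhere else
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = state
    dr, dc = moves[action]
    r = min(max(r + dr, 0), n_rows - 1)
    c = min(max(c + dc, 0), n_cols - 1)
    goal = (0, n_cols - 1)
    reward = 10.0 if (r, c) == goal else -1.0
    return (r, c), reward, (r, c) == goal

for _episode in range(500):
    state, done = (n_rows - 1, 0), False
    for _ in range(200):                   # cap the episode length
        # epsilon-greedy: usually exploit the q-table, sometimes explore with a random action
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # bellman update: nudge the old estimate toward reward + discounted best future value
        td_target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state][action] += alpha * (td_target - Q[state][action])
        state = next_state
        if done:
            break
```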
let’s extend this idea to an inverted pendulum, where there are still two states, angle and angular rate, except now the states are continuous. value functions can handle continuous state spaces, just not with a lookup table. so we’re going to need a neural network. the idea is the same: we input the state observations and an action, and the neural network returns a value.
you can see that this setup doesn’t work well for continuous action spaces, because how could you possibly try every one of an infinite number of actions and find the maximum value? even for a large discrete action space this becomes computationally expensive, but i’m getting ahead of myself. for now, let’s just say that the action space is discrete and the agent can choose one of 5 possible torque values. that seems reasonable.
so here’s the idea. when you feed the network the observed state and an action, it returns a value, and our policy would again be to check the value for every possible action and take the one with the highest value. just like with grid world, our neural network would initially be set to junk values, and the learning algorithm would use a version of the bellman equation to determine what the new value should be and update the weights and biases of the network accordingly. once the agent has explored enough of the state space, it will have a good approximation of the value function and can select the optimal action given any state.
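a quick sketch of that policy step, where q_network is a hypothetical stand-in for the trained critic network and the five candidate torque values are placeholders:

```python
import numpy as np

torques = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # 5 discrete torque choices (placeholder values)

def q_network(state, action):
    # hypothetical stand-in for a trained value network that takes the state observations
    # and a candidate action as inputs and returns the estimated value of that pair
    return 0.0

def greedy_action(state):
    # the policy: check the value of every possible action and take the one with the highest value
    values = [q_network(state, a) for a in torques]
    return torques[int(np.argmax(values))]

state = np.array([0.1, 0.0])   # angle and angular rate of the pendulum
print(greedy_action(state))
```

with only 5 candidate torques this loop is cheap, but it’s easy to see why it breaks down as the action space grows or becomes continuous.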
that’s pretty neat, right? but we did encounter a drawback in that the action space needs to be relatively small. in control problems, however, we often have a continuous action space, like being able to apply a continuous range of torques to the inverted pendulum.
this brings us to a merging of the two techniques into a class of algorithms called actor-critic. the actor is a network that is trying to take what it thinks is the best action given the current state, just like we had with the policy function method, and the critic is a second network that is trying to estimate the value of the state and the action that the actor took, like we had with the value function-based methods. this works for continuous action spaces because the critic only needs to look at a single action, the one that the actor took, rather than trying to find the best action by evaluating all of them.
here’s basically how it works. the actor chooses an action in the same way that a policy function algorithm would, and it’s applied to the environment. the critic estimates what it thinks the value of that state and action pair is, and then it uses the reward from the environment to determine how accurate its value prediction was. the error is the difference between the new estimated value of the previous state and the old value of the previous state from the critic network. the new estimated value is based on the received reward plus the discounted value of the current state. the critic can use this error as a sense of whether things went better or worse than it expected.
the critic uses this error to update itself, in the same way a value function would, so that it has a better prediction the next time it’s in this state. the actor also updates itself using the response from the critic, and the error term if it’s available, so that it can adjust its probability of taking that action again in the future.
that is, the policy now ascends the reward slope in the direction that the critic recommends rather than using the rewards directly.
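a minimal sketch of one actor-critic learning step, with linear function approximators standing in for the two neural networks; the names and hyperparameters here are placeholders, not a specific library api.

```python
import numpy as np

n_obs, n_actions = 4, 3
w_critic = np.zeros(n_obs)               # critic: estimates the value of a state
theta = np.zeros((n_actions, n_obs))     # actor: produces action probabilities
alpha_actor, alpha_critic, gamma = 0.01, 0.1, 0.99

def action_probs(obs):
    # the actor's stochastic policy (softmax over scores)
    scores = theta @ obs
    p = np.exp(scores - scores.max())
    return p / p.sum()

def actor_critic_step(obs, action, reward, next_obs, done):
    # one learning step, run after the actor's action has been applied to the environment
    global w_critic, theta
    # the critic's old estimate of the previous state, and a new estimate based on the
    # received reward plus the discounted value of the current state
    v_old = w_critic @ obs
    v_new = reward + gamma * (w_critic @ next_obs) * (not done)
    td_error = v_new - v_old                 # did things go better or worse than the critic expected?
    # critic update: move its value estimate toward the new estimate
    w_critic += alpha_critic * td_error * obs
    # actor update: adjust the action probabilities in the direction the critic recommends
    probs = action_probs(obs)
    grad_log = -np.outer(probs, obs)
    grad_log[action] += obs
    theta += alpha_actor * td_error * grad_log
    return td_error
```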
so both the actor and the critic are neural networks that are trying to learn the optimal behavior. the actor is learning the right actions using feedback from the critic to know which actions are good and which are bad, and the critic is learning the value function from the received rewards so that it can properly criticize the actions that the actor takes. this is why you might have to set up two neural networks in your agent; each one plays a very specific role.
with actor-critic methods, the agent can take advantage of the best parts of policy and value function algorithms. actor-critics can handle both continuous state and action spaces, and speed up learning when the returned reward has high variance. i’ll show an example of using an actor-critic-based algorithm to get a bipedal robot to walk in the next video.
before i close out this video, i want to briefly discuss the last step in the rl workflow: deploying the algorithm on the target hardware. so far the agent has learned offline by interacting with a simulated environment. once the policy is sufficiently optimal, the learning stops and the static policy is deployed onto the target just like you would any developed control law. however, we also have the ability to deploy the reinforcement learning algorithm along with the policy and continue learning on the target with the actual environment. this is important for environments that are hard to model accurately or that are slowly changing over time, where the agent needs to keep learning occasionally so that it can adjust to those changes.
okay, that’s where i’m going to leave this video for now. if you don’t want to miss any future tech talk videos, don’t forget to subscribe to this channel. also, if you want to check out my channel, control system lectures, i cover more control topics there as well. thanks for watching, and i’ll see you next time.