the walking robot problem | reinforcement learning, part 4
from the series: reinforcement learning
brian douglas
this video shows how to use the reinforcement learning workflow to get a bipedal robot to walk. it also shows how to modify the default example, by adding a reference signal, so that it looks more like a traditional control problem. it then considers how an rl-equipped agent can replace parts of a traditional control system rather than serving as an end-to-end design. finally, some of the limitations of this design are shown.
now that we have an understanding of the reinforcement learning workflow, in this video i want to show how that workflow is put to use in getting a bipedal robot to walk using an rl-equipped agent. we’re going to use the walking robot example from the matlab and simulink robotics arena that you can find on github. i’ve left a link to it in the description. this example comes with an environment model where you can adjust training parameters, train an agent, and visualize the results. in this video, we’ll also look at how we can modify this example to make it look more like how we would set up a traditional control problem and then show some of the limitations of this design. so i hope you stick around for this because i think it’ll help you understand how to use reinforcement learning for typical control applications. i’m brian, and welcome to a matlab tech talk.
let’s start with a quick overview of the problem. the high-level goal is to get a two-legged robot to walk, somewhat like how a human would. our job as designers is to determine the actions to take that correctly move the legs and body of the robot. the actions we can take are the motor torque commands for each joint; there’s the left and right ankle, the left and right knee, and left and right hip joints. so there are six different torque commands that we need to send at any given time.
the robot body and legs, along with the world in which it operates, make up the environment. the observations from the environment are based on the type and locations of sensors as well as any other data that is generated by the software. for this example, we’re using 31 different observations. these are the y and z body positions; the x, y, and z body velocities; the body orientation and angular rate; the angles and angular rates of the six joints; and the contact forces between the feet and the ground. those are the sensed observations. we’re also feeding back the six actions that we commanded in the previous time step, which are stored in a buffer in software. so, in all, our control system takes in these 31 observations and has to calculate the values of six motor torques continuously. you can start to see how complex the logic has to be, even for this relatively simple system.
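to make this concrete, here’s a minimal sketch of how the observation and action spaces could be described with reinforcement learning toolbox specs. the sizes come from the description above, but the names and the normalized torque limits are assumptions rather than values pulled from the github example.

numObs = 31;   % 25 sensed signals plus the 6 previous torque commands
numAct = 6;    % one torque command per joint

obsInfo = rlNumericSpec([numObs 1]);
obsInfo.Name = 'observations';

actInfo = rlNumericSpec([numAct 1], 'LowerLimit', -1, 'UpperLimit', 1);  % assumed normalized torque range
actInfo.Name = 'joint torques';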
as i mentioned in the previous videos, rather than try to design the logic, the loops, the controllers, the parameters, and all of that stuff using the traditional tools of control theory, we can replace this whole massive function, end to end, with a reinforcement learning agent; one that uses an actor network to map these 31 observations to the six actions, and a critic network to make training the actor more efficient.
as we know, the training process requires a reward function—something that tells the agent how it’s doing so that it can learn from its actions. and i want to reason through what should exist in the reward function by thinking about the conditions that are important for a walking robot. this might be how you’d approach building up a reward function if you didn't know where to start. now, i’ll show you the results of training with this function as we create it, so you can see how the changes impact the solution; however, i’m not going to cover how to run the model because there is already a great video by sebastian castro that does that. so if you’re interested in trying all of this out on your own, i’d recommend checking out the link in the description below. all right, so on to the reward.
where to start? we obviously want the body of the robot to move forward; otherwise it will just stand there. but instead of distance, we can reward it for its forward velocity. that way there is a desire for the robot to walk faster rather than slower. after training with this reward, we can see that the robot dives forward to get that quick burst of speed at the beginning and then falls over and doesn’t really make it anywhere. it may eventually figure out how to make it further with this reward, but it was taking a long time to converge and not making a lot of progress, so let’s think about what we can add to help with training. we could penalize the robot for falling down so that diving forward isn’t as attractive. so if it stays on its feet for a longer time, or if more sample times elapse before the simulation ends, then the agent should get more reward.
let’s see how this does. it’s got a bit of a hop at the beginning before ultimately falling down again. perhaps if i had let this agent train longer, i could have a robot that jumps across the world like a frog, which is cool, but that’s not what i want. it’s not enough that the robot moves forward and doesn’t fall; we want some semblance of walking instead of hopping or crouch walking. so to fix this, we should also reward the agent for keeping the body as close to a standing height as possible.
let’s check out this reward function. okay, this is looking better, but the solution isn’t really natural looking. it stops occasionally to sort of jitter its legs back and forth, and most of the time it’s dragging its right leg like a zombie and putting all of the actuation in the left leg. and this isn’t ideal if we’re concerned with actuator wear and tear or the amount of energy it takes to run this robot. we’d like both legs to do equal work and not to overuse the actuators with a lot of jittering. so to fix this, we can reward the agent for minimizing actuator effort. this should reduce extra jittering and balance the effort so that each leg has a share of the load.
let’s check out our trained agent. okay, we are getting close here. this is looking pretty good. except now we have one final problem. we want to keep the robot moving in a straight line and not veering off to the right or left like it’s doing here, so we should reward it for staying close to the x-axis.
this is our final reward, and training with it takes about 3500 simulations. so if we set this up in our model and unleash the simulation on a computer with multiple cores, a gpu, or a cluster of computers, then after a few hours of training we’ll have a solution: a robot that walks in a straight line in a human-like way.
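written out as a single function, the shape of this composite reward might look something like the sketch below. the weights w are made up for illustration; the actual example tunes these terms inside the simulink model.

function r = walkingReward(vx, y, z, uPrev, Ts, Tf)
    % vx    - forward (x) velocity of the body
    % y     - lateral drift away from the x-axis
    % z     - deviation of the body height from the standing height
    % uPrev - the six torque commands from the previous time step
    % Ts,Tf - sample time and maximum episode duration

    w = [1 3 50 0.02 25];                % assumed weights, for illustration only
    r = w(1)*vx ...                      % reward forward velocity
      - w(2)*y^2 ...                     % penalize drifting away from the x-axis
      - w(3)*z^2 ...                     % penalize not staying near standing height
      - w(4)*sum(uPrev.^2) ...           % penalize actuator effort and jittering
      + w(5)*Ts/Tf;                      % small reward for every step without falling
end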
with the reward function set, let’s move on to the policy. i’ve already stated that the policy is an actor neural network, and along with it is a critic neural network. each of these networks has several hidden layers with hundreds of neurons each, so there are a lot of calculations that go into them. if we don’t have enough neurons, the network will never be able to mimic the high-dimensional function that is required to map the 31 observations to the six actions for this nonlinear environment. on the other hand, with too many neurons we spend more time training the excess logic. in addition, the architecture of the network, things like the number of layers, how they’re connected, and the number of neurons in each layer, plays a big role in how complex a function it can represent. so there is some experience and knowledge needed to find the sweet spot that makes training possible and efficient.
luckily, as we know, we don’t need to manually solve for the hundreds of thousands of weights and biases in our networks; we let the training algorithm do that for us. in this example, we’re using an actor-critic training algorithm called the deep deterministic policy gradient, or ddpg. the reason is that this algorithm can learn in environments with continuous action spaces, like the continuous range of torques that we can apply to the motors. also, since it estimates a deterministic policy, it’s much faster to learn than a stochastic policy.
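as a rough sketch, creating and training a ddpg agent with reinforcement learning toolbox might look like the code below. it assumes the obsInfo and actInfo specs from earlier and an env built with rlSimulinkEnv around the robot model; the option values are placeholders, and the github example builds its actor and critic networks explicitly rather than relying on defaults.

agentOpts = rlDDPGAgentOptions( ...
    'SampleTime', 0.025, ...            % assumed controller sample time
    'DiscountFactor', 0.99, ...
    'MiniBatchSize', 128);

agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);   % default actor and critic networks

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 3500, ...            % roughly the episode count quoted above
    'MaxStepsPerEpisode', 400, ...      % assumed episode length
    'UseParallel', true);               % spread episodes across cores or a cluster

trainingStats = train(agent, env, trainOpts);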
i know this all sounds fairly complicated and abstract, but the cool thing to me about this is that most of the complexity is there for training the policy. once we have a fully trained agent, then all we have to do is deploy the actor network to the target hardware. remember, the actor is the function that maps observations to actions; it’s the thing that is deciding what to do, it’s the policy. the critic and the learning algorithm are just there to help determine the parameters in the actor.
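in matlab, that deployment step can be as simple as extracting a policy function from the trained agent, something like the sketch below; getLatestObservations is a hypothetical stand-in for whatever reads the sensors on the target.

generatePolicyFunction(agent);        % writes evaluatePolicy.m and agentData.mat

% then, at each control step on the target hardware:
obs = getLatestObservations();        % hypothetical sensor-read helper
torques = evaluatePolicy(obs);        % the actor maps 31 observations to 6 torques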
okay, at this point here’s a question that you might have. sure, we can use rl to get a robot to walk in a straight line; however, won’t this policy do only this one thing? for instance, if i deploy this policy and turn on my robot, it’s just instantly going to start walking straight, forever. so how can i learn a policy that will let me send commands to the robot to walk where i want it to walk?
well, let’s think about that. right now, this is what our system looks like. we have the environment that generates the observations and the reward, and then we have the agent that generates the actions. there’s no way for us to inject any outside commands into this system, and there’s no way for our agent to respond to them even if we had them. so we need to write some additional logic, outside of the agent, that receives a reference signal and calculates an error term. the error is the difference between the reference value and the current x position, which we can get from the environment. this is the same error calculation we would have in a normal feedback control system.
now, rather than reward the agent for a higher velocity in the x direction, we can reward it for low error. this should incentivize the robot to walk toward and stay at the commanded x reference value.
for the observations, we need to give the agent a way to view the error term so that it can develop a policy accordingly. since it might help our agent to have access to the rate of change of the error, and maybe other higher derivatives, i’ll feed in the error from the last five sample times. this will allow the policy to create derivatives if it needs to. eventually the policy will be to walk forward at some specified rate if the error is positive, and backwards if it’s negative.
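a minimal sketch of that extra observation channel is below. the variable names (xRef, xBody, errBuf, sensedObs, prevActions) are assumptions rather than signals from the model, and errBuf would start out as zeros(5,1).

err    = xRef - xBody;                     % tracking error at the current step
errBuf = [err; errBuf(1:end-1)];           % keep the five most recent errors
obs    = [sensedObs; prevActions; errBuf]; % 25 + 6 + 5 = 36 observations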
since we now have 36 observations into our agent, we need to adjust our actor network to handle the additional inputs. again, check out sebastian’s video in the description if you need guidance on how to make these changes.
i’ve updated the default model in simulink with the new error term and fed it into the observation block and the reward block. and i’ve trained this agent over thousands of episodes using this particular profile, so it should be really good at following this. but the hope is that the trained policy will be robust enough to follow other profiles as well that have similar rates and accelerations. so let’s give it a shot. i’ll have it walk forward, pause for a bit, and then walk backwards.
it’s kind of funny looking when it walks backwards, but overall a pretty good effort. with some reward tweaking and maybe a little more time spent training, i might be on to something pretty good here.
so, in this way, you can start to see how we can use an rl agent to replace part of the control system. instead of a function that learns a single behavior, we can extract the high-level reference signal and have the agent work off the error so that we retain the ability to send commands.
we can also remove low-level functionality from the agent. for example, instead of the actions being the low-level torques for each of the six joints, the agent could just learn where to place its feet on the ground. so the action would be to place the left foot at some location in the body coordinate frame. this action could be the reference command for a lower-level traditional control system that drives the joint motors; you know, something that might feed forward a torque command based on your knowledge of the system dynamics and feed back some signal to guarantee performance and stability. a sketch of what that lower-level loop could look like follows below.
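here is that sketch, under the assumption that an inverse kinematics routine (footIK, hypothetical) turns the agent’s foot-placement command into joint angle targets, and with made-up gains and a hypothetical gravityTorque feedforward helper.

qRef  = footIK(footTarget);              % hypothetical inverse kinematics for the commanded foot position
tauFF = gravityTorque(q);                % hypothetical model-based feedforward term
Kp = 200; Kd = 5;                        % assumed joint gains
tau   = tauFF + Kp*(qRef - q) - Kd*qdot; % feedforward plus pd feedback torque per joint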
this is beneficial because we can use our specific domain knowledge to solve the easy problems, and that will give us insight and control over the design, and then we can reserve reinforcement learning for the problems that are difficult.
something to note about our final walking solution so far is that it’s really only robust to its own state. you know, it can walk around without falling over, which is good, but only in a perfectly flat, featureless plain. it’s not taking into account any part of the world outside the robot so it’s actually quite a fragile design. for example, let’s see what happens if we place a single obstacle in the way of our robot.
well, that went as expected. the problem here is that we didn’t give our agent any way of recognizing the state of the environment beyond the motions of the robot itself. there’s nothing that can sense the obstacles and therefore nothing can be done to avoid them.
but here’s the thing. the beauty of neural network-based agents is that they can handle what we’ll call rich sensors. these are things like lidar and visible cameras, things that don’t produce singular measurements like an angle, but instead return arrays of numbers that represent thousands of distances or pixels of varying light intensities. so we could install a visible camera and a lidar sensor on our robot and feed the thousands of new values as additional observations into our agent. you can imagine how the complexity of this function needs to grow as our observations increase from 36 to thousands.
we may find that a simple, fully connected network is not ideal, so we may add layers that incorporate specialized structure: convolutional layers that reduce the number of connections, or recurrent layers that add memory. these are network layers that are better suited to dealing with large image data and more dynamic environments. however, we might not need to change the reward function in order to get the robot to avoid these obstacles. the agent could still learn that straying from the path, and therefore getting a lower reward there, allows the robot to continue walking without falling down, thereby earning more reward overall.
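for example, the image observations could enter the network through a few convolutional layers before being merged with the robot-state observations. the deep learning toolbox layers below are a sketch with illustrative sizes only.

imgLayers = [
    imageInputLayer([64 64 1], 'Normalization', 'none')   % e.g. a downsampled camera or lidar image
    convolution2dLayer(8, 16, 'Stride', 4)                 % shared weights keep the connection count down
    reluLayer
    convolution2dLayer(4, 32, 'Stride', 2)
    reluLayer
    fullyConnectedLayer(128) ];
% these features would then be concatenated with the 36 low-dimensional
% observations before the fully connected layers of the actor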
i’ve addressed a few of the problems with reinforcement learning in this video and shown how we can modify the problem by combining the benefits of traditional control design with reinforcement learning. we’re going to expand on this some more in the next video, where we’ll talk about other downsides of reinforcement learning and what we can do to mitigate them.
so if you don’t want to miss that and future tech talk videos, don’t forget to subscribe to this channel. also, if you want to check out my channel, control system lectures, i cover more control topics there as well. thanks for watching, and i’ll see you next time.