what is lqr optimal control? | state space, part 4 video
from the series: state space
brian douglas
lqr is a type of optimal control based on state-space representation. in this video, we introduce this topic at a very high level so that you walk away with an understanding of the control problem and can build on this understanding when you are studying the math behind it. this video will cover what it means to be optimal and how to think about the lqr problem. at the end, i’ll show you some examples in matlab® that will help you gain a little intuition about lqr.
let’s talk about the linear quadratic regulator, or lqr control. lqr is a type of optimal control that is based on state-space representation. in this video, i want to introduce this topic at a very high level so that you walk away with a general understanding of the control problem and can build on this understanding when you are studying the math behind it. i’ll cover what it means to be optimal, how to think about the lqr problem, and then i’ll show you some examples in matlab that i think will help you gain a little intuition about lqr. i’m brian, and welcome to a matlab tech talk.
to begin, let’s compare the structure of the pole placement controller that we covered in the second video and an lqr controller. that way you have some kind of an idea of how they’re different. with pole placement, we found that if we feed back every state in the state vector and multiply them by a gain matrix, k, we have the ability to place the closed-loop poles anywhere we choose, assuming the system is controllable and observable. then we scaled the reference term to ensure we have no steady state reference tracking error.
the lqr structure, on the other hand, feeds back the full state vector, then multiplies it by a gain matrix k, and subtracts it from the scaled reference. so, as you can see, the structure of these two control laws is completely diff—well, actually, no, they're exactly the same. they are both full-state feedback controllers, and we can implement the results from both lqr and pole placement with the same structure.
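written out as an equation, both controllers use the same control law (using kr here for the reference-scaling gain):

u = kr*r - k*x

where x is the state vector, r is the reference, k is the feedback gain matrix, and kr is the scaling on the reference that removes steady-state tracking error.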
a quick side note about this structure: we could have set it up to feed back the integral of the output or we could have applied the gain to the state error. all three of these implementations can produce zero steady state error and can be used with the results from pole placement or lqr. and if you want to learn more about these other two feedback structures, i left a good source in the description.
okay, we’re back. so why are we giving these two controllers different names if they are implemented in the exact same way? well, here’s the key. the implementation is the same, but how we choose k is different.
with pole placement, we solved for k by choosing where we want to put the closed-loop poles. we wanted to place them in a specific spot. this was awesome! but one problem with this method is figuring out where a good place is for those closed-loop poles. that might not be a terribly intuitive choice for high-order systems and systems with multiple actuators.
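as a quick sketch, the pole placement step in matlab looks something like this, assuming a linear model with matrices a and b and a hand-picked set of closed-loop pole locations:

% hypothetical closed-loop pole locations chosen by the designer
p = [-2, -3, -4];
% solve for the full-state feedback gain that places the poles of (a - b*k) at p
k = place(a, b, p);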
so with lqr, we don't pick pole locations. we find the optimal k matrix by choosing closed-loop characteristics that are important to us—specifically, how well the system performs and how much effort it takes to get that performance. that statement might not make a lot of sense, so let's walk through a quick thought exercise that i think will help.
i’m borrowing and modifying this example from christopher lum, who has his own video on lqr that is worth watching if you want a more in-depth explanation of the mathematics. i’ve linked to his video in the description. but here’s the general idea:
let’s say you’re trying to figure out the best way or the most optimal way to get from your home to work. and you have several transportation options to choose from. you could drive your car, you could ride your bike, take the bus, or charter a helicopter. and the question is, which is the most optimal choice? that question by itself can’t be answered because i haven’t told you what a good outcome means. all of those options can get us from home to work, but they do so differently and we need to figure out what’s important to us. if i said time is the most important thing, get to work as fast as possible, then the optimal solution would be to take the helicopter. on the other hand, if i said that you don’t have much money and getting to work as cheaply as possible was a good outcome, then riding your bike would be the optimal solution.
of course, in real life you don’t have infinite money to maximize performance and you don’t have unlimited time to minimize spending, but rather you’re trying to find a balance between the two. so maybe you’d reason that you have an early meeting and therefore value the time it takes to get to work, but you’re not independently wealthy, so you care about how much money it takes. therefore, the optimal solution would be to take your car or to take the bus.
now if we wanted a fancy way to mathematically assess which mode of transportation is optimal, we could set up a function that adds together the travel time and the amount of money that each option takes. and then we can set the importance of time versus money with a multiplier. we'll weight each of these terms based on our own personal preferences. we'll call this the cost function, or the objective function, and you can see that it's heavily influenced by these weighting parameters. if q is high, then we are penalizing options that take more time, and if r is high, then we are penalizing options that cost a lot of money. once we set the weights, we calculate the total cost for each option and choose the one that has the lowest overall cost. this is the optimal solution.
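here's a rough sketch of that calculation, with made-up times and prices for each option:

% made-up travel time (minutes) and cost (dollars) for car, bike, bus, helicopter
time  = [25, 60, 45, 10];
money = [8, 0, 3, 500];

% q penalizes time, r penalizes money; set them based on personal preference
q = 1;
r = 1;

% total cost for each option; the optimal choice is the one with the lowest cost
j = q*time + r*money;
[~, best] = min(j);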
what’s interesting about this is that there are different optimal solutions based on the relative weights you attach to performance and spending. there is no universal optimal solution, just the best one given the desires of the user. a ceo might take a helicopter, whereas a college student might ride a bicycle, but both are optimal given their preferences.
and this is exactly the same kind of reasoning we do when designing a control system. rather than think about pole locations, we can think about and assess what is important to us between how well the system performs and how much we want to spend to get that performance. of course, usually how much we want to spend is not measured in dollars but in actuator effort, or the amount of energy it takes.
and this is how lqr approaches finding the optimal gain matrix. we set up a cost function that adds up the weighted sum of performance and effort over all time, and then, by solving the lqr problem, we get back the gain matrix that produces the lowest cost given the dynamics of the system.
now the cost function that we use with lqr looks a little different than the function we developed for the travel example, but the concept is exactly the same; we penalize bad performance by adjusting q, and we penalize actuator effort by adjusting r.
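written out, the lqr cost function has the standard quadratic form

j = integral from 0 to infinity of ( x'*q*x + u'*r*u ) dt

where x is the state vector, u is the input vector, and q and r are the weighting matrices we're about to describe.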
so let’s look at what performance means for this cost function. performance is judged on the state vector. for now, let’s assume that we want every state to be zero, to be driven back to its starting equilibrium point. so if the system is initialized in some nonzero state, the faster it returns to zero, the better the performance is and the lower the cost. and the way that we can get a measure of how quickly it’s returning to the desired state is by looking at the area under the curve. this is what the integral is doing. a curve with less area means that it spends more time closer to the goal than a curve with more area.
however, states can be negative or positive and we don't want negative values subtracting from the overall cost, so we square the value to ensure that it's positive. this has the effect of punishing larger errors disproportionately more than smaller ones, but it's a good compromise because it turns our cost function into a quadratic function. quadratic functions, like z = x^2 + y^2, are convex and therefore have a definite minimum value. and a quadratic cost that is subject to linear dynamics remains quadratic, so our cost function will also have a definite minimum value.
lastly, we want to have the ability to weight the relative importance of each state, and therefore q isn't a single number but a square matrix with the same number of rows as there are states. the q matrix needs to be at least positive semidefinite so that when we multiply it with the state vector, the resulting cost is never negative, and often it's just a diagonal matrix with positive values along the diagonal. with this matrix, we can target the states where we want really low error by making the corresponding values in the q matrix really large, and for the states that we don't care about as much, we make those values really small.
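for example, with a hypothetical two-state system where the first state matters much more than the second, a diagonal q like this targets low error on the first state:

% hypothetical weights: penalize error in the first state 100x more than the second
q = diag([100, 1]);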
the other half of the cost function adds up the cost of actuation. in a very similar fashion, we look at the input vector and we square the terms to ensure they’re positive, and then weight them with an r matrix that has positive multipliers along its diagonal.
we can write this in larger matrix form as follows, and while you don’t see the cost function written like this often, it helps us visualize something. q and r are part of this larger weighting matrix, but the off diagonal terms of this matrix are zero. we can fill in those corners with n, such that the overall matrix is still positive definite but now the n matrix penalizes cross products of the input and the state. while there are uses for setting up your cost function with an n matrix, for us we’re going to keep things simple and just set it to zero and focus only on q and r.
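in that larger matrix form, the quantity inside the integral becomes

[x; u]' * [q, n; n', r] * [x; u]

and setting n to zero recovers the x'*q*x + u'*r*u cost from before.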
so by setting the values of q and r, we now have a way to specify exactly what’s important to us. if one of the actuators is really expensive and we’re trying to save energy, then we penalize it by increasing the r matrix value that corresponds with it. this might be the case if you’re using thrusters for satellite control because they use up fuel, which is a finite resource. in that case, you may accept a slower reaction or more state error so that you can save fuel.
on the other hand, if performance is really crucial, then we can penalize state error by increasing the q matrix value that corresponds with the states we care about. this might be the case when using reaction wheels for satellite control because they use energy that can be stored in batteries and replenished with the solar panels. so using more energy for low-error control is probably a good tradeoff.
so now the big question: how do we solve this optimization problem? and the big disappointing answer is that deriving the solution is beyond the scope of this video. but i left a good link in the description if you want to read up on it.
the good news, however, is that as a control system designer, often the way you approach lqr design is not by solving the optimization problem by hand, but by developing a linear model of your system dynamics, specifying what's important by adjusting the q and r weighting matrices, running the lqr command in matlab to solve the optimization problem and return the optimal gain set, and then simulating the system and adjusting q and r again if necessary. so as long as you understand how q and r affect the closed-loop behavior, how they punish state errors and actuator effort, and you understand that this is a quadratic optimization problem, then it's relatively simple to use the lqr command in matlab to find the optimal gain set.
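here's a minimal sketch of that workflow, assuming you already have a linear model with matrices a and b:

% start with simple weights and iterate
q = eye(size(a,1));    % penalize state error
r = eye(size(b,2));    % penalize actuator effort

% solve the lqr problem for the optimal full-state feedback gain
k = lqr(a, b, q, r);

% closed-loop dynamics under u = -k*x
sys_cl = ss(a - b*k, b, eye(size(a,1)), 0);

% simulate from a nonzero initial condition and check performance and effort
x0 = ones(size(a,1), 1);
initial(sys_cl, x0);
% if the response or the actuator effort isn't acceptable, adjust q and r and repeat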
with lqr, we’ve moved the design question away from where do we place poles, to the question, how do we set q and r. unfortunately, there isn’t a one-size-fits-all method for choosing these weights; however, i’d argue that setting q and r is more intuitive than picking pole locations. for example, you can just start with the identity matrix for both q and r and then tweak them through trial and error and intuition about your system. so, to help you develop some of that intuition, let’s walk through a few examples in matlab.
all right, this needs a little explanation. let's start with the code. i have a very simple model of a rotating mass in a frictionless environment, and the system has two states, angle and angular rate. i'm designing a full-state feedback controller using lqr, and it really couldn't be simpler. i'll start with the identity matrix for q, where the first diagonal entry is tied to the angle error and the second is tied to angular rate. there is only a single actuation input for this system, which comes from four rotation thrusters that all act together to create a single torque command. therefore, r is just a single value.
now i solve for the optimal feedback gain using the lqr command and build a state-space object that represents the closed-loop dynamics. with the controller designed, i can simulate the response to an initial condition, which i’m setting to 3 radians. that’s pretty much the whole thing. everything else in this script just makes this fancy plot so it’s easier to comprehend the results.
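for reference, here's a minimal sketch of a script like the one described, assuming the rotating mass is modeled as a double integrator (unit inertia) with torque as the input:

% double-integrator model of a rotating mass: states are [angle; angular rate]
a = [0 1; 0 0];
b = [0; 1];          % single torque command from the thrusters acting together

% identity weights to start: q(1,1) -> angle error, q(2,2) -> angular rate
q = eye(2);
r = 1;               % single input, so r is a scalar

% optimal full-state feedback gain and closed-loop dynamics
k = lqr(a, b, q, r);
sys_cl = ss(a - b*k, b, eye(2), 0);

% response to an initial angle of 3 radians
x0 = [3; 0];
initial(sys_cl, x0);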
okay, let’s run this script. you can see the ufo gets initialized to 3 radians as promised. up at the top i’m keeping track of how long the maneuver takes which is representative of the performance, and how much fuel is used to complete the maneuver. so let’s kick it off and see how well the controller does.
look at that, it completed the maneuver in 5.8 seconds with 15 units of fuel and got the cow in the process, which is the important part. when the thrusters are active, they generate a torque that accelerates the ufo over time. therefore, fuel usage is proportional to the integral of acceleration. so the longer we accelerate, the more fuel is used.
now let’s see if we can use less fuel for this maneuver by penalizing the thruster more. i’ll bump r up to 2 and rerun the simulation.
well, we used 2 fewer units of fuel, but at the expense of over 3 additional seconds. the problem is that with this combination, it overshot the target just a bit and had to waste time coming back. so let’s try to slow down the max rotation speed with the hope that it won’t overshoot. we’re going to do that by penalizing the angular rate portion of the q matrix. now, any non-zero rate costs double what it did before. let’s give it a shot.
well, we saved about a second since it didn’t overshoot, and in the process managed to knock off another unit of fuel. all right, enough of this small stuff. let’s really save fuel now by relaxing the angle error weight a bunch.
okay, this is going really slowly now. let me speed up the video just to get through it. in the end, we used 5 units of fuel, less than half of what was used before. and we can go the other way as well and tune a really aggressive controller.
yes, that's much faster. less than 2 seconds, and our acceleration is off the charts. that's how you rotate to pick up a cow. unfortunately, it's at the expense of almost 100 units of fuel, so there are downsides to everything. all right, so hopefully, you're starting to see how we can tweak and tune our controller by adjusting these two matrices. and it's pretty simple.
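in code, each of these runs is just a change to q and r before re-solving for k; the first two values below come straight from the runs above, while the last two lines use hypothetical numbers since the exact weights aren't given:

r = 2;                           % penalize thruster use more: less fuel, slower maneuver
q(2,2) = 2;                      % penalize angular rate to reduce overshoot
q(1,1) = 0.1;                    % hypothetical: relax the angle-error weight to save even more fuel
% q = diag([100, 1]); r = 0.01;  % hypothetical aggressive tuning: fast but fuel-hungry
k = lqr(a, b, q, r);             % re-solve and re-simulate after each change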
now, i know this video is dragging on, but with a different script i want to show you one more thing real quickly, and that’s how lqr is more powerful than pole placement. here, i have a different state-space model, one that has three states and a single actuator. i’ve defined my q and r matrices and solved for the optimal gain. and like before, i’ll generate the closed-loop state-space model and then run the response to an initial condition of 1, 0, 0. i then plot the response of the first state, that step from 1 back to 0; the actuator effort; and the location of the closed-loop poles and zeroes.
so let’s run this and see what happens. well, the first state tracks back to 0 nicely, but at the expense of a lot of actuation. i didn’t model anything in particular but let’s say the actuator effort is thrust required. so this controller is requesting 10 units of thrust. however, let’s say our thruster is only capable of 2 units of thrust. this controller design would saturate the thruster and we wouldn’t get the response we’re looking for. now, had we developed this controller using pole placement, the question at this point would be which of these three poles should we move in order to reduce the actuator effort? and that’s not too intuitive, right?
but with lqr, we can easily go to the r matrix and penalize actuator usage by raising a single value. and i’ll rerun the script. we see that the response is slower, as expected, but the actuator is no longer saturated. and, check this out, all three closed-loop poles moved with this single adjustment of r. so if we were using pole placement, we would have had to know to move these poles just like this in order to reduce actuator effort. that would be pretty tough.
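here's a rough sketch of that comparison, using a hypothetical three-state, single-actuator model in place of the one in the script:

% hypothetical three-state, single-input model
a = [0 1 0; 0 0 1; -1 -2 -1];
b = [0; 0; 1];
q = eye(3);

% original design
k1 = lqr(a, b, q, 1);
p1 = eig(a - b*k1);       % closed-loop poles

% penalize actuator usage more by raising the single value in r
k2 = lqr(a, b, q, 10);
p2 = eig(a - b*k2);       % all three poles move from this one change to r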
so that’s where i want to leave this video. lqr control is pretty powerful and hopefully you saw that it’s simple to set up and relatively intuitive to tune and tweak. and the best part is that it returns an optimal gain matrix based on how you weight performance and effort. so it’s up to you how you want your system to behave in the end.
if you don’t want to miss the next tech talk video, don’t forget to subscribe to this channel. also, if you want to check out my channel, control system lectures, i cover more control theory topics there as well. thanks for watching. i’ll see you next time.