Path Planning with A* and RRT | Autonomous Navigation, Part 4
From the series: Autonomous Navigation
Brian Douglas
This video explores some of the ways that we can use a map like a binary occupancy grid for motion and path planning. We briefly cover what motion planning means and how we can use a graph to solve this planning problem. We then walk through two popular approaches for creating that graph: search-based algorithms like A* and sampling-based algorithms like RRT and RRT*.
In the last video, we covered simultaneous localization and mapping. And, as the name suggests, we ended up with a map of the environment in the form of a binary occupancy grid. For this video, we're going to explore some of the ways that we can use a map like this for motion planning, that is, finding a trajectory through this environment that connects a robot's starting state to some goal state. We're going to briefly cover what motion planning means and how we can use a graph to solve this planning problem, and then we'll cover two popular approaches for creating that graph - search-based algorithms like A* and sampling-based algorithms like RRT and RRT*. I think this video will set a nice base understanding of how we can use graphs to plan a trajectory through a known environment, and it's something you can build on as you go out and learn more about these planning methods. So, I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.
We want to find a path from a starting pose to a goal pose. If we're talking about a robot that moves along the ground, there may be three states that make up its pose: x and y location and its orientation.
A path is a sequence of pose states that smoothly connect the start and the goal. Determining this sequence is called path planning. Now, path planning is just a subset of the larger motion planning problem. With motion planning, we're not just concerned with the sequence of poses, but also their derivatives, like velocity, acceleration, rotation rate, and so on. So, with motion planning, we are trying to dictate precisely how the robot moves through the environment. With path planning, we are just concerned with the path that it takes and not how fast it accelerates or moves through it.
Of course, the size of the pose vector depends on the specifics of your system and environment. Instead of three states, it could be just two, x and y, if the robot is omnidirectional and orientation doesn't matter. Or, in the case of a robotic arm with multiple actuators, the pose could consist of dozens of states.
For this video, the examples I'm going to use focus on path planning for a robot that is omnidirectional, so just two pose states. This will simplify the explanation and, hopefully, you'll be able to see that these techniques can be extrapolated to higher-dimensional systems.
Alright, let's get started with a simple map that is similar to what we generated in the last video. It's just a rectangle.
Assume the starting pose is here, and the goal pose is over here. A minimum-distance solution can be found directly by connecting the start and goal with a straight line, as long as there are no obstacles in the way. If the robot moves along this path, it will reach the goal in the shortest distance possible.
Now, analytically solving for the shortest path like this is trivial for our simple environment. And this type of solution could even work for environments with some obstacles and constraints as well.
But for many problems, the obstacles and the dynamics of the system are too complex to generate an optimal solution analytically. So, we approach it by solving the problem numerically. And as I said at the beginning of this video, we're going to focus on graph-based methods to numerically find the path with the shortest length.
Before we get into any particular algorithm, let me show you the general idea behind graph-based solutions with this simple map. Graph-based algorithms work by discretizing the environment - that is, breaking it up into discrete points, or nodes - and then finding the shortest distance to the goal considering only these nodes.
Let's approach this problem in a random way. The starting location is the first node in our graph, and it has a cost of 0 since we're already there, so I'll put a zero inside of the node. Then we can move in a random direction and place a node in the graph at our new location. The edge between nodes is how far we traveled, and the cost of getting to this node is the length of that one edge. Now we take another step in a random direction, place another edge, and the cost of this node is a total of 3 units. We can continue to do this, on a sort of random walk, until we get to the goal. The cost of this particular random path is 10 units. We've found a path that works, even though it's definitely not the shortest path.
So, we can start again, taking a new random step, adding a node, connecting it with an edge, and recording its cost. And if we happen to reach a node that we visited before, we can compare the cost between the two different paths that we took to reach that node and keep the smallest one. Basically, we're revising our estimate of how many units it takes to reach that node.
We now have this graph of nodes, or locations on the map, and how much it costs to get to each node. We should recognize here that we don't actually need to build a fully interconnected graph, just a tree, which is a subset of a graph. A graph can have nodes connected in any way you want, but in a tree each node has only a single parent. If you can get to a node two different ways, it doesn't make sense to keep the longer path if you're looking for the shortest one. So, you can remove the edge for the longer path, keeping a tree structure.
In this way, a tree would start at the location of the robot, and the branches would venture out to other states, but never recombine.
Now, to find a shorter distance to the goal, we can just keep randomly wandering the environment, updating the tree until we find a branch that gets to the goal with a cost that is low enough. This doesn't guarantee an optimal path, but it will continue to approach the optimal one as the number of nodes goes to infinity.
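To make the tree bookkeeping concrete, here is a minimal sketch in MATLAB of how such a tree might be stored: each node keeps its position, a single parent index, and the cost to reach it, and a cheaper route to an existing node simply replaces that node's parent and cost. The positions, edge lengths, and costs below are made up purely for illustration.

```matlab
% Minimal tree: each node stores its position, one parent, and the cost to reach it.
tree(1) = struct('xy',[0 0], 'parent',0, 'cost',0);     % start node, cost 0
tree(2) = struct('xy',[1 1], 'parent',1, 'cost',1.4);   % reached from node 1
tree(3) = struct('xy',[2 1], 'parent',2, 'cost',2.4);   % reached from node 2

% Suppose a new edge reaches node 3 directly from node 1 with length 2.2.
newParent = 1;
newCost   = tree(newParent).cost + 2.2;

% Keep whichever path to node 3 is cheaper. This keeps the tree a tree:
% every node still has exactly one parent.
if newCost < tree(3).cost
    tree(3).parent = newParent;
    tree(3).cost   = newCost;
end

% Reading out a path is just walking the parent pointers back to the start.
node = 3; path = [];
while node ~= 0
    path = [tree(node).xy; path];   %#ok<AGROW>
    node = tree(node).parent;
end
disp(path)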
Of course, building up a tree through random wandering is not the best solution. So, this is where path planning algorithms come in: they provide more efficient ways to build this tree.
I want to start with the so-called search-based methods, which build up the tree by adding nodes in an ordered pattern. One way to accomplish this in practice is to start with a grid-based map, like the occupancy grid that we have, and go cell by cell and determine the cost, or the distance the robot would have to travel in order to reach that cell. Here, I'm claiming adjacent moves cost 10 units and diagonal moves cost 14. This is similar to the random approach we just did, except we methodically work our way through each cell, calculating the cost to reach it and, if it's the shortest path to that node, updating the cost and the tree. Once we've covered every single cell in the grid, the optimal path is simply the sequence of cells that produces the minimum cost at the goal.
This will produce an optimal solution, at least optimal at the resolution of the grid, but you can see that this would be computationally expensive since it's kind of a brute-force method of checking every possible node. To improve this, researchers came up with the A* algorithm in 1968 to give Shakey the robot the ability to determine where to go on its own - a first for general-purpose mobile robots. This search-based method still adds nodes in an ordered way, but it does so by prioritizing the nodes that are more likely to produce the optimal path, and searching there first. It does this by keeping track of another quantity in addition to the cost of the node: a heuristic, like the straight-line distance from that node to the goal. The sum of these two numbers is the absolute minimum cost of a path through that node. If there were a straight-line shot to the goal, then you could imagine how the total path length for, say, this node would be 48 - we've already gone 10 and we have a minimum of 38 left to go. Therefore, this other node should be prioritized: even though its current cost is 14, there is the potential of only having 28 more to go for a total of 42, so it makes more sense to keep trying this path.
So, in this way, A* allows us to search through the nodes in a way that will get us to the goal without necessarily having to add every node into our tree. In fact, once we get to the goal, we know that we took the optimal path, since every other path would have a cost plus distance-to-go that is greater than the path we found.
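As a rough sketch of the idea, and not the exact implementation from the video, here is a small grid-based A* in MATLAB using the 10/14 move costs from above and the straight-line distance to the goal as the heuristic. The map, start, and goal are made-up examples, and the open list is kept as a simple array rather than a proper priority queue to keep the code short.

```matlab
% A small grid-based A* sketch. Map cells with value 1 are obstacles.
% Move costs: 10 for adjacent cells, 14 for diagonal cells.
map   = zeros(10,10);  map(3:8,5) = 1;           % a wall to route around
start = [2 2];  goal = [9 9];                    % [row col]

g = inf(size(map));  g(start(1),start(2)) = 0;   % cost-to-reach for each cell
parent = zeros(numel(map),1);                    % linear index of each cell's parent
open   = start;                                  % frontier of cells to expand
closed = false(size(map));

h = @(c) 10*norm(c - goal);                      % straight-line distance heuristic
moves    = [-1 0; 1 0; 0 -1; 0 1; -1 -1; -1 1; 1 -1; 1 1];
stepCost = [10 10 10 10 14 14 14 14];

while ~isempty(open)
    % Expand the open cell with the lowest f = g + h, i.e. the most promising one.
    f = arrayfun(@(k) g(open(k,1),open(k,2)) + h(open(k,:)), 1:size(open,1));
    [~,i] = min(f);  cur = open(i,:);  open(i,:) = [];
    if closed(cur(1),cur(2)), continue, end      % skip stale duplicates
    if isequal(cur, goal), break, end
    closed(cur(1),cur(2)) = true;

    for m = 1:size(moves,1)
        nb = cur + moves(m,:);
        if any(nb < 1) || any(nb > size(map)) || map(nb(1),nb(2)) || closed(nb(1),nb(2))
            continue                             % off the map, blocked, or already expanded
        end
        newG = g(cur(1),cur(2)) + stepCost(m);
        if newG < g(nb(1),nb(2))                 % found a cheaper way to reach this cell
            g(nb(1),nb(2)) = newG;
            parent(sub2ind(size(map),nb(1),nb(2))) = sub2ind(size(map),cur(1),cur(2));
            open(end+1,:) = nb;                  %#ok<AGROW>
        end
    end
end

% Walk the parent pointers back from the goal to recover the path.
path = goal;  idx = sub2ind(size(map), goal(1), goal(2));
while parent(idx) ~= 0
    [r,c] = ind2sub(size(map), parent(idx));
    path = [r c; path];  idx = parent(idx);      %#ok<AGROW>
end
disp(path)
```

In a real implementation, the open list would be a priority queue (a min-heap), so the most promising cell can be pulled out without scanning the whole frontier each time.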
Now, this was a fast introduction to A*, and I was going to make an animation that shows how this still works well in the presence of obstacles, but honestly I couldn't do better than what Sebastian Lague already has on his YouTube channel, which I've linked below. His video and animation on A* are amazing, and I recommend you check them out if you'd like to know more about this method.
Alright, so search-based algorithms give us a way to build up a tree by adding nodes in some kind of ordered pattern. But one problem with these types of algorithms, even efficient ones like A*, is that they become very computationally expensive as the size and dimension of the state space increase. You can imagine how the number of grid points grows exponentially as the number of dimensions increases, which can slow everything down. So, they tend not to be used for high-dimensional state spaces, like determining the path for multi-jointed robot manipulators, or for really large low-dimensional state spaces, ones that might have millions of grid cells or more. This is where the so-called sampling-based algorithms are useful.
To understand how sampling-based algorithms work, I think it first helps to realize that in our map, and probably most maps, there are sections where a path could continue in a single direction for some distance before it needs to make a turn. With A* we have to calculate every single grid cell between these two points - so, multiple nodes in the tree. However, if we only checked the far, distant node, and there weren't any obstacles in the way, then we could calculate the straight-line cost for just that one node. This reduces the number of nodes, and therefore the number of total calculations.
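One piece this relies on, and which the sampling-based planners below also need, is a way to check whether a long, straight edge between two points crosses any occupied cells. A simple, approximate way to do that is to step along the segment at roughly the grid resolution and look up each sample in the occupancy grid. The function below is my own placeholder, not a toolbox function, and it assumes the map is a plain matrix where 1 means occupied.

```matlab
function free = edgeIsFree(p1, p2, map, cellSize)
% EDGEISFREE  True if the straight segment from p1 to p2 (each [x y], in meters)
% passes only through unoccupied cells of the binary grid MAP, whose cells are
% CELLSIZE meters across. This samples the segment rather than ray tracing it,
% so the sample spacing is kept at or below one cell.
    n    = max(2, ceil(norm(p2 - p1) / cellSize) + 1);  % samples along the edge
    xs   = linspace(p1(1), p2(1), n);
    ys   = linspace(p1(2), p2(2), n);
    rows = min(max(floor(ys / cellSize) + 1, 1), size(map,1));
    cols = min(max(floor(xs / cellSize) + 1, 1), size(map,2));
    free = ~any(map(sub2ind(size(map), rows, cols)));   % any occupied sample blocks it
end
```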
So the question becomes: how do we pick the location of these sparse nodes so that we still reach the goal? And one answer is to randomly select them, or sample them, hence the name. Now, I know I said that randomly selecting nodes is not the best approach to build out a tree, but we're going to choose a random node location a little bit differently. Rather than extending the path through some kind of random walk, which could allow the path to circle back on itself and take a long time to explore in the direction of the goal, we're going to be smarter about randomization and focus on rapidly exploring random trees (RRT) and a version that can approach an optimal solution, called RRT*.
Let's go into how the basic RRT algorithm works. From the starting node, we need to place a new node in our tree. RRT does this by randomly selecting a node anywhere in the state space. Once we have this random node, we want to connect it to the nearest node to it in our tree, which, since we're just starting, is the first node. But we don't want to place it too far away, because the chance of the path crossing an obstacle or just traveling too far in the wrong direction is greater with a longer edge. So, we specify a maximum distance that the new node can be from the nearest node.
Now, a quick aside. If the random node is closer than the max distance, then we place the new node right there. Also, if there is an obstacle between the nearest node and where we want to place the new node, then that sample is ignored completely, nothing is added to the tree, and we move on. This keeps the tree from attempting to cross through any walls or other obstacles.
OK, so let's move on. We randomly select a new node and find which existing node it's nearest to. Again, this happens to be the starting node. So we already have two paths in our tree, and both are venturing out into different parts of the state space. A new random node, and again we connect it to the nearest, which grows one path even further into the open space. This is why this algorithm is called rapidly exploring random trees. When there are large unexplored areas, like we have, there is a good chance that a random node is selected in that area. This has the effect of adding new nodes in places that cause the paths to venture into unexplored areas quickly at the beginning, hence rapidly exploring. And a random node that is in a direction opposite of the goal doesn't affect the path that is on its way to the goal, since a different node would be nearest. In this way, one path is always rapidly making its way toward the goal and the others are rapidly making their way into other open areas of the state space. And then later on, as the unexplored area starts to fill up, the random node selection tends to just fill the tree out with more branches.
We can continue this process until the path gets to within some threshold of the goal. At that point, we have a viable path, albeit probably not an optimal one, since this method tends to zigzag as it makes its way. But we did find a solution, and importantly, a solution that likely uses far fewer nodes than would have been necessary for A*, since the nodes can be spaced further apart. Again, we set that with the maximum connection distance.
RRT can work really well for situations where you're just looking for a valid path and not necessarily the shortest distance. As long as the start and goal nodes are reachable, this method is guaranteed to find a path as the number of samples approaches infinity - and in most cases, with a much, much smaller finite number of samples.
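Here is a minimal sketch of a single RRT extension step in MATLAB: sample a random state, find the nearest node in the tree, steer toward the sample by at most the maximum connection distance, and only add the new node if the edge is collision free. The map, bounds, and parameter values are made up, and edgeIsFree is the placeholder segment check sketched earlier; a full planner would simply repeat this step in a loop until a node lands within some tolerance of the goal.

```matlab
% One RRT extension step (a minimal sketch; map, bounds, and parameters are
% made up, and edgeIsFree is the segment check sketched earlier).
maxDist  = 0.5;                          % maximum connection distance [m]
bounds   = [0 10; 0 10];                 % x and y limits of the state space
cellSize = 0.1;
map      = zeros(100, 100);              % empty binary grid for illustration

tree(1) = struct('xy',[1 1], 'parent',0, 'cost',0);   % start node

% 1) Sample a random state anywhere in the state space.
xyRand = [bounds(1,1) + diff(bounds(1,:))*rand, ...
          bounds(2,1) + diff(bounds(2,:))*rand];

% 2) Find the nearest node already in the tree.
d = arrayfun(@(n) norm(n.xy - xyRand), tree);
[dNear, iNear] = min(d);

% 3) Steer: place the new node at the sample, or at maxDist toward it.
if dNear > maxDist
    xyNew = tree(iNear).xy + (xyRand - tree(iNear).xy) * (maxDist / dNear);
else
    xyNew = xyRand;
end

% 4) Only add the node if the new edge stays in free space.
if edgeIsFree(tree(iNear).xy, xyNew, map, cellSize)
    tree(end+1) = struct('xy', xyNew, 'parent', iNear, ...
                         'cost', tree(iNear).cost + norm(xyNew - tree(iNear).xy));
end
```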
Now, if we do want a solution that is closer to optimal, we have to add some additional steps to the RRT algorithm, and we get RRT*. For RRT*, the node selection process is exactly the same: we choose a random node, find the nearest neighbor, and if there is no obstacle in the way, we place a new node at the random node or at the max connection distance, whichever is closer. The difference comes in where we connect this node to the existing tree. We don't necessarily connect it to the nearest node. Instead, we check for other nodes within some specified search radius and determine if we can reconnect these local nodes in a way that maintains the tree structure but also minimizes the total path length. So, here, I'm connecting it to this other node since that will produce a shorter path back to the start.
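And here is a sketch of the two extra steps RRT* adds once the sample/nearest/steer procedure above has produced a candidate node xyNew: first choose the cheapest collision-free parent among all tree nodes within the search radius, then rewire those same neighbors through the new node whenever that shortens their path. It reuses the tree struct, iNear, map, cellSize, and edgeIsFree from the previous sketches, and the search radius is a made-up value.

```matlab
% The two extra RRT* steps, run after sample/nearest/steer has produced xyNew.
searchRadius = 1.0;                               % neighborhood to consider [m]

% All existing nodes within the search radius of the candidate node.
d    = arrayfun(@(n) norm(n.xy - xyNew), tree);
near = find(d <= searchRadius);

% 1) Choose the parent that gives the cheapest collision-free path to xyNew,
%    instead of automatically using the nearest node.
bestParent = iNear;
bestCost   = tree(iNear).cost + norm(xyNew - tree(iNear).xy);
for i = near
    c = tree(i).cost + d(i);
    if c < bestCost && edgeIsFree(tree(i).xy, xyNew, map, cellSize)
        bestParent = i;  bestCost = c;
    end
end
tree(end+1) = struct('xy', xyNew, 'parent', bestParent, 'cost', bestCost);
iNew = numel(tree);

% 2) Rewire: if going through the new node shortens a neighbor's path, change
%    that neighbor's parent (each node keeps exactly one parent, so it stays a
%    tree). A full implementation would also propagate the cost change down to
%    that neighbor's descendants.
for i = near
    c = bestCost + d(i);
    if c < tree(i).cost && edgeIsFree(xyNew, tree(i).xy, map, cellSize)
        tree(i).parent = iNew;
        tree(i).cost   = c;
    end
end
```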
Let's watch RRT* work with a slightly more complex environment. Here, I'm using MATLAB to do this visualization and the RRT* function from the Navigation Toolbox. At the beginning, you can see that it's rapidly exploring the environment, and doing so with just a couple dozen nodes. So far, it's no different than RRT, but let me pause it. This next node is going to appear here. If this were RRT, it would connect that node to this one, since it's nearest. But watch what happens. That node appears and it's connected all the way down here, since that is a shorter path back to the start. And it reconnected some of the other nodes within the search radius. So RRT* is always trying to shorten the paths. Let's continue.
Here it reaches the goal, and just like with RRT, the path is a bit zigzaggy at first. But as we continue adding more nodes, and the existing paths are reconnected, you can see how they get refined over time, becoming shorter and straighter. We can stop adding nodes whenever we're happy with the result. Not only do we have a near-optimal path to the goal, but our tree has generated near-optimal paths to anywhere in the environment - at least, as long as the environment doesn't change. This is the power of RRT*.
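If you want to try this yourself, the Navigation Toolbox wraps all of this up in a planner object, so you don't have to maintain the tree by hand. Something along these lines should work, though treat it as a rough usage sketch: the map, start, goal, and parameter values are placeholders, and it's worth checking the current documentation for exact property names and defaults.

```matlab
% Rough usage sketch of the Navigation Toolbox RRT* planner (plannerRRTStar).
map = binaryOccupancyMap(10, 10, 10);            % 10 m x 10 m map, 10 cells per meter
setOccupancy(map, [5 2; 5 3; 5 4; 5 5; 5 6], 1); % mark a few cells as a wall

ss = stateSpaceSE2;                              % states are [x y theta]
ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];

sv = validatorOccupancyMap(ss);                  % collision checking against the map
sv.Map = map;
sv.ValidationDistance = 0.1;                     % spacing of the edge checks

planner = plannerRRTStar(ss, sv);
planner.MaxConnectionDistance = 0.5;             % same role as the max distance above
planner.MaxIterations = 3000;
planner.ContinueAfterGoalReached = true;         % keep adding nodes to refine the path

start = [1 1 0];  goal = [9 9 0];
rng(1)                                           % make the random tree repeatable
[pathObj, solnInfo] = plan(planner, start, goal);

% Plot the planned states.
plot(pathObj.States(:,1), pathObj.States(:,2), 'LineWidth', 2)
```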
Now again, this was a super quick introduction to sampling-based algorithms, and I've left better sources of information in the description of this video. That's where I'm going to leave it for now, with just the briefest of introductions. The idea here was not to tell you everything you need to know about path planning, as there are dozens of variations of just RRT alone, but hopefully you have a sense of how a tree can help us plan a path through an environment, and some of the search-based and sampling-based ways we can approach building that tree.
In this video, we dealt with planning a path through a static environment. However, often other objects and obstacles are moving through the environment as well, and planning algorithms need to react to those changes. Part of solving this problem is tracking extended objects - objects that are large enough that a sensor returns multiple detections of them. And that is what we're going to cover in the next video.
If you don't want to miss that or any other future Tech Talk videos, don't forget to subscribe to this channel. And if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I'll see you next time.