New Algorithm Boosts Robustness of Robot Perception
When Vasileios Tzoumas, a research scientist at the Massachusetts Institute of Technology (MIT), visits a new city, he likes to explore by going for a run. And sometimes he gets lost. A few years ago, on a long run while in Osaka for a conference, the inevitable happened. But then Tzoumas spotted a 7-Eleven he remembered passing soon after leaving his hotel. This recognition allowed him to mentally “close the loop,” to connect the loose end of his trajectory to someplace he knew and was sure about, thus solidifying his mental map and allowing him to make his way back to the hotel.
Closing the loop is actually a technical term for something robots frequently have to do when navigating new environments. It’s part of a process called simultaneous localization and mapping (SLAM). SLAM is not new. It is used for robotic vacuum cleaners, self-driving cars, search-and-rescue aerial drones, and robots in factories, warehouses, and mines. As autonomous devices and vehicles navigate new spaces, from a living room to the sky, they construct a map as they travel. They must also figure out where they are on the map using sensors such as cameras, GPS, and lidar.
As SLAM finds more applications, it is more important than ever to ensure that SLAM algorithms produce correct results in challenging real-world conditions. SLAM algorithms often work well with perfect sensors or in controlled lab conditions, but they get lost easily when implemented with imperfect sensors in the real world. Unsurprisingly, industrial customers frequently worry about whether they can trust those algorithms.
Researchers at MIT have developed several robust SLAM algorithms, as well as methods to mathematically prove how much we can trust them. The lab of Luca Carlone, the Leonardo Career Development Assistant Professor at MIT, recently published a paper about their graduated non-convexity (GNC) algorithm, which reduces the random errors and uncertainties in SLAM results. More importantly, the algorithm produces correct results where existing methods “get lost.” The paper, by Carlone, Tzoumas, and Carlone’s students Heng Yang and Pasquale Antonante, received the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA). This GNC algorithm will help machines traverse land, water, sky, and space—and come back to tell the tale.
Everything’s Aligned
Robot perception relies on sensors that often provide noisy or misleading inputs. MIT’s GNC algorithm allows the robot to decide which data points to trust and which to discard. One application of the GNC algorithm is called shape alignment. A robot estimates the 3D location and orientation of a car using 2D camera images. The robot receives a camera image with many points labeled by a feature-detection algorithm: headlights, wheels, mirrors. It also has a 3D model of a car in its memory. The goal is to scale, rotate, and place the 3D model so its features align with the features in the image. “This is easy if the feature-detection algorithm has done its job perfectly, but that’s rarely the case,” Carlone says. In real applications, the robot faces many outliers—mislabeled features—which can make up more than 90% of all observations. That’s where the GNC algorithm comes in and outperforms all competitors.
Robots solve this problem using a mathematical function that takes into account the distance between each pair of features—for instance, the right headlight in the image and the right headlight in the model. They try to “optimize” this function—to orient the model so as to minimize all of those distances. The more features, the more difficult the problem.
One way to solve the problem would be to try all the possible solutions to the function and see which one works best, but there are too many to try. A more common method, Yang and Antonante explain, “is to try one solution and keep nudging—making, say, the headlights in the model more aligned with the headlights in the 2D image—until you can’t improve it anymore.” Given noisy data, it won’t be perfect—maybe the headlights align but the wheels don’t—so you can start over with another solution and refine that one as much as possible, repeating the process several times to find the best outcome. Still, the chances of finding the best possible solution are slim.
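For readers who want a concrete picture, here is a minimal Python sketch of that baseline: a least-squares alignment cost plus local refinement from several random starting guesses. It uses a toy 2D similarity transform and hypothetical helper names (alignment_cost, nudge_until_stuck) rather than the paper's actual 3D formulation.

```python
import numpy as np
from scipy.optimize import minimize

def alignment_cost(params, model_pts, image_pts):
    """Sum of squared distances between transformed model features and their
    detected counterparts. params = (scale, angle, tx, ty) of a toy 2D
    similarity transform, a stand-in for the full 3D pose in the paper."""
    s, theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    transformed = s * model_pts @ R.T + np.array([tx, ty])
    return np.sum((transformed - image_pts) ** 2)

def nudge_until_stuck(model_pts, image_pts, restarts=10, seed=0):
    """Pick a guess, refine it locally until it stops improving, and repeat
    from several random starting points, keeping the best result found."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        guess = np.array([rng.uniform(0.5, 2.0),          # scale
                          rng.uniform(-np.pi, np.pi),     # rotation
                          *rng.uniform(-5, 5, size=2)])   # translation
        result = minimize(alignment_cost, guess,
                          args=(model_pts, image_pts), method="Nelder-Mead")
        if best is None or result.fun < best.fun:
            best = result
    return best.x, best.fun

# Tiny demo with perfectly labeled features (no outliers yet).
model = np.array([[-1.0, 0.4], [1.0, 0.4], [-1.2, -0.4], [1.2, -0.4]])
s, th, tx, ty = 1.5, 0.3, 2.0, -1.0
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
image = s * model @ R.T + np.array([tx, ty])
params, cost = nudge_until_stuck(model, image)
print(params, cost)   # ideally close to (1.5, 0.3, 2.0, -1.0) with near-zero cost
```

With perfectly labeled features this tends to work; a handful of badly mislabeled points, however, can drag the least-squares fit far from the truth, which is the failure mode GNC is built to handle.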
The idea behind GNC is to first simplify the problem. The researchers reduce the function they’re trying to optimize—the one describing the differences between the 3D model and the 2D image—to one with a single best solution. Now when they pick a solution and nudge it, they’ll eventually find that best solution. Then they reintroduce a bit of the original function’s complexity and refine the solution they’ve just found. They keep doing this until they have the original function and its optimal solution. The headlights are well aligned, and so are the wheels and bumpers.
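In outline, the method alternates between solving an easier weighted version of the problem and re-deciding how much to trust each measurement, while a control parameter slides the simplified function back toward the original one. The Python sketch below illustrates this loop on a deliberately tiny problem (robustly averaging 2D measurements) with a Geman-McClure-style robust cost; the parameter names, the initialization, and the update schedule are illustrative assumptions, and in the paper the inner solve is the full perception problem, such as shape alignment or pose graph optimization.

```python
import numpy as np

def gnc_geman_mcclure(measurements, c=0.1, mu_factor=1.4, iters=60):
    """Stripped-down GNC loop on a toy problem: robustly estimate a 2D point
    from measurements, some of which are gross outliers. Illustrative only;
    the parameter choices here are assumptions, not the paper verbatim."""
    x = measurements.mean(axis=0)                 # the non-robust starting guess
    r2 = np.sum((measurements - x) ** 2, axis=1)  # squared residuals
    mu = 2.0 * r2.max() / c**2                    # start with a (nearly) convex surrogate
    w = np.ones(len(measurements))                # initially trust everything
    for _ in range(iters):
        # 1) variable update: weighted least squares (here just a weighted mean)
        x = (w[:, None] * measurements).sum(axis=0) / w.sum()
        # 2) weight update for a Geman-McClure-style surrogate at the current mu
        r2 = np.sum((measurements - x) ** 2, axis=1)
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2
        # 3) un-relax: shrink mu toward 1, recovering the original robust cost
        if mu <= 1.0:
            break
        mu = max(1.0, mu / mu_factor)
    return x, w   # weights near 0 mark the measurements GNC chose to discard

# Demo: 90 inliers near (1, 2) plus 10 gross outliers.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal([1.0, 2.0], 0.01, size=(90, 2)),
                  rng.uniform(-10, 10, size=(10, 2))])
estimate, weights = gnc_geman_mcclure(data)
print(estimate)               # close to (1, 2); a plain mean would be dragged away
print((weights < 0.5).sum())  # roughly the 10 outliers
```

The final weights double as outlier labels: measurements whose weights collapse toward zero are the ones the algorithm has decided to discard.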
Going in Circles
The paper applies the GNC algorithm to shape alignment and to SLAM, among other problems. In the case of SLAM, the robot uses sensor data to figure out its past trajectory and build a map. For example, a robot roaming around a college campus collects odometry data suggesting how far and in which direction it’s gone between 8:00 a.m. and 8:15 a.m., between 8:15 a.m. and 8:30 a.m., and so on. It also has lidar and camera data at 8:00 a.m., 8:15 a.m., and so on. Occasionally, it will complete loops, seeing the same thing at two different times, as Tzoumas did when he ran past the 7-Eleven again.
Just as in shape alignment, there’s an optimization problem to be solved. Yang, who is the first author on the paper, explains: “For SLAM, instead of lining up features to match a 3D model, the system bends the trajectory it thinks it traversed in order to align objects on the map.” First, the system works to minimize the differences between the journeys perceived by different sensors, since every sensor is likely to have errors in its measurements. For example, if the robot’s odometer shows it traveled 100 meters between 8:00 a.m. and 8:15 a.m., the trajectory updated based on lidar and camera measurements should reflect that distance, or something close to it. The system also minimizes the distances between locations that appear to be the same place. If the robot saw the same 7-Eleven at 8:00 a.m. and 10:00 a.m., the algorithm will try to bend the recalled trajectory—adjusting each leg—so that its recalled positions at 8:00 a.m. and 10:00 a.m. align, closing the loop.
Meanwhile, the algorithm identifies and discards outliers—bad data points where it thought it was retracing its steps but wasn’t—just as it discards mislabeled features in shape alignment. You don’t want to falsely close a loop. Tzoumas recalls a time, running through the woods in Maine, when he ran past a collection of fallen tree trunks that looked familiar. He thought he’d closed the loop, and using this supposed landmark, he took a turn. Only after not seeing anything else familiar for 20 minutes did he suspect his mistake and turn back.
A recalled trajectory before optimization might look like a tangled ball of twine. After untangling, it resembles a set of right-angled lines mirroring the shape of the campus pathways and hallways that the robot traversed. The technical term for this SLAM process is pose graph optimization.
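The sketch below gives a stripped-down sense of that untangling, assuming a translation-only pose graph for readability (real pose graphs also carry orientations, which makes them nonlinear): noisy odometry traces out a drifting square, and a single loop-closure constraint, solved by least squares, bends every leg slightly so the start and end meet again.

```python
import numpy as np

# Translation-only stand-in for pose graph optimization. Poses p_0..p_n are
# 2D positions, odometry gives noisy displacements between consecutive poses,
# and one loop closure says the robot ended up back where it started.
n = 8
rng = np.random.default_rng(1)
true_steps = np.array([[10.0, 0], [10, 0], [0, 10], [0, 10],
                       [-10, 0], [-10, 0], [0, -10], [0, -10]])      # a square loop
odometry = true_steps + rng.normal(0.0, 0.5, size=true_steps.shape)  # drifting measurements

# Unknowns: p_1..p_n stacked into one vector (p_0 is pinned at the origin).
# Each odometry edge contributes the residual p_{i+1} - p_i - odometry_i,
# and the loop closure contributes the residual p_n - p_0.
A = np.zeros((2 * (n + 1), 2 * n))
b = np.zeros(2 * (n + 1))
for i in range(n):
    if i > 0:
        A[2 * i:2 * i + 2, 2 * (i - 1):2 * i] = -np.eye(2)   # -p_i
    A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = np.eye(2)          # +p_{i+1}
    b[2 * i:2 * i + 2] = odometry[i]
A[2 * n:2 * n + 2, 2 * (n - 1):2 * n] = np.eye(2)            # loop closure: p_n - p_0 = 0
solution, *_ = np.linalg.lstsq(A, b, rcond=None)
poses = np.vstack([[0.0, 0.0], solution.reshape(n, 2)])

print("dead-reckoned endpoint:", odometry.sum(axis=0))  # drifts away from (0, 0)
print("optimized endpoint:    ", poses[-1])             # pulled back toward (0, 0)
```

It is loop-closure terms like this one that GNC weighs and, when they turn out to be wrong, discards.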
In the paper, the researchers compared their GNC algorithm with other algorithms on several applications, including shape alignment and pose graph optimization. They found their method was more accurate than the state-of-the-art techniques and could handle a higher percentage of outliers. For SLAM, it worked even if three in four loop closures were mistaken, which is many more outliers than it would encounter in real-world applications. What’s more, their method is often more efficient than other algorithms, requiring fewer computational steps. Tzoumas says, “One of the difficulties was finding a general-purpose algorithm that works well across many applications.” Yang says they’ve tried it on more than 10. In the end, Tzoumas says, they found “the sweet spot.”
Going from research to production is an important step for research outcomes to make a difference at scale, says Roberto G. Valenti, a robotics research scientist at MathWorks. MathWorks has been working with Carlone’s lab to integrate the GNC algorithms into MATLAB as part of Navigation Toolbox™, which companies use to implement SLAM on commercial and industrial autonomous systems.
Out of the Woods
Carlone’s lab is working on ways to extend the capabilities of their GNC algorithm. For example, Yang aims to design perception algorithms that can be certified to be correct. And Antonante is finding ways to manage inconsistency across different algorithms: if the SLAM module in an autonomous vehicle says the road goes straight, but the lane-detection module says it bends right, you have a problem.
Tzoumas is looking at how to scale up not just to interaction between multiple algorithms in one robot, but also to collaboration between multiple robots. In earlier work, he programmed flying drones to track targets, such as criminals trying to escape on foot or by car. Going forward, multiple machines could perhaps run the GNC algorithm collectively. Each would contribute partial information to its neighbors, and together they would build a global map—of locations on Earth or elsewhere. This year he’s moving to the aerospace engineering department at the University of Michigan to work on trustworthy autonomy for multi-robot planning and self-navigation—even in difficult environments, such as battlefields and other planets.
“Not knowing how AI and perception algorithms will behave is a huge deterrent for using them,” Antonante says. He notes that robotic museum guides won’t be trusted if there’s a chance they’ll crash into visitors or the Mona Lisa: “You want your system to have a deep understanding of both its environment and itself, so it can catch its own mistakes.” The GNC algorithm is the new benchmark in allowing robots to catch their own mistakes, and, most importantly, as Tzoumas says, “it helps you get out of the woods.”