Image: Two farmers stand in a field with a drone flying above them.

Futurists imagine skies someday buzzing with autonomous drones that use artificial intelligence algorithms to monitor traffic, deliver packages, and keep tabs on the world in myriad other ways. Lacking the energy or computational horsepower for the intense mathematics required to analyze all that information, many drones must send visual data to massive central servers that crunch the numbers and beam back results whenever the drones are uncertain about what they are seeing. Unfortunately, transferring this data takes precious time and bandwidth that can slow a drone down or even stop it dead in its tracks.

Writing in a paper appearing in the journal Autonomous Robots, researchers at Stanford University introduce a new algorithm that helps these drones, and other low-power robots, whether airborne, underwater, or on the ground, decide when and when not to off-load their AI tasks. In simulations and experiments, their strategy considerably improved performance on key vision tasks, in some cases reaching nearly two and a half times the benchmarks. In the end, that means safer robots and autonomous vehicles able to make key decisions in short order.

Read the study: Network Offloading Policies for Cloud Robotics: A Learning-based Approach

“It’s a win-win,” said Sandeep Chinchali, now an assistant professor at the University of Texas at Austin, who led the research while completing his PhD at Stanford. “The drones can significantly improve their sensing accuracy while minimizing the costs of cloud communication.”

What Data to Off-Load

In such autonomous applications, there are two types of visual data analysis. One occurs in real time and helps a drone navigate and avoid accidents. The second, called continual learning, helps the drone improve its recognition skills when confronted with new or confusing information. This second analysis is more computationally intensive and may require human intervention to annotate imagery or identify objects and actions that are new to the robot.

The research team focused on this second type of analysis, since its off-loading is necessary for human review. The researchers recognized that drones should off-load data only when absolutely necessary or particularly beneficial, but making that call is not always easy. They therefore built an algorithm that takes into account network conditions, such as available bandwidth and the amount of data to be transferred, as well as the relative novelty of the information gathered, to help the robots make these key off-loading decisions.
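To make the inputs to such a decision concrete, here is a minimal sketch in Python. All names, thresholds, and weights are hypothetical; the paper's actual policy is learned rather than hand-tuned, but the rule below illustrates the same trade-off between novelty and transfer cost.

```python
# Hypothetical sketch of an off-loading decision rule. The real policy in
# the paper is learned with deep reinforcement learning; this hand-tuned
# threshold version only illustrates the inputs the article describes.

def should_offload(novelty: float, data_mb: float, bandwidth_mbps: float,
                   novelty_threshold: float = 0.8,
                   max_transfer_s: float = 2.0) -> bool:
    """Off-load only when the data looks novel enough to be worth a human
    annotator's time AND the network can move it quickly enough."""
    transfer_s = (data_mb * 8) / bandwidth_mbps  # upload time in seconds
    return novelty >= novelty_threshold and transfer_s <= max_transfer_s

# A 4 MB frame over a 20 Mbps link with a high novelty score: off-load it.
print(should_offload(novelty=0.9, data_mb=4, bandwidth_mbps=20))  # True
```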

They worked toward making this off-loading efficient. Ideally, they say, a robot should need to upload only 1 percent of its visual data each day to help retrain its model. Given that Intel has estimated that a self-driving car can generate more than four terabytes of data in 90 minutes of driving, even 1 percent of that total (40 gigabytes from a single robot) is a lot of data to transfer, annotate, and retrain upon. The challenge will only grow as autonomous vehicles proliferate.

“Imagine the challenge for a robot on Mars that can only send data back to Earth at 5 megabytes per second. That’s about three-and-a-half minutes per gigabyte,” Chinchali offered, putting the long-term challenge in perspective. At that rate, it could take hours to send back a few minutes of video.
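A quick back-of-the-envelope check of those figures, sketched in Python (decimal units are assumed throughout; the quoted three-and-a-half-minute figure comes from using binary gigabytes instead):

```python
# Back-of-the-envelope check of the data volumes and transfer times above.
car_data_gb = 4 * 1000                  # ~4 TB generated in 90 min of driving
daily_upload_gb = car_data_gb * 0.01    # the 1 percent target: 40 GB

mars_rate_mb_s = 5                      # Mars-to-Earth link, MB/s
s_per_gb = 1000 / mars_rate_mb_s        # 200 s, roughly 3.3 minutes per GB
hours_for_upload = daily_upload_gb * s_per_gb / 3600  # ~2.2 hours for 40 GB

print(f"{daily_upload_gb:.0f} GB/day, {s_per_gb / 60:.1f} min/GB, "
      f"{hours_for_upload:.1f} h to transmit at Mars rates")
```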

“Time Is the Currency”

In earlier approaches to the problem, researchers under-emphasized the time costs of data transfer in their algorithms, noted Marco Pavone, professor of aeronautics and astronautics and senior author of the paper.

By employing deep reinforcement learning algorithms, the research team was able to weigh varied and rapidly changing network conditions against the speed and accuracy trade-offs between on-robot and in-cloud computation, and arrive at an optimal off-loading policy. The new strategy allows robots to smartly and sparingly tap into the cloud to improve perception while reducing demands on data channels.

“You can view things as an economics challenge,” said Chinchali. “Time is the currency. Accuracy is the goal. The total cost is the cumulative network round-trip time to transmit the data and compute time to process it.”
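Chinchali's framing suggests a simple decision-theoretic cost, sketched below in Python. The function, weights, and example numbers are all assumptions for illustration; the paper's learned policy optimizes a related objective over sequences of decisions rather than a single step.

```python
# Hypothetical per-decision cost echoing the quote: accuracy is the goal,
# and network round-trip plus compute time is the price paid for it.
def step_cost(accuracy_gain: float, rtt_s: float, compute_s: float,
              time_weight: float = 0.5) -> float:
    """Lower is better. time_weight sets how much accuracy one second
    of delay is worth; it is an illustrative knob, not a paper value."""
    return time_weight * (rtt_s + compute_s) - accuracy_gain

# Off-loading buys a larger accuracy gain but pays the network round-trip.
cloud = step_cost(accuracy_gain=0.30, rtt_s=0.8, compute_s=0.1)
local = step_cost(accuracy_gain=0.10, rtt_s=0.0, compute_s=0.3)
print("off-load" if cloud < local else "stay local")  # here: stay local
```

On a faster network (say rtt_s=0.2), the same comparison flips in favor of off-loading, which is exactly the kind of condition-dependent trade-off the learned policy captures.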

"Today's robots are increasingly turning to compute-and-power-intensive perception and control models, such as deep neural networks,” Pavone said. “Moreover, they are measuring terabytes of rich sensory data. Our algorithms for cloud robotics allow us to scale the adoption of cheap, low-power, yet intelligent robots that selectively augment their real-time inference and continual learning using the cloud. The key idea is economic and decision-theoretic—we allow robots to gracefully trade off cloud accuracy with systems costs."

The researchers believe this is the first work of its kind to formulate the cloud off-loading problem as a sequential decision-making problem under uncertainty. Next up, the researchers say, there are many theoretical and practical avenues to pursue in optimizing these processes. In fact, this paper is only the latest in a series of papers (including work on collaborative perception, collaborative learning, and task-driven video streaming) by Chinchali and colleagues exploring the possibilities of AI in the field. Their algorithms will allow groups of robots to share perception and control models both locally and in the cloud, while efficiently communicating only task-relevant data to improve speed and reliability.

These questions will only grow more pressing as visually aware drones, autonomous cars, and robots become more commonplace, Chinchali said, adding, “Algorithms like ours allow robots to flexibly balance the trade-offs in accuracy and communication costs to continually improve robotic perception for that future.”
