Trojans win student design competition with autonomous robots that “learn” – USC Viterbi


The students’ research imagines a future where rovers could work as a team on distant planets far from home. Photo/iStock.

What if a team of rovers could roam the landscape of an alien planet in search of water or signs of life completely autonomously, millions of miles from Earth?

Award-winning research by four USC Viterbi students imagines the near future where this is possible, using a promising technology called edge computing. This method, which forgoes centralized servers, allows autonomous robots to process everything they see or do on the spot.

The team’s entry won first place at the 2nd Student Design Competition on Networked Computing on the Edge, an international competition held in conjunction with CPS-IoT Week in May that focused on potential future applications of edge computing. The students demonstrated how a graph convolutional neural network, or GCN, could increase the computational efficiency of robots that don’t rely on a central server. In other words, a robot can solve problems or figure out what to do much faster by “learning” from previous calculations and working with other robots nearby.

“As these robots perform their tasks, they collaborate with each other to perform all the necessary calculations,” said Bhaskar Krishnamachari, professor of electrical and computer engineering and the team’s academic advisor. “The project focuses on how these robots can best allocate their resources to work together when performing complex calculations.”

Learning on the sidelines

The group met in the fall of 2021, when Krishnamachari brought together an enthusiastic undergraduate, Daniel D’Souza, and three of his doctoral students – Mehrdad Kiamari, Lillian Clark and Jared Coleman – to tackle a particular facet of computing called task scheduling.

Much like a human, an autonomous robot can break a larger task down into a series of smaller steps, or task schedule, to determine what it should prioritize or do first. However, many of the functions of these robots, such as exploration or military reconnaissance, are too complicated to fit a simple step-by-step process. So the team worked with a “task graph,” in which tasks branch out into a network of dependencies.
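To make the idea concrete, a task graph is a dependency graph: each task can run only once the tasks it depends on have finished, and independent branches can run in parallel. A minimal sketch in Python, using the standard library’s topological sorter – the rover task names and dependencies here are hypothetical, invented purely for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph for a rover: each task maps to the set of
# tasks it depends on. "analyze_soil" and "map_terrain" branch off the
# same parent and could be scheduled in parallel on different robots.
task_graph = {
    "capture_image": set(),
    "analyze_soil": {"capture_image"},
    "map_terrain": {"capture_image"},
    "plan_route": {"map_terrain"},
    "report_home": {"analyze_soil", "plan_route"},
}

# A topological order is one valid sequential schedule: every task
# appears after all of its dependencies.
order = list(TopologicalSorter(task_graph).static_order())
print(order)
```

A scheduler’s job is to pick among the many valid orders (and among robots) to minimize total completion time, which is what makes the problem hard at scale.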

“For real-world applications like genome sequencing, there are many thousands of tasks to perform,” explained Kiamari, a Ph.D. student in electrical engineering.

To reduce the time it takes a robot to figure out what to do when faced with an extremely complex set of tasks, Krishnamachari and Kiamari developed GCNScheduler, software that uses a neural network to “learn” how to schedule tasks faster.
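The core idea of a graph convolutional network is that each task node blends information from its neighbors in the task graph before any decision is made about it. A toy sketch of that mechanism in plain Python – the features, weights and robot names are invented for illustration, and the real GCNScheduler learns its weights from data rather than using hand-set values:

```python
# Undirected neighbor lists for message passing over a tiny task graph.
neighbors = {
    "capture": ["analyze", "map"],
    "analyze": ["capture"],
    "map": ["capture"],
}

# Per-task feature vector: [compute_cost, data_size] (toy numbers).
features = {
    "capture": [1.0, 4.0],
    "analyze": [3.0, 1.0],
    "map": [2.0, 2.0],
}

def gcn_layer(feats, nbrs):
    """One graph-convolution step: each node's new feature is the mean
    of its own feature and its neighbors' features."""
    out = {}
    for node, f in feats.items():
        group = [f] + [feats[n] for n in nbrs[node]]
        out[node] = [sum(col) / len(group) for col in zip(*group)]
    return out

embedded = gcn_layer(features, neighbors)

def score(embedding, weights):
    """Linear score of an embedding under one robot's weight vector."""
    return sum(e * w for e, w in zip(embedding, weights))

# Hand-set robot weights standing in for a trained output layer.
robot_weights = {"robot_a": [0.8, 0.1], "robot_b": [0.2, 0.9]}
assignment = {
    task: max(robot_weights, key=lambda r: score(emb, robot_weights[r]))
    for task, emb in embedded.items()
}
print(assignment)
```

Because the learned network produces a schedule in a single forward pass, it sidesteps the slow combinatorial search that classical schedulers perform, which is the speedup the team exploited.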

This increased planning speed is ideal for the more advanced functions of an interstellar drone, such as analyzing images or mapping a path, according to Clark, a Ph.D. student in electrical engineering.

“The planner gets the most out of graph structures, like in the case of image detection,” Clark said. “It’s really useful if you’re doing something like looking for signs of life or melting water on another planet.”

Put to a test

The team demonstrated the effects of the scheduler with four mobile “TurtleBots,” small rover-like robots that run on a simple Raspberry Pi interface. With the GCNScheduler software installed, a bot would stop scheduling tasks after moving out of communication range – which, at this scale, was the hallway.

Signal strength between robot “nodes” fluctuates in real-world scenarios, and the experiment was designed to show how the GCNScheduler would respond. This adaptation is essential for robots programmed to perform more sophisticated actions, such as surveillance or facial recognition, where they may be constantly in motion or need to perform extremely complicated functions.

Scheduler applications have potential in a myriad of industries as autonomous robots take on more responsibilities, making the choice to combine GCNScheduler with edge computing an obvious one, according to D’Souza, a computer engineering and computer science student.

“Autonomous robots are going to be used to perform security facial recognition, baggage handling services, crop coordination, military reconnaissance,” D’Souza said. “They’re popping up in all these different areas, and they’re going to be used more and more in the real world.”

Posted on August 16, 2022

Last updated August 16, 2022
