A team of firefighters clears a building in a blazing inferno, searching rooms for people trapped inside or hotspots that must be extinguished. Except this isn't your typical crew. Most apparent is the fact that the firefighters aren't all human. They're working side by side with artificially intelligent (AI) robots that are searching the most dangerous rooms and making life-or-death decisions.
This scenario may not be far off. While AI-equipped robots might be technologically capable of rendering aid, sensing danger, or providing protection for their flesh-and-blood counterparts, they can only be valuable to humans if their operators aren't burdened with the task of guiding them.
A team of researchers at Lawrence Livermore National Laboratory (LLNL) is responding to the need by investing in "collaborative autonomy," a broad term describing a network of humans and autonomous machine partners that interact and share information and tasks efficiently, without distracting the human operator.
"The idea with collaborative autonomy is not the human flying the drone, it's the human in control in the sense of guiding the mission or the task," says LLNL engineer Reg Beer, who is heading the Lab's collaborative autonomy effort. "The goal is to employ robotic partners with the ability to direct an autonomous squad-mate and have that squad-mate go achieve something without having to be teleoperated or with intense oversight."
Reaching that level of human-machine cooperation requires trust, Beer says—the confidence that machines will not only perform their assigned tasks without going off-script, but will also be able to report back that they're not functioning properly or that their environment has changed too much for them to gauge it properly.
"We want a machine-based system that is not an unexplainable artificial intelligence system, because we won't as a society trust something we can't understand," Beer says. "Humans have to see a reason and a logic to things, and if the machine seems unpredictable to us, if we don't understand why it's making its decisions, it won't be adopted. We want to trust that it's going to perform and function optimally and if it makes a mistake, we're going to have a basis to explain the mistake."
Beer and other LLNL researchers are working to create a coordinated and distributed smart network of "nodes," or machines with AI capability, which could be applied to any type of autonomous vehicle, drone, or robot that might need to network and perform detection missions. The Lab has begun two Laboratory Directed Research and Development programs initiated by the Engineering Directorate—one exploring a decentralized network, where intelligence and sensor data are shared among machines, and another looking at a belief network, where each machine or node can calculate the probability of detection based on observations.
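The article doesn't give the math behind either program, but the belief-network idea maps naturally onto a simple Bayesian update, with each node folding its own observations into a running probability of detection. The sketch below is a minimal, hypothetical illustration of that idea; the function name and sensor-model numbers are invented for the example, not drawn from the Lab's projects.

```python
def update_detection_belief(prior, hit, p_hit_given_target, p_hit_given_clutter):
    """Bayesian update of one node's belief that a target is present.

    prior: probability the node currently assigns to "target present"
    hit: True if the sensor registered a detection this timestep
    p_hit_given_target / p_hit_given_clutter: hypothetical sensor model,
    i.e., the chance of a hit when a target is (or is not) really there
    """
    if hit:
        like_target, like_clutter = p_hit_given_target, p_hit_given_clutter
    else:
        like_target, like_clutter = 1.0 - p_hit_given_target, 1.0 - p_hit_given_clutter

    numerator = like_target * prior
    evidence = numerator + like_clutter * (1.0 - prior)
    return numerator / evidence

# A node starts agnostic and folds a sequence of noisy hits/misses into its belief.
belief = 0.5
for hit in [True, True, False, True]:
    belief = update_detection_belief(belief, hit, 0.8, 0.1)
print(round(belief, 3))  # probability of detection after four observations
```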
"The ultimate objective of these efforts is to develop the algorithms and the computing capabilities that enable an adaptive network of mobile and autonomous platforms that collaborate in real time to construct an actionable 'picture' of the operating environment," says LLNL Associate Director for Engineering Anantha Krishnan. "The realization of this objective requires bringing together LLNL's world-class leadership in sensors and algorithms, machine intelligence, networking, and high performance computing."
For a situation such as detecting an explosive device on a battlefield without putting humans in harm's way, ideally, if the best-positioned drone or sensor were destroyed or compromised, the others would be able to rearrange themselves and fill in the gaps in information. A project led by LLNL researcher Ryan Goldhahn is looking at the mathematical framework for processing data over large sensor networks, so the AI machines can figure out what to sense and how to communicate with the rest of the network, in a way that doesn't rely on one central node.
Goldhahn is investigating a decentralized architecture, where decision-making is pushed to the individual nodes. Each one senses something slightly differently, so while one AI machine might not have all the answers, by sharing data amongst themselves, they can collectively come to a better decision.
"We don't want nodes to be 'dumb' sensors that go out and collect data, and send their data to a fusion center that interprets all the data, and directs the network over the next timestep," Goldhahn says. "The idea is for the individual nodes themselves to judiciously choose what data to sense and what they should send to other nodes—you don't want your network to rely on one node, which is a potential vulnerability. If the intelligence is pushed out to the individual nodes, and each of those nodes is making a local decision, then if one node gets destroyed, the others can compensate, and the performance of the entire network degrades gracefully."
Due to bandwidth limits, Goldhahn says, sensors or nodes can't always communicate all the data they collect, and should share only those components needed to reach a decision. Additionally, if all the data has to be sent to a central node and then re-transmitted, it could lead to bottlenecks. Further complicating matters is that most nodes are low-powered (on the order of a cellphone, for example), and many current solutions apply only to a small number of simple nodes or a single vehicle; they become impractical as the number of nodes grows.
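A toy example of that bandwidth point, with made-up numbers: rather than streaming raw samples, a node can transmit a compact summary that is sufficient for the decision at hand.

```python
# Instead of transmitting every raw sample, a node sends a small summary that
# is sufficient for the decision. For estimating an average count rate,
# (number of samples, running sum) is enough.
raw_samples = [12, 9, 15, 11, 14, 10]           # e.g., counts per second
summary = (len(raw_samples), sum(raw_samples))  # 2 numbers instead of 6

def merge(summaries):
    """Combine per-node summaries into a network-wide mean estimate."""
    total_n = sum(n for n, _ in summaries)
    total_sum = sum(s for _, s in summaries)
    return total_sum / total_n

# Three nodes each send their 2-number summary rather than raw data streams.
print(merge([(6, 71), (4, 52), (8, 90)]))
```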
"Autonomous sensor networks are only as flexible as how many choices you give them for the next step," Goldhahn says. "As you add more sensors or make nodes more flexible, the problem complexity grows exponentially. You can't guarantee that solvable approximations like greedy solutions, where each node does what's best for itself, will converge to something that's globally optimal. That's really the challenge."
Goldhahn says a decentralized network could work with any kind of sensor, making it applicable to a broad range of problems at the Lab, including detecting improvised explosives or radiation devices. Within a year, Goldhahn says, he would like to show that the new framework can be significantly more effective than current approaches for very large networks of autonomous vehicles.
Picture a battlefield with a fleet of autonomous drones searching for a tank. The drone in the front clearly sees the tank and is deemed to be an "expert," but because it's closest to the target, it gets destroyed. The other drones don't see the tank quite so clearly, so their decision on what to do next is clouded. Now what?
While it might make sense to combine the data the rest of the drones have sensed to make the next collective decision, the answer isn't as linear as you might think, says LLNL researcher Gerald Friedland, who is heading the Laboratory Directed R&D project looking at belief networks.
The sum data the other nodes have might actually be worse than the one expert, so it might make more sense, Friedland says, "to save whatever decision the expert had, and for the rest of the drones to believe the information coming from the expert, because their cumulative guesses are going to be so much worse than the cached version of that expert."
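One way to read that idea, sketched below with invented numbers, is a weighted fusion in which the destroyed expert's last reported belief is cached and given more influence than the survivors' hazier guesses. This is an illustration of the concept, not Friedland's actual method.

```python
def fuse_beliefs(reports, cached_expert=None, expert_weight=5.0):
    """Weighted fusion of per-drone 'target present' probabilities.

    reports: {drone_id: probability} from the drones still alive
    cached_expert: last probability reported by a destroyed expert, if any
    Rather than discarding the expert's view when it goes offline, its
    cached belief is folded in with a larger weight.
    """
    weighted, total = 0.0, 0.0
    for prob in reports.values():
        weighted += prob
        total += 1.0
    if cached_expert is not None:
        weighted += expert_weight * cached_expert
        total += expert_weight
    return weighted / total

# Surviving drones only glimpse the tank; the destroyed lead drone saw it clearly.
survivors = {"d2": 0.35, "d3": 0.4, "d4": 0.3}
print(fuse_beliefs(survivors))                      # averaging alone: 0.35
print(fuse_beliefs(survivors, cached_expert=0.95))  # expert's cached view dominates
```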
Friedland has discovered the importance of experts through generalized machine learning, which he has used, with promising results, to trace the origin of videos from the metadata embedded in them. With video location, he says, some nodes have proven to consistently make better-educated guesses about where the videos originated.
"Sometimes you have initial guesses that are really good and you have some nodes that are really experts because they hit the jackpot and had the right answer," Friedland says. "What we see is that you've got to listen to the experts. Some nodes just have the right idea. The question is how do we identify these experts, and say these are the machines we need to listen to? It's this sort of expert selection that we need."
While human and animal societies have standardized ways of determining experts (think university degrees, or the dominant alpha in a wolf pack), no strategies yet exist to identify expert machines, Friedland says. He is experimenting with methods to determine experts and ways to "teach" the other machines to follow them. By the project's end, his goal is to be able to present rules and laws for how drones communicate and ultimately implement them into hardware.
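A common way to "listen to the experts" in machine learning is a weighted-majority-style scheme: every node starts with equal influence, and nodes whose guesses turn out wrong have their weight cut each round, so consistently correct nodes emerge as the experts. The sketch below, with hypothetical nodes and outcomes, illustrates that mechanism; it isn't necessarily the approach Friedland will adopt.

```python
def update_weights(weights, predictions, truth, penalty=0.5):
    """One round of a weighted-majority-style scheme: nodes whose guesses
    turn out wrong have their influence cut; consistently right nodes
    emerge as the 'experts' the rest of the network should listen to."""
    return {node: (w if predictions[node] == truth else w * penalty)
            for node, w in weights.items()}

nodes = ["n1", "n2", "n3", "n4"]
weights = {n: 1.0 for n in nodes}

# Hypothetical history: each round, every node guesses and the truth is revealed.
history = [
    ({"n1": 1, "n2": 0, "n3": 1, "n4": 0}, 1),
    ({"n1": 1, "n2": 1, "n3": 0, "n4": 0}, 1),
    ({"n1": 0, "n2": 0, "n3": 1, "n4": 1}, 0),
]
for predictions, truth in history:
    weights = update_weights(weights, predictions, truth)

print(weights)  # n1 keeps full weight -- it is behaving like an expert
```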
When it comes to the hardware in which these networks might be implemented, Beer says researchers are "platform agnostic." The idea, he says, is to develop a system that could work with any machine capable of intelligently sensing and/or detecting, such as ground-based robots, remote-controlled vehicles, or aerial drones.
To test their algorithms, Beer's group is running them in large network simulators capable of visualizing data from thousands of nodes. In one example, they recently published a conference paper on using intelligent collaborative sensors to locate a radioactive source. Because the focus of the collaborative autonomy program is general purpose, Beer says, the work could prove beneficial to numerous areas at the Lab, including data science, high-performance computing on mobile or embedded processors, and sensor integration.
Belief networks, he says, could be used for anything that requires a machine learning approach to decision making and classifying, such as energy grid security, medical or lifesaving applications, and nuclear materials.
"We already handle the eyes and ears of a lot of autonomous sensing and now we're getting into more of the algorithms that make up the AI presence for those [applications]," Beer says. "It's not like we're having to build from scratch. A lot of those capabilities are already here at the Lab."