Machine learning and artificial intelligence systems have advanced significantly in recent years. However, they remain limited to the tasks they are specifically designed to perform and cannot adapt when they encounter situations outside their programming or training. DARPA's Lifelong Learning Machines (L2M) program, drawing inspiration from biological systems, seeks to develop fundamentally new ML approaches that allow systems to adapt continually to new circumstances without forgetting previous learning.
First announced in 2017, DARPA's L2M program has selected the research teams that will work under its two technical areas. The first technical area focuses on the development of complete systems and their components; the second explores learning mechanisms in biological organisms, with the goal of translating them into computational processes. Discoveries in both technical areas are expected to generate new methodologies that will allow AI systems to learn and improve during tasks, apply previous skills and knowledge to new situations, incorporate innate system limits, and enhance safety in automated assignments.
The L2M research teams are now focusing their diverse expertise on understanding how a computational system can adapt to new circumstances in real time without losing its previous knowledge. One group, the team at the University of California, Irvine, plans to study the dual-memory architecture of the hippocampus and cortex. The team seeks to create an ML system capable of predicting potential outcomes by comparing inputs to existing memories, which should allow the system to become more adaptable while retaining prior learning. The Tufts University team is examining a regeneration mechanism observed in animals such as salamanders to create flexible robots capable of altering their structure and function on the fly to adapt to changes in their environment. Adapting methods from biological memory reconsolidation, a team from the University of Wyoming will work on developing a computational system that uses context to identify appropriate modular memories, which can be reassembled with new sensory input to rapidly form behaviors suited to novel circumstances.
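To make the dual-memory idea concrete, here is a minimal sketch in the spirit of complementary learning systems. Everything in it, from the class name to the similarity threshold, is an illustrative assumption rather than a detail of the UC Irvine team's design: a fast episodic store memorizes new examples in one shot, while a slow parametric model absorbs regularities gradually.

```python
import numpy as np

# Minimal sketch of a generic dual-memory learner, loosely inspired by the
# hippocampus/cortex split. All names and numbers here are illustrative
# assumptions, not details of the UC Irvine team's system.

class DualMemoryLearner:
    def __init__(self, dim, lr=0.05, match_threshold=0.9):
        self.w = np.zeros(dim)            # slow "cortical" weights
        self.keys, self.values = [], []   # fast "hippocampal" episodic store
        self.lr = lr
        self.match_threshold = match_threshold

    def predict(self, x):
        # Compare the input against stored memories (cosine similarity);
        # recall an episode if one is close enough, else generalize slowly.
        if self.keys:
            sims = np.array([k @ x / (np.linalg.norm(k) * np.linalg.norm(x) + 1e-9)
                             for k in self.keys])
            best = int(sims.argmax())
            if sims[best] > self.match_threshold:
                return self.values[best]
        return float(self.w @ x)

    def learn(self, x, y):
        self.keys.append(x)               # one-shot episodic write
        self.values.append(y)
        self.w += self.lr * (y - self.w @ x) * x   # gradual cortical update

rng = np.random.default_rng(1)
learner = DualMemoryLearner(dim=4)
x = rng.normal(size=4)
learner.learn(x, 1.0)
print(learner.predict(x))    # exact episodic recall: 1.0
print(learner.predict(-x))   # no close memory (cosine -1), slow model answers
```

The point of the split is that the episodic store lets the system adapt immediately to a novel input without overwriting the slow weights that encode older knowledge.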
"With the L2M program, we are not looking for incremental improvements in state-of-the-art AI and neural networks, but rather paradigm-changing approaches to machine learning that will enable systems to continuously improve based on experience," says Hava Siegelmann, the program manager leading L2M. "Teams selected to take on this novel research are comprised of a cross-section of some of the world's top researchers in a variety of scientific disciplines, and their approaches are equally diverse."
While still in its early stages, the L2M program has already seen results from a team led by Hod Lipson at Columbia University's Engineering School. Lipson and his team recently identified and solved challenges associated with building and training a self-replicating neural network, describing their findings in the preprint "Neural Network Quine," published on arXiv. While neural networks can be trained to produce almost any kind of pattern, training a network to reproduce its own structure is paradoxically difficult: as the network learns, its weights change, so the target it is trying to reproduce continuously shifts. The team's continued efforts will focus on developing a system that can adapt and improve by using knowledge of its own structure. "The research team's work with self-replicating neural networks is just one of many possible approaches that will lead to breakthroughs in lifelong learning," says Siegelmann.
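The moving-target difficulty is easy to demonstrate in a toy setting. The sketch below is a hypothetical numpy illustration, not the architecture from the paper: a small network is trained to regress the value of its own i-th weight from a fixed random encoding of the index i, and because every gradient step changes the weights, the regression targets shift under the optimizer.

```python
import numpy as np

# Toy illustration of the moving-target problem in a neural-network quine
# (a hypothetical setup for illustration, not the paper's architecture).
# A tiny two-layer net is trained so that its output for index i matches
# the current value of its own i-th weight. Each update changes the
# weights, so the regression targets shift as training proceeds.

rng = np.random.default_rng(0)

H = 16                                 # hidden width
W1 = rng.normal(0, 0.1, (H, H))        # trainable layer 1
w2 = rng.normal(0, 0.1, H)             # trainable layer 2
n_params = W1.size + w2.size

# Fixed random encoding of each weight index (an assumed trick to keep
# the input layer itself from adding more weights that need predicting).
E = rng.normal(0, 1.0, (n_params, H))

def flat_params():
    return np.concatenate([W1.ravel(), w2])

lr = 1e-2
for step in range(2001):
    targets = flat_params()            # held constant within this step (stop-gradient)
    h = np.tanh(E @ W1)                # hidden activations, one row per weight index
    pred = h @ w2                      # predicted value of each weight
    err = pred - targets

    # Backpropagate the squared error through both layers.
    grad_w2 = h.T @ err
    dh = np.outer(err, w2) * (1 - h**2)
    grad_W1 = E.T @ dh

    W1 -= lr * grad_W1 / n_params
    w2 -= lr * grad_w2 / n_params

    if step % 500 == 0:
        print(f"step {step:4d}  self-reconstruction MSE: {np.mean(err**2):.6f}")
```

Treating the current weights as constants within each step (the stop-gradient noted in the comments) keeps each update well defined even though the targets drift from step to step.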
"We are on the threshold of a major jump in AI technology," Siegelmann says. "The L2M program will require significantly more ingenuity and effort than incremental changes to current systems. L2M seeks to enable AI systems to learn from experience and become smarter, safer, and more reliable than existing AI."