
Communications of the ACM

Viewpoint

Fragility in AIs Using Artificial Neural Networks



Artificial neural networks (ANNs) are a promising technology for supporting decision making and autonomous behavior. However, most current-day ANNs suffer from a fault that hinders them from achieving their promise: fragility. Fragile AIs can easily make faulty recommendations and decisions, and even execute faulty behavior, so reducing ANN fragility is highly desirable. In this Viewpoint, we describe issues involved in resolving ANN fragility and possible ways to do so. Our analysis is based on what is known about natural neural networks (NNNs), that is, animal nervous systems, as well as what is known about ANNs.


What Is Fragility?

Engineered systems fall on a continuum from robust to fragile. One reason some ANNs are considered "fragile" is that seemingly minor changes in the data they are given can cause major shifts in how they classify the data. Such fragility is often evident when lab-trained ANNs are finally tested under real-world conditions. A classic example of hyper-fragility is an ANN that, after being trained to recognize images of stop signs, fails to recognize ones in which a small percentage of pixels has been altered.

Inputs: Passively received vs. actively gathered. First, it must be recognized that fragility is not unique to ANNs. NNNs, for example, animal brains, can also be confused by minor alterations of inputs. All animals can be tricked by well-crafted or naturally arising "lures." Synthesized optical illusions are a prime example. In Figure 1, minor shifts in the relative positions of light and dark areas within two nearby bands cause the human visual system to see either spirals (left) or concentric circles (right).

Figure 1. Concentric circles appear as spirals, or not, depending on the orientation of every 4th small rectangle.

However, unlike most ANNs, human sensory systems ride on mobile segments of an actor. Shown the illusion in Figure 1, many people vary their angle of regard and scan pattern, and some may trace a finger along one of the "spirals," and thereby discover they have two unlinked circles, not a spiral. Similarly, people and other animals often do "double-takes" when uncertain or when they perceive something novel or unexpected given a context. In so doing, they are actively gathering additional data to compensate for their visual system's fragility.

Stated more generally, when constrained to be completely passive, NNNs can be fragile, but when they output a categorization with a low confidence level, the containing executive system typically attempts to gather more data. In contrast, most ANNs are implemented as purely passive receivers of data, with no containing executive, so we should not be surprised they often exhibit fragility. This suggests one powerful way to reduce fragility in ANNs: wrap them in executive systems that trigger active ("Gibsonian") data gathering [8] under specified conditions.

More sophisticated systems would detect more of these types of conditions. Perhaps the most general, and easiest to exploit, are confidence levels of the ANN's classifications: The executive can use low confidence levels to trigger second takes. Because ANNs are based on numerical scoring, it is straightforward to add a confidence metric and a threshold level, below which the AI would engage active information gathering. In other words, to reduce fragility, AIs need at least the ability to measure the degree of certainty of their perceptual decisions and, when certainty is low, do double-takes, for example, activate usually immobile or dormant sensors, or secondary data sources. Nonfragile AIs, by design, will withhold action whenever uncertainty cannot be resolved.
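
To make this concrete, the following minimal Python sketch (our own illustration, not drawn from any particular system) wraps a classifier in a hypothetical executive that measures confidence and triggers double-takes; the names Executive, classify, and gather_more_data are assumptions, standing in for an ANN and an active-sensing routine.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

class Executive:
    def __init__(self, classify, gather_more_data, threshold=0.9, max_takes=3):
        self.classify = classify                    # function: input -> logits over labels
        self.gather_more_data = gather_more_data    # "double-take": re-sample a sensor, move a camera, etc.
        self.threshold = threshold                  # confidence required before acting
        self.max_takes = max_takes

    def decide(self, x):
        evidence = [x]
        confidence = 0.0
        for _ in range(self.max_takes):
            # Pool the evidence gathered so far (here: average the logits across takes).
            logits = np.mean([self.classify(e) for e in evidence], axis=0)
            probs = softmax(logits)
            label, confidence = int(np.argmax(probs)), float(probs.max())
            if confidence >= self.threshold:
                return label, confidence              # confident enough: commit to the label
            evidence.append(self.gather_more_data())  # uncertain: trigger a double-take
        return None, confidence                       # uncertainty unresolved: withhold action
```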

Confidence levels can be measured with purely feedforward ANNs, by computing an answer to this question: How strong, relative to correct past classification episodes, is the current evidence for applying this label? But more robustness could be achieved in ANNs augmented by feedback, which could also compute a certainty-relevant metric to answer the inverse question: Given this provisional label, how well does the current input dataset match the typical input that activated this label on successful past classifications?
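
As a hedged sketch of these two metrics (our simplified formulation, not a published method), the hypothetical ConfidenceTracker below logs scores and input prototypes from past correct classifications; the feedforward metric asks how the current score compares with that history, and the feedback metric asks how well the current input matches the stored prototype of the provisionally chosen label.

```python
import numpy as np

class ConfidenceTracker:
    def __init__(self, n_classes, input_dim):
        self.score_history = [[] for _ in range(n_classes)]   # winning scores on past correct trials
        self.prototypes = np.zeros((n_classes, input_dim))    # running mean input per label
        self.counts = np.zeros(n_classes)

    def update_on_correct(self, label, score, x):
        """Record one correctly classified episode for this label."""
        self.score_history[label].append(score)
        self.counts[label] += 1
        # Running mean of the inputs that correctly activated this label.
        self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]

    def feedforward_confidence(self, label, score):
        """How strong is the current evidence, relative to past correct episodes for this label?"""
        history = self.score_history[label]
        if not history:
            return 1.0                                   # no history yet: default optimistically
        past = np.array(history)
        return float(np.mean(score >= past))             # fraction of past correct scores matched or exceeded

    def feedback_confidence(self, label, x):
        """Given this provisional label, how well does the input match its stored prototype?"""
        proto = self.prototypes[label]
        denom = np.linalg.norm(x) * np.linalg.norm(proto)
        return float(x @ proto / denom) if denom > 0 else 0.0   # cosine similarity
```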


Autonomous driving AIs and aerial drone AIs are inherently equipped to improve greatly on passive processing of the data streams from their sensor suites, because they move. Movement creates image sequences, providing rich information for disambiguation, but further robustness requires making the movements responsive to the confidence/uncertainty metrics produced within the ANNs.

Note also that the uncertainty threshold for engaging double-takes can be adaptive. It can be reduced in contexts where lures are unlikely, but increased in contexts where lures are a common threat to accurate classifications. This feature of an AI would allow it to mimic the flexible way that humans trade off speed (little or no pausing for double-takes) and accuracy (ample time devoted to active information gathering). There is now impressive evidence that uncertainty causes humans to switch from a fast habitual mode of decision making to a slower, more deliberative mode [1]. This is a hallmark of executive control, and nonfragile AIs must mimic humans in this regard.
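
A minimal sketch of such an adaptive threshold, assuming a simple illustrative update rule of our own devising: contexts where lures have recently fooled the system get a higher bar (more double-takes), while benign contexts drift back toward a lower one.

```python
class AdaptiveThreshold:
    def __init__(self, base=0.7, low=0.5, high=0.95, step=0.05):
        self.base, self.low, self.high, self.step = base, low, high, step
        self.per_context = {}                        # context id -> current threshold

    def threshold(self, context):
        return self.per_context.get(context, self.base)

    def report_outcome(self, context, was_fooled):
        t = self.threshold(context)
        # Raise the bar after being fooled in this context; relax it slowly otherwise.
        t = t + self.step if was_fooled else t - self.step / 4
        self.per_context[context] = min(self.high, max(self.low, t))
```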

Single NN vs. multiple diverse NNs. A second cause of ANN fragility is that most consist of a single network comprising input (sensor-driven) neurons, inner (hidden) layers of neurons, and output neurons (indicators or effectors). Even if an ANN is deep in the sense that it has several inner layers of neurons, it is nonetheless a single ANN with a uniform structure and uniformly applied methods for adapting connections between neurons and for processing signal streams. For example, assume we have trained a "deep learning" ANN to recognize dog breeds depicted in photos. That ANN may be fragile; that is, it may be confused by shadows, partial occlusion, alteration of a few carefully chosen pixels, or other minor fluctuations in the input.

In contrast, real animal (including human) brains and perceptual systems are not a single NN; they consist of many interconnected NNs that differ widely in anatomy and physiology, that is, they may have different neural structures and use distinct learning processes affected by different combinations of modulating signals, for example, transient neurotransmitter release events triggered by prediction errors or uncertainty. These different neural networks were initially added to the nervous system by independent mutation-selection cycles during different evolutionary periods, and have been further refined in descendant lineages. They may operate in parallel or serially.

For example, anatomical and physiological studies indicate that primate visual systems are highly modular, with several distinct NNs processing the same visual inputs differently and combining their outputs to create the phenomenon of vision [9,10] and even blind-sight (the ability to utilize visual information with no subjective awareness of seeing). For example, the thalamus, the superior colliculus, and a dozen visual cortical areas all mediate perceptual decisions driven by visual inputs. Similarly, in humans and other mammals, the hippocampus, amygdala, and cortex—quite different brain structures—are all capable of mediating memory retrieval [2,3] and memory-guided decisions.

Animals' cognitive architecture reduces fragility by consisting of multiple very different NNs evaluating the same stimulus, with each coming to a decision, plus mechanisms for integrating the outputs from those various NNs to produce an overall decision. Thus, outputs from distinct NNs can cooperate, but also compete, to produce a final decision or behavior. The fragility of the combined identification is reduced via weighted voting by non-equivalent decision agents. As in the colloquial expression, "two brains are better than one," it is also true that one brain with all anatomically distinct information processing streams intact is much better than a brain reduced to one homogeneous NN. This relates back to the active perception theme: binocular vision reduces visual ambiguity, relative to monocular vision, and an animal reduced by injury to monocular vision can compensate by moving its angle of regard, then reaching a decision based on an aggregation of multiple, spatially disparate, images [8].
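
In engineering terms, this integration mechanism can be approximated by weighted voting over the outputs of dissimilar classifiers. The sketch below assumes each component emits a probability vector over the same label set; the weights are placeholders that could, for example, be tied to each component's past reliability.

```python
import numpy as np

def weighted_vote(component_probs, weights):
    """component_probs: list of (n_classes,) probability vectors; weights: list of floats."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                      # normalize the voting weights
    combined = sum(w * p for w, p in zip(weights, component_probs))
    return int(np.argmax(combined)), combined

# Usage: three dissimilar classifiers disagree; the weighted combination decides.
p_fast   = np.array([0.6, 0.3, 0.1])   # fast, coarse network
p_slow   = np.array([0.2, 0.7, 0.1])   # slower, more precise network
p_motion = np.array([0.3, 0.5, 0.2])   # network driven by a different input stream
label, combined = weighted_vote([p_fast, p_slow, p_motion], weights=[1.0, 2.0, 1.5])
```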

Some component NNs in a perceptual system produce results faster than others, which may render decisions by the slower component NNs moot. For example, if a large object comes flying at your head, some NNs in your brain will cause you to flinch, duck, or throw up your hands to block it, long before other NNs in your brain identify the object [3,5].


Consider the following real example of human perceptual fragility. In spring 2020, there were demonstrations across the U.S. against police killings of Black people. Most of the demonstrations were peaceful, but some were not. Many people posted photos and videos of the demonstrations on social media. One such photo (see Figure 2) shows people walking on a street. The person who posted the photo expressed horror that the man on the right was carrying an assault rifle, and asked why police—who were present—did not arrest him, especially since there had been shootings at other recent demonstrations. Several commenters—some of whom were at that demonstration—pointed out it was a rainy day (note the open umbrellas in the background) and suggested that what the man was carrying was just a closed umbrella. The original poster responded: "Yes, of course! Duh!" and wondered why she had perceived a gun. She initially saw a gun because recent news had trained one of her NNs to expect guns at these demonstrations. Other people were less certain and withheld decision until affected by slower NNs that took the context—the weather and the fact that other protesters and the police were not reacting as one would expect to the presence of someone carrying an assault rifle—into account, suppressing the alternative "take" that the black elongated shape was a rifle.

Figure 2. Image of demonstration posted on social media (anonymized).

In a decision system comprising multiple component NNs, the relationship between the components—how they interact to produce an overall decision—is very important, and can be flexible. For example, the relative decision speeds of diverse component NNs influence how they interact: faster NNs would normally "win." Similarly, if the overall system weights the outputs of its component NNs differently, that would also influence the outcome. But humans and other animals can learn to exercise "executive control" in which they context-dependently inhibit the faster processes if they repeatedly produce faulty results in a context. This ability is highlighted in folk wisdom: "Fool me once, shame on you; fool me twice, shame on me."
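
A toy sketch of this kind of executive control, under our own assumptions: a hypothetical arbiter normally lets the fastest component answer, but learns to inhibit any component that has repeatedly erred in a given context, deferring to slower components there.

```python
class Arbiter:
    def __init__(self, error_limit=2):
        self.errors = {}                  # (component_name, context) -> error count
        self.error_limit = error_limit

    def inhibited(self, name, context):
        return self.errors.get((name, context), 0) >= self.error_limit

    def decide(self, answers, context):
        """answers: list of (component_name, latency, label); the fastest wins unless inhibited."""
        for name, _, label in sorted(answers, key=lambda a: a[1]):
            if not self.inhibited(name, context):
                return name, label
        return None, None                 # every component inhibited in this context: withhold action

    def report_error(self, name, context):
        key = (name, context)
        self.errors[key] = self.errors.get(key, 0) + 1
```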

Perceptual classification vs. proof in practice. The point just made leads to a much broader conclusion. An underappreciated source of fragility is that in typical ANNs, the loss function pertains only to whether the inputs are being used to make the best assignment of the current stimulus to classes/labels. In the real brain, there is an additional step involving reinforcement learning (RL): the final arbiter is whether a classification can effectively guide behavior that leads to reward [4,6]. Earlier, we emphasized classification uncertainty, because the best match may not be a good enough match. But there are really two senses of good match. The sense already highlighted is purely cognitive: Is the current exemplar similar enough to the template it best matches to warrant a high expectation that it is a true member of the category?

A second sense is purely practical: Does the behavior triggered by making this category assignment actually pay off? To exemplify, imagine a shopkeeper presented with a $50 bill from customer X. He cannot see any way that it departs from his mental template of a genuine $50 bill: His perceptual assessment yields high confidence. But after he deposits it at his bank, he is notified that it was rejected as counterfeit. Via reinforcement learning, repeated episodes of this type will cause the shopkeeper to stop trusting his visual assessment of $50 bills presented to him by customer X. In this way, RL teaches an actor that the best its cognitive system can do, in a particular context, is not good enough to warrant a given behavior.
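
The shopkeeper example can be caricatured as a reinforcement-learned gate between classification and action. The sketch below is illustrative only; it uses a plain running-average value estimate per (label, context) pair and withholds the behavior once the expected payoff of acting turns negative.

```python
class BehaviorGate:
    def __init__(self, learning_rate=0.2, initial_value=1.0):
        self.values = {}                 # (label, context) -> estimated payoff of acting on this label
        self.lr = learning_rate
        self.init = initial_value        # optimistic start: trust perception until taught otherwise

    def should_act(self, label, context):
        return self.values.get((label, context), self.init) > 0.0

    def learn(self, label, context, reward):
        key = (label, context)
        v = self.values.get(key, self.init)
        self.values[key] = v + self.lr * (reward - v)

# Usage: the shopkeeper "acts" on the genuine-$50 classification for customer X,
# then learns from each bank rejection (reward = -1) until the gate closes.
gate = BehaviorGate()
for _ in range(10):
    if gate.should_act(label="genuine_50", context="customer_X"):
        gate.learn("genuine_50", "customer_X", reward=-1.0)
# After a few counterfeit episodes, should_act(...) returns False for this context.
```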


This point dovetails with the need for multiple ANNs that make independent contributions. In the human visual system and hippocampal memory system, separate streams exist to represent spatial variables/contexts versus the cues/objects that occupy spatially segregated contexts [7]. Because the behavioral implications of cues and objects are so context-dependent, such separate streams are vital for implementing the RL strategy just noted.

The foregoing analysis suggests further practical ways to avoid fragility in ANNs:

  • Combine several ANNs with very different structures, operating characteristics, speeds, and internal connection-weights.
  • One plausible group structure is a cascade, in which inexpensive, fast systems make rough assessments of the data and slower, more precise systems make successively more refined assessments, until the whole system issues a response (see the sketch following this list).
  • Include an input ANN that feeds all the decision ANNs the same inputs.
  • Include an output executive that assigns varied weights to the outputs of the component ANNs and combines their results.
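
As a sketch of the cascade idea mentioned in the second bullet (an assumed design, not a prescribed architecture), each stage is a (classifier, threshold) pair ordered from fast and coarse to slow and precise; processing stops at the first stage whose confidence clears its threshold.

```python
import numpy as np

def cascade_decide(x, stages):
    """stages: list of (classify, threshold) pairs, ordered fast/coarse -> slow/precise."""
    label, confidence = None, 0.0
    for classify, threshold in stages:
        probs = classify(x)                         # probability vector over the label set
        label, confidence = int(np.argmax(probs)), float(np.max(probs))
        if confidence >= threshold:
            return label, confidence                # early exit at the first confident stage
    return label, confidence                        # last stage's answer, however uncertain
```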

This strategy can be seen as following the principle that group decisions made by voting are of higher quality than individual decisions, but only if the voters composing the group are diverse. The diversity is as important as the executive control strategy. Ideally, the distinct NNs should have uncorrelated failure modes. Mere ANN partial cloning, for example, training three ANNs of the same type from different initializing weight matrices, would increase "reliability through redundancy," but cannot greatly improve decision quality.


Conclusion

We have argued that fragility in ANNs can be reduced by three related means:

  • Wrap them in executive systems that monitor ANN confidence levels and, when confidence is low, trigger processes to actively seek more information.
  • Modularize ANN-based decision systems to consist of multiple ANNs with different structures and operating characteristics, with executive systems to combine the outputs of the component ANNs.
  • Recognize that the best classification of a perceptual subsystem may not be good enough to guide behavior in some contexts, and include an RL-trained ANN stage to provide a context-sensitive capacity to reversibly enable or disable each classification as a trigger for specific behaviors.

Some researchers are starting to use some of these methods to reduce fragility, but until these measures become widely adopted outside of academia, fragility in ANNs will continue to be a problem.


References

1. Daw, N.D. et al. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience 8 (2005), 1704–1711.

2. Eagleman, D. Incognito: The Secret Lives of the Brain. Vintage Books, New York (2012).

3. Eagleman, D. The Brain: The Story of You. Vintage Press, New York (2015).

4. John, Y.J. et al. Anatomy and computational modeling of networks underlying cognitive-emotional interactions. Frontiers in Human Neuroscience 7, 101 (2013).

5. Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York (2011).

6. Patrick, S. and Bullock, D. Graded striatal learning parameters enable switches between goal-directed and habitual modes by reassigning behavior control to the fastest-computed reward predictive representation. bioRxiv, (2019); https://doi.org/10.1101/619445.

7. Ritchey, M. et al. Dissociable medial temporal pathways for encoding emotional item and context information. Neuropsychologia 124 (2019), 66–78.

8. Rucci, M. et al. Integrating robotics and neuroscience: Brains for robots, bodies for brains. Advanced Robotics 21 (2007), 1115–1129.

9. SfN/BrainFacts Vision: Processing Information, Brain Facts, Society for Neuroscience. (2012); https://bit.ly/3BrbMM0

10. Van Essen, D.C. Information Processing in the Primate Visual System, Advances in the Modularity of Vision: Selections from a Symposium on Frontiers of Visual Science, Washington, D.C., National Academies Press, (1990); https://bit.ly/3pK1DHT


Authors

Jeff A. Johnson ([email protected]) is a retired assistant professor in the computer science department at the University of San Francisco, CA, USA.

Daniel H. Bullock ([email protected]) is Professor Emeritus in the Department of Psychological and Brain Sciences, Boston University, MA, USA.


Copyright held by authors.

 


 
