Attributed to Juvenal,a this title phrase translates to "Who will watch the watchers?" In the 21st century, we may well ask this question as we invest increasingly in machine learning methods, platforms, applications, and designs. Nowhere is this more evident than in the growing excitement about, and investment in, artificial intelligence (AI) for military applications. In some ways, this is an old story. Early computers were used to improve the calculation of ballistics settings and, with the invention of radar, automatic fire-control systems became important parts of offensive and defensive systems. That an international "AI race" is under way is incontrovertible. That is not the whole story, of course. Machine learning methods have yielded stunning scientific results, such as the computed folding of some 200 million proteins recently announced by DeepMind.b Natural language recognition, speech synthesis, conversational chatbots, "deep fakes," and myriad other applications are emerging from the deep learning tools that have already been developed.
Among the more intriguing developments is the notion of "generative adversarial networks," in which one neural network attempts to fool another into recognizing something as what it is not.c The process can be used to reinforce the correct operation of a discriminating (for example, recognition) neural network. Of course, the concept can also be turned against recognition: small modifications to vehicle markings or road signs can defeat otherwise successful recognition, acting as a kind of camouflage. These examples sometimes illustrate the difference between human and machine image recognition. Of course, there are also ample examples that show how human vision can be tricked and confused; M.C. Escher's drawings come immediately to mind.d
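For readers who want a concrete picture, the adversarial setup can be sketched in a few dozen lines of Python with PyTorch. This toy example is my own, with arbitrary network sizes and hyperparameters: a generator learns to produce samples a discriminator will accept as coming from a "true" distribution (here, a simple 1-D Gaussian).

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a candidate sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs a logit for "this sample is real."
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "true" data: Gaussian(mean 4.0, std 1.5)
    fake = G(torch.randn(64, 8))            # generated candidates

    # Discriminator learns to tell real (label 1) from fake (label 0).
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator call its fakes "real."
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should approximate the true distribution.
with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should approach 4.0 and 1.5

The same adversarial pressure that sharpens the discriminator here is what, turned outward, produces the fooling examples described above.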
What does all this have to do with Juvenal's phrase? The more I think about the potential brittleness of neural networks in recognition or decision making, the more I wonder how we will be able to tell when a neural network's choice or decision is incorrect. There are increasingly many examples of neural networks being used for facial recognition, loan-application evaluation, or parole decisions. In many of these cases, the statistics of the so-called training sets used to develop the weights of the "neurons" in the neural network are skewed by flawed human decisions reflected in the data. In effect, a biased training set biases the results when the network is applied to new cases.
In some cases, these flaws can result in serious harm. Mistakes by self-driving cars, attacks against friendly forces, incorrect diagnoses of cancer, and unfair decisions with negative consequences are all drawn from real-world examples that should give us pause. Of course, humans have also been known to make these same mistakes. The question this raises in my mind is whether it is possible to train independent neural networks that might warn of mistakes like these before they can cause harm. The generative adversarial network concept sounds as if it might have a role to play in such a line of reasoning.
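Purely as an illustration of that question, here is a minimal sketch, again in Python with PyTorch, of what such an independent "watcher" might look like: an auxiliary network trained on logged cases where a primary model's decisions could later be checked against the truth. Everything here is hypothetical of my own devising, including the placeholder data and the assumed 20% error rate.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Primary model: the decision-maker being watched (placeholder classifier).
primary = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
# Watcher: sees the input plus the primary model's output and predicts
# the probability that the primary decision is wrong.
watcher = nn.Sequential(nn.Linear(10 + 2, 32), nn.ReLU(), nn.Linear(32, 1))

# Hypothetical logged history: inputs, the primary model's outputs, and
# whether each decision later proved wrong (1 = error, 0 = correct).
x = torch.randn(256, 10)
with torch.no_grad():
    logits = primary(x)
was_error = (torch.rand(256, 1) < 0.2).float()   # placeholder labels

opt = torch.optim.Adam(watcher.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
for _ in range(200):
    pred = watcher(torch.cat([x, logits], dim=1))  # watcher sees input + output
    loss = bce(pred, was_error)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At decision time, the watcher flags outputs likely to be mistaken.
risk = torch.sigmoid(watcher(torch.cat([x[:1], logits[:1]], dim=1)))
print("estimated chance the primary decision is wrong:", risk.item())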
As readers of this column are well aware, I am often not an expert on the topics raised, and that is definitely the case here. Still, ignorance has never stopped me from speculating, so I wonder whether it is possible to establish "watcher networks" that ingest feedback, perhaps from human or artificial observers, to reinforce a neural network's ability to detect, or at least signal, that a wrong decision or choice may have been made. Colleagues at Google advise me there is a whole subfield of uncertainty estimation for neural networks,e so the model itself can give an estimate of how certain it is about an output. I am glad to hear there appears to be a chance that the potential weaknesses of powerful neural networks can be minimized by applying the same technology to improve their robustness. It would not be the first time the seeds of a solution were hiding in the technology that created the problem in the first place.
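I will not pretend to survey that subfield here, but one widely used technique, Monte Carlo dropout, is simple enough to sketch in Python with PyTorch. The idea is to leave dropout active at inference time and treat disagreement across repeated stochastic forward passes as a rough uncertainty signal; the model and input below are placeholders of my own, not any particular system's.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder two-class classifier with a dropout layer.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

def predict_with_uncertainty(x, passes=50):
    model.train()  # keep dropout sampling even at "inference" time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(passes)]
        )
    mean = probs.mean(dim=0)  # average predicted probabilities
    std = probs.std(dim=0)    # disagreement across passes = uncertainty
    return mean, std

x = torch.randn(1, 10)
mean, std = predict_with_uncertainty(x)
print("prediction:", mean, "uncertainty:", std)

A watcher, human or artificial, could treat a large spread across passes as a cue that the decision deserves a second look.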
a. The Roman poet Juvenal, Satires (Satire VI, lines 347–348).