
Communications of the ACM

Cerf's up

Quis Custodiet Ipsos Custodes?


Google Vice President and Chief Internet Evangelist Vinton G. Cerf

Attributed to Juvenal,a this title phrase translates to "Who will watch the watchers?" In the 21st century, we may well ask this question as we invest increasingly in machine learning methods, platforms, applications, and designs. Nowhere is this more evident than in the growing excitement and investment in artificial intelligence (AI) for military applications. In some ways, this is an old story. Early computers were used to improve the calculation of ballistics settings and, with the invention of radar, automatic fire-control systems became important parts of offensive and defensive systems. That an international "AI race" is under way is incontrovertible. That is not the whole story, of course. Machine learning methods have yielded stunning scientific results, such as the computed folding of some 200 million proteins recently announced by DeepMind.b Natural language recognition, speech synthesis, conversational chatbots, "deep fakes," and myriad other applications are emerging from the deep learning tools that have already been developed.

Among the more intriguing developments is the notion of "generative adversarial networks," in which one neural network attempts to fool another into recognizing something that it isn't.c The process can be used to reinforce the correct operation of a discriminating (for example, recognition) neural network. Of course, the concept can also be turned against recognition: small modifications to automobile markings or road signs can defeat an otherwise successful recognizer (a kind of camouflage). These examples sometimes illustrate the difference between human and machine image recognition. Of course, there are also ample examples that show how human vision can be tricked and confused; M.C. Escher drawings come immediately to mind.d
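To make the "small modifications defeat recognition" point concrete, here is a minimal sketch of my own, not anything described in this column: a toy linear "recognizer" in NumPy and a fast-gradient-sign-style nudge of at most 0.02 per feature that collapses a confident acceptance. The model, the input, and the numbers are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 784                                  # think of a 28x28 "image" flattened

# Toy recognizer: a fixed linear model followed by a sigmoid (purely
# illustrative, not any real vision system).
w = rng.normal(size=d)

def score(x):
    """Probability the recognizer assigns to 'this is the target object'."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Build an input the recognizer accepts with high confidence: random "pixels"
# plus just enough signal along w to push the logit to +3.
x = rng.normal(size=d)
x += (3.0 - w @ x) / (w @ w) * w
print(f"clean score:     {score(x):.4f}")       # about 0.95

# Fast-gradient-sign style perturbation: nudge every component by at most 0.02
# in the direction that lowers the score. For this linear model the gradient
# direction is simply sign(w).
eps = 0.02
x_adv = x - eps * np.sign(w)
print(f"perturbed score: {score(x_adv):.4f}")   # collapses toward 0
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

The per-feature change is tiny compared to the input values, yet in high dimension those small nudges add up and flip the decision, which is the brittleness the column is pointing at.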

What does all this have to do with Juvenal's phrase? The more I think about the potential brittleness of neural networks in recognition and decision making, the more I wonder how we will be able to tell when a neural network's choice or decision is incorrect. There is an increasing number of examples of neural networks being used for facial recognition, loan-application evaluation, or parole decisions. In many of these cases, the statistics of the so-called training sets used to develop the weights of the "neurons" in the network are skewed by flawed human decisions reflected in the data. In effect, the training set biases the results when the network is applied to new cases.

In some cases, these flaws might result in serious harm. Mistakes by self-driving cars, attacks against friendly forces, incorrect diagnoses of cancer, and unfair decisions with lasting negative consequences are all drawn from real-world examples that should give us pause. Of course, humans have also been known to make these same mistakes. The question this raises in my mind is whether it is possible to train independent neural networks that might warn of such mistakes before they can cause harm. The generative adversarial network concept sounds as if it might have a role to play in such a line of reasoning.

As readers of this column are well aware, I am often not an expert on the topics raised, and this is definitely the case here. Still, ignorance has never stopped me from speculating, so now I wonder whether there is any possibility of establishing "watcher networks" that ingest feedback, perhaps from human or artificial observers, and reinforce a neural network's ability to detect, or at least signal, that a wrong decision or choice may have been made. Colleagues at Google advise me there is a whole subfield of uncertainty estimation for neural networks,e so the model itself can give an estimate of how certain it is about an output. I am glad to hear there appears to be a chance that the potential weaknesses of powerful neural networks can be mitigated by applying the technology itself to improve their robustness. It would not be the first time the seeds of a solution were hiding in the technology that created the problem in the first place.
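To give a flavor of what such uncertainty estimation can look like, here is a minimal sketch, again my own toy rather than any method from the cited survey: a small bootstrap ensemble whose members agree near the training data and visibly disagree far from it, so their spread could serve as the kind of warning signal a "watcher" might use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy training data concentrated on x in [0, 2].
x_train = rng.uniform(0.0, 2.0, size=40)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=40)

def fit_poly(x, y, degree=3):
    """Least-squares polynomial fit; returns coefficients."""
    return np.polyfit(x, y, degree)

# Ensemble: each member is fit on a different bootstrap resample.
members = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    members.append(fit_poly(x_train[idx], y_train[idx]))

def predict_with_uncertainty(x):
    """Return the ensemble's mean prediction and its spread (disagreement)."""
    preds = np.array([np.polyval(c, x) for c in members])
    return preds.mean(), preds.std()

for x in (1.0, 4.0):   # 1.0 lies inside the training range, 4.0 far outside
    mean, spread = predict_with_uncertainty(x)
    print(f"x = {x}: prediction {mean:+.2f}, ensemble spread {spread:.2f}")
```

The spread is small where the training data is dense and grows sharply where the model is extrapolating, which is exactly the sort of self-reported doubt one would want surfaced before a consequential decision is acted on.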


Author

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president from 2012 to 2014.


Footnotes

a. The Roman poet Juvenal, Satires (Satire VI, lines 347–348)

b. https://bit.ly/3QHtUaB

c. https://bit.ly/3ColujT

d. https://en.wikipedia.org/wiki/Relativity_(M._C._Escher)

e. https://arxiv.org/pdf/2011.06225.pdf


Copyright held by author.