Deep neural networks (DNNs) are rapidly becoming an indispensable part of the computing toolbox, with particular success in translating the messy analog world into forms we can process with more conventional computing techniques (image and speech recognition being two of the most obvious examples).
The price we pay, however, is inscrutability: DNNs behave like black boxes, with no clearly explainable logic behind their decisions. Granting for the moment that most complex software systems are also effectively impossible to reason about in full, for those systems we have, and continue to develop, methods for formally reasoning about and extensively testing critical components. Almost nothing equivalent exists for DNNs. This is particularly worrying precisely because DNNs let us extend computing into domains that were previously inaccessible. In at least one area of medical diagnostics, identifying diabetic retinopathy, DNN-based approaches already match expert human performance, yet we have little experience to help us understand what kinds of bugs such systems may fall prey to when deployed in the real world.