The ongoing revolution in artificial intelligence (AI)—in image recognition, natural language processing and translation, and much more—has been driven by neural networks, specifically many-layer versions known as deep learning. These systems have well-known weaknesses, but their capability continues to grow, even as they demand ever more data and energy. At the same time, other critical applications need much more than just powerful pattern recognition, and deep learning does not provide the sorts of performance guarantees that are customary in computer science.
To address these issues, some researchers favor combining neural networks with older tools for artificial intelligence. In particular, neurosymbolic AI incorporates the long-studied symbolic representation of objects and their relationships. A combination could be assembled in many different ways, but so far, no single vision is dominant.
Nice article. Below are my thoughts on what it would take to get to robust intelligence.
Likening neural networks to System 1 thinking isn't valid; System 1 is not just about the 'fast' aspect. We arrive at System 1 by starting with System 2 and then internalizing the explicit steps until repetition lets us 'skip' over them (e.g., playing a musical instrument, commuting to work, ordering off DoorDash, and a thousand other things we do by rote). What lets us do this is the physical experience those repetitions provide. NNs don't have this: rapidly labeling something isn't the same as taking cognitive shortcuts.
Comprehension of the world (including scientific understanding) is unlikely to result from pure data, or even from combining data with symbolic reasoning, *if* the symbols exist only because *we* set them up (e.g., via knowledge graphs, common-sense rules, etc.). That's because the system would still lack genuine understanding: knowledge graphs would certainly extend mere labeling, but the combination still has limits. For example, the frame problem remains for any ANN+KG combination, and nothing in the combination itself lets the system transcend it.
For a system to become genuinely intelligent, it would need to negotiate the environment directly, and to be able to represent it directly as well.