
Communications of the ACM

ACM News

'Let a Thousand AIs Bloom'


Data science and philosophy professor David Danks.

"Computer science students don’t need to become ethicists, and philosophy students don’t need being able to write code, but we need to teach them how to collaborate and understand each other."

Credit: DavidDanks.org

The field of artificial intelligence (AI) has been dominated by the deep learning approach in recent years, and there is some concern that this focus may be limiting progress in the field. David Danks, a professor of data science and philosophy at the University of California, San Diego, advocates for more diversity in AI research or, as he puts it, "let a thousand AIs bloom."

Bennie Mols interviewed Danks at the 2023 AAAS Annual Meeting in Washington, D.C.
 

What has led you to the conclusion that there is too little diversity in the AI field?

We have seen enormous advances in the ability of AI, and in particular deep learning, to predict, classify, and generate what we might think of as the surface features of the world. These successes are marked by two fundamental features that don't always hold: having a measurement of what matters, and being able to define what counts as success. Deep learning can do amazing things, but what worries me is that it crowds everything else out.

Such as…

We have struggled to come up with AI systems that can discover the underlying structure of the world, things that show up in the data but are not defined by them. So one reason that we are struggling with developing more trustworthy and value-centered AI is because trust and values fundamentally are not things that we know how to give numerical expressions for.

Can you give an example?

It is difficult to figure out what counts as success for a self-driving car. Sure, we want to drive safely, but what counts as driving safely is very context-dependent. It depends on social norms, it depends on the weather, it depends on suddenly occurring situations on the road. As soon as there is an unusual context, self-driving cars can't reason their way out like a human driver can.

What is your proposal for increasing the diversity of AI research?

First, people need to realize that there are problems we are not considering because of the focus on deep learning. Deep learning is not good at symbolic reasoning, not good at planning, not good at reconciling conflicts between multiple agents that have different values. We need to let a thousand AIs bloom because we need different frameworks to tackle those neglected problems.

Second, funding agencies in particular should be supporting the work that companies don't want to support. Right now, most companies are putting most effort into deep learning.

Third, I also think that there is an enormous opportunity for entrepreneurs to identify problems that deep learning is not going to solve and come up with new methods and new systems. If I were an entrepreneur, I would stay far away from deep learning, because I am not going to compete with the big tech companies.

How do you, as a philosopher, look at the recent hype around ChatGPT and similar large language models?

For me, the most interesting aspect is that ChatGPT is calling into question how deep a lot of our human conversation actually is. So much of human language seems to be highly predictable or ritualized; anybody who has ever taught classes for some years knows this. There are times when you just go in the classroom and you start talking on autopilot. I came to realize that my own speech is not nearly as profound as I might have thought it was.

Does ChatGPT have consequences for your way of teaching?

I put all the assignments for my classes through ChatGPT to see how it performs, and it did very badly. ChatGPT is particularly good at giving a Wikipedia-level summary of a topic, but it is bad at reasoning, drawing inferences, reaching logical conclusions, and constructing good arguments. ChatGPT might actually push teachers to make better assignments by avoiding assignments that it can answer well.

What do you want your students to learn from philosophy about AI?

Computer science tends to focus on the part of the process that goes from numbers in the form of data to a model, but computer science is not going to tell you what problems to tackle, whether a particular method of collecting data invades people's privacy, or which measure of fairness to use. I want my students to realize that their AI classes focus on a very small, although obviously critical, part of the larger pipeline for solving a particular problem. Computer science students don't need to become ethicists, and philosophy students don't need to be able to write code, but we need to teach them how to collaborate and understand each other.

I read that you like philosophical puzzles. Can you offer a puzzle that is particularly interesting from the point of view of AI?

On the one hand, we want our technology to compensate for human biases and other weaknesses. For example, we want the autopilot on an airplane to compensate for the cognitive limitations of the human pilot. But other times we don't want that at all. If you use a smartwatch to track the number of steps you take, you don't want your watch to lie about the number of steps because it thinks you need to lose weight. Or take the example of a dataset showing that people of color are under-diagnosed for a particular disease. Should an AI developer then create an algorithm that rates people of color as being at higher risk, on the assumption that human bias will offset the prediction back to where it should be? There are many such examples in AI applications.

So the philosophical puzzle is: Where is the boundary in the middle? That seems like an incredibly difficult puzzle.

 

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.


 
