
Communications of the ACM

ACM News

Only Humans Can Be Accountable For AI


Joanna Bryson

After over a decade in the computer science department of the U.K.'s University of Bath, Joanna J. Bryson will move to the Hertie School of Governance in Berlin, Germany next year as a professor of ethics and technology.

Credit: University of Bath

"I am totally astounded when I meet computer scientists who say, 'if we just add one more layer of intelligence to our machine, then it will become sentient. That's such a misunderstanding of what human intelligence is like."

Eloquent and outspoken, U.S.-born Joanna J. Bryson is moving to the Hertie School of Governance in Berlin, Germany in 2020 as its new professor of ethics and technology, after more than a decade in the computer science department of the U.K.'s University of Bath. She studies the phenomenon of intelligence from the perspectives of computer science, psychology, and biology. Her research ranges from artificial intelligence (AI) and autonomy to robot ethics and human cooperation.

As a child, Bryson recalls, "I loved dinosaurs and dreamed about becoming a paleontologist. I read all the books I could find on animal behavior. Later, in my teenage years, I was inspired by primatologist Jane Goodall."

Bryson went on to study behavioral sciences and psychology, earning a bachelor of arts degree in behavioral science from the University of Chicago. After she discovered her talent as a programmer, she changed the focus of her studies to computer science, and earned a master of science degree in AI from the U.K.'s University of Edinburgh. In 2001, she earned a doctorate in computer science, focusing on AI, from the Massachusetts Institute of Technology.

Today, she says, "I consider myself a natural scientist who uses computer science as a tool to understand intelligence."

Bryson was asked to give the keynote lecture at the celebration of the 25th anniversary of the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University in the Netherlands. The day before she gave the lecture, entitled "The Role of Humans in an Age of Intelligent Machines," Bennie Mols spoke with Bryson about her fascination with the intelligence of humans and machines.

What can AI researchers learn from the study of natural intelligence?

First of all, being human is more than being intelligent. We are apes, a deeply social species. We consider social isolation a form of torture. In evolution, cooperation is just as important as competition. Even guppies that are left alone can die of fright. Evolution protects us by making us look after each other. There is no reason to build that into AI.

I was fascinated at a young age by the fact that different parts of the brain have different architectures. Why has evolution, in four billion years, failed to develop a single architecture for our brain? The answer is that there is so much to learn that different parts of the brain have specialized in different tasks. Through evolution, each part is trained in its own way. So the brain ended up with a modular design.

We are not going to build something out of silicon that has the same experiences as an organism. It's not going to have the same needs. It's not going to have the same phylogeny. A single brain cell already has on the order of 10,000 connections. We are not going to scan a brain with 100 billion brain cells and reproduce it in silicon. It's computationally intractable, it's infeasible, it's ridiculous.
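A rough back-of-the-envelope calculation, using only the round figures Bryson cites above, illustrates the scale she is pointing at:

\[
\underbrace{10^{11}}_{\text{neurons}} \times \underbrace{10^{4}}_{\text{connections each}} = 10^{15} \ \text{synaptic connections}
\]

Even at a single byte of state per connection (an illustrative assumption, not a figure from the interview), merely recording that wiring would take on the order of a petabyte, before any of its dynamics are simulated.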

Looking at it the other way around, what can researchers of natural intelligence learn from AI?

What I just told you about the brain has to do with the problem of combinatorics: there are so many possible things to know, and so many different ways to reason, that you would get a combinatorial explosion if brain parts did not specialize. Furthermore, it's not just that neuroscience has inspired machine learning; it also works the other way around. Machine learning tools are now successfully being used to understand the brain better.

Does AI have fundamental limits?

To calculate something, a computer needs time, space, and energy. Computing is something physical; that is often forgotten. The game of chess already has more possible games than there are atoms in the observable universe. Well, biology offers many more combinations. No AI is going to offer a solution for all problems. There is no free lunch.
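To make the comparison concrete (these are standard round figures, not numbers from the interview): Shannon's classic estimate puts the chess game tree at about \(10^{120}\) possible games, while the observable universe holds roughly \(10^{80}\) atoms:

\[
\text{chess games} \approx 10^{120} \;\gg\; 10^{80} \approx \text{atoms in the observable universe}
\]

A machine examining \(10^{9}\) positions per second for the entire age of the universe (about \(4 \times 10^{17}\) seconds) would cover only about \(4 \times 10^{26}\) positions, a vanishing fraction of the tree.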

What are our obligations to intelligent machines?

The best metaphor for thinking about AI is to see it as an extension of our own intelligence. If you assume that, then our obligations to machines are obligations to humans. Therefore, only humans can be accountable for AI. That implies that AI should not receive legal rights.

I wrote a paper titled "Robots Should Be Slaves." The point was that robots are servants that we buy and sell, and the word for servants that we buy and sell is "slaves." We all agree that humans can't be bought and sold. The fact that robots are bought and sold also means we shouldn't want them to be human. That's a fundamental argument.

Earlier this year, you were appointed a member of the Google Ethics Board. The board was shut down rather quickly. What went wrong?

We haven't heard from Google the reason for dismantling the board. Many people think it was because protesters considered one of the board members to be on the wrong side of the political spectrum. I don't agree with that, because you need diversity on a panel. However, following the discussions, it became clear that a group of protesters simply wanted the board destroyed. They wanted to believe that Google is evil.

How can you make progress in the future with the ethics boards of big tech companies?

Personally, I wanted to communicate with Google about ethical issues in the company. Even if I couldn't directly influence them, I could help governments to communicate better with them.

For me, it is about finding a better way to work together. If we are just making laws to limit big tech, we are not seeing the whole problem.

Big tech is the actor best aligned with liberal democracies in trying to keep people healthy and happy. We should be reaching out to them. They are our natural allies, so dealing with them requires a kind of diplomacy.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.


 
