
Communications of the ACM

BLOG@CACM

Four Conversations About Human-Centric AI


Jeremy Roschelle

Reflecting on many conversations over the past year, I've found there are four types of conversations about human-centered Artificial Intelligence (AI). My own work focuses on the need for policies regarding AI in education, and thus I've been involved in conversations about how teachers, students, and other educational participants should be in the loop when AI is designed, evaluated, implemented, and improved. I've been in many conversations about how surveillance and bias could harm teachers or students. And I've seen wonderful things emerging that could really help teachers and students. Using education as an example, I reflect here on what we talk about when we talk about human-centered AI.

 The four types of conversations are:

  1. Opportunities and Risks. The default conversation seems to be about the great things we might be able to do in the near future, if only we mitigate the risks. For example, teachers' jobs have become too difficult: administrative details get in the way of the more rewarding work of interacting with students. An opportunity with AI is to provide teachers with assistants that make their jobs easier and allow them to spend their energy working directly with students. Yet with AI assistants come risks of teacher surveillance and of algorithmic bias, and there are many other risks, too. The overall conversation is about the positive future we could have, if only we can minimize the risks.
  2. Trust and Trustworthiness. A conversation about trust has a different flavor than a conversation about opportunities and risks. I find conversations get real when we ask how much we should trust an AI system to automate decisions. Educators are entrusted with the futures of today's students. If we delegate a decision to technology, are we adequately guarding students' futures? Likewise, in times when saying the wrong word in a classroom can result in lawsuits against a teacher, how can we be sure that an AI assistant is not putting a teacher's job at risk? Should we trust our AI systems to be free of bias? In education, public discussions of systems engineering are in their infancy; indeed, the field of learning engineering has emerged only over the past few years. More emphasis is needed on how systems should be engineered to safeguard what we hold dear, and to earn our trust.
  3. Metaphors and Mechanisms. I've also participated in conversations that take a more critical turn by questioning the metaphors used to describe the future use of AI in a societal context. Critics may question whether describing AI "reasoning" as "human-like" obscures important ways in which AI systems may make mistakes that humans rarely make, often involving context. Metaphors may obscure the ways in which an AI system can be surprisingly brittle when conditions change. Metaphors may hide how AI systems are better at reaching goals than we are at specifying and monitoring the right goals. A companion to debunking metaphors is digging for clear explanations of how AI actually works. For example, I've found that experts can advance public discourse by explaining AI mechanisms in non-magical, jargon-free terms, so that people can better evaluate how and why AI may make good inferences in some situations and poor ones in others. Overall, this is a conversation that helps participants shift from perceived magic to gritty reality.
  4. Policies and Protections. My experience is that it's difficult to get specific about the policies or protections we might need for safe AI in education. I think that's because the people who are good at talking about educational policy don't spend much time thinking about technology, and the people who are good at educational technology don't spend much time thinking about policy. Of course, some existing regulations in education relate to technology, mostly regarding data security and privacy. We can start by building on those. Yet we need to go beyond data security to address issues like bias and surveillance. I encounter people who really care about the future of AI in teaching and learning and who would like policies and protections to guide a safe future, but they are not exactly sure how to contribute to a policy conversation. I'm like this, too; I tend to find myself saying, "I'm not much of a policy expert." In a safe future for AI in education, we all need to be policy experts.

When we stay within only one or two of the four conversations, we limit progress toward human-centric AI. For example, the opportunities-and-risks conversation tends to be hopeful and abstract; it can appear that by naming risks, we are already making progress toward mitigating them. The future may be described in attractive terms, but it is always far off, and that makes the risks feel safer. A complementary conversation about metaphors and mechanisms can defuse the sense of magic and help the conversants see the devil in the details.

Likewise, building trust and engineering trustworthiness are absolutely key conversations we need to have for any field of human-centric AI. Then again, the scale and power possible through AI do not always bring out the best in people, and even when people act their best, unintended consequences arise. We have to maintain skepticism; we need to distrust AI, too. Unless we talk about policies and protections, we are not engaging our rights as humans to self-govern our future reality. Not everything that becomes available will be trustworthy, and we have to create governance mechanisms to keep harm at bay.

Thus, I believe it is important to notice the four kinds of conversations and use them to achieve well-rounded discussions about human-centric AI. I'd welcome your thoughts on the kinds of conversations you observe when people talk about human-centered AI, on how typical conversations limit what we talk about, and on what we can do to engage a broad diversity of people in the conversations we need to have.

 

Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences. 


 

