Ayanna Howard, roboticist, ACM Athena Lecturer, and dean of The Ohio State University College of Engineering, is optimistic about the ability of robots to help people. She understands the challenges that must be addressed for that to happen, and has worked throughout her career not just to advance the technical state of the art, but also to quantify and overcome issues including trust and bias in artificial intelligence (AI). Here, she talks about self-driving cars, accessible coding, and how to incorporate different perspectives into hardware and software design.
The pandemic heightened public interest in robots—suddenly, we all want robot cleaners and robot grocery deliverers and so on. How is that impacting the robotics community?
I see two things. First, the robotics industry is getting robots out to people much more quickly than we had anticipated. The pandemic accelerated the use of robots, which lowered costs, and as a result you now see the growth of a real market in community-facing robotics.
The second thing has to do with the robotics research community. There are still a lot of unsolved problems, like mobile manipulation, that we really need to address to meet this new demand, so I anticipate a lot more funding and focus on those problems.
The other area, which I am actually more excited about, is social interaction. That has also accelerated, in the sense that we can now see that robots do have the ability to interact in a social way and are not necessarily replacing people.
In settings like factories, where robots probably will replace people, you have said you are optimistic about our ability to retrain workers.
I agree with the concept of the human dignity of work, but not all work is dignified. I think that there is a disconnect, because the individuals who say all work is good work are not the ones who have those jobs.
"I agree with the concept of the human dignity of work, but not all work is dignified. I think there's a disconnect, because the individuals who say all work is good work are not the ones who have lost those jobs."
But companies are also starting to think a little more about their social responsibility and saying, "If we are going to be putting these individuals out of work, maybe we should also invest in retraining them for the other kinds of jobs that are going to come about."
Let's talk about your research into overtrust. Can you summarize the problem and share some of your recent findings about using explainable AI methods to counteract it?
Prior research from my group and others has shown that when you're using robots and AI agents, and they are dependable, you start believing they are always dependable, and after a while you won't even second-guess yourself. One thing we're looking at is explaining to individuals when the system itself is uncertain, in a way that makes people reflect more carefully on their trust in the decisions these agents are handing them.
In other words, more of a contextual prompt, rather than a broad disclaimer.
Right. We've started to examine this approach primarily in the self-driving domain. We have all this data about dangerous intersections, such as two highways that merge, where the accident rate is higher. You can take that information and decode it. Let's say you're in your car, in autonomous mode. It is 3 P.M., and you're about to enter a hazardous intersection. Now, 3 P.M. is when the kids are out. So maybe the car says, "Hey, school kids are on the road, and a child died here last week."
What we're finding is that people make better decisions when risk is quantified in terms that are more personal. They pay closer attention to the road, for example, or they do a full override of the autonomous system to get through that intersection.
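To make the mechanism concrete, here is a minimal sketch of how a contextual risk prompt like the one Howard describes might be assembled from intersection data. Everything in it (the IntersectionRisk record, its field names, the rates, and the time window) is a hypothetical illustration, not the actual system her group built.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record for one known-hazardous intersection. The field
# names, figures, and thresholds are illustrative assumptions only.
@dataclass
class IntersectionRisk:
    name: str
    accident_rate: float              # e.g., accidents per 10,000 crossings
    school_hours: range               # hours when children are likely nearby
    recent_incident: Optional[str] = None

def contextual_prompt(risk: IntersectionRisk, now: datetime) -> Optional[str]:
    """Return a personalized warning only when the context warrants one."""
    reasons = []
    if now.hour in risk.school_hours:
        reasons.append("school kids are on the road")
    if risk.recent_incident:
        reasons.append(risk.recent_incident)
    if not reasons:
        return None  # no contextual trigger: stay quiet to avoid alarm fatigue
    return f"Approaching {risk.name}: " + " and ".join(reasons) + "."

# Usage: a car in autonomous mode nears a risky merge at 3 P.M.
merge = IntersectionRisk(
    name="the highway merge ahead",
    accident_rate=4.2,                # assumed figure
    school_hours=range(14, 17),       # 2 P.M. to 5 P.M., assumed window
    recent_incident="a child died here last week",
)
print(contextual_prompt(merge, datetime(2022, 9, 1, 15, 0)))
# -> Approaching the highway merge ahead: school kids are on the road
#    and a child died here last week.
```

The design point is the early return: the system says nothing when the context gives it nothing personal to say, which is what distinguishes a contextual trigger from a broad disclaimer.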
It sounds like a more context-aware version of those radar speed signs, which prompt drivers to slow down by showing them how fast they're driving.
Right. It's a trigger, but it also gives people autonomy to make the decision themselves, and that's key.
You're still on the board of directors of Zyrobotics, an organization you co-founded in 2013 to create educational technologies for children. How has that work evolved during the pandemic, when many schools were closed?
During the pandemic, Zyrobotics had to pivot to focus more on the software side and on maintaining the therapy and STEM (science, technology, engineering, and math) education apps they'd already created, because the hardware supply chains froze up. Now that things are starting to open up, they're starting to interact with school districts again, which also slowed down when everyone went remote.
Zyrobotics works really hard to make technologies that are accessible to different learners. How do you make them financially accessible?
One of the projects I'm still involved with is in the area of accessible coding, looking at the intersectionality of disability and socioeconomics. If you're a parent from a middle-class neighborhood and your child has a disability, you pool your resources and provide the scaffolds your child needs. It's different in low-SES (socioeconomic status) communities, and unfortunately, the lack of resources also intersects with ethnicity and race.
"What we're finding is that people make better decisions when risk is quantified in terms that are more personal."
This project did two things. First, we developed an open-source robotics platform based on Arduino and rapid prototyping machines. It's modeled on a philosophy similar to that of the Helping Hand Project (www.helpinghandproject.org), which creates open-source software and designs that college engineering students, and even high school students, can use to build hands for children and adults who have lost them. It can be very low-cost; a hand might cost maybe a hundred dollars to make.
So you provide all of the software, plans, and designs to enable things to be built?
Yes, but we are also developing a software equivalent. If you can't get a local college to build the robotic hardware for a K–12 school, you can download the software equivalent: a virtual world where you're learning the coding, and it's accessible for children with visual or hearing impairments or with motor disabilities.
You've made the point that designing educational tools for kids with special needs is actually a good way of designing educational tools for all kids. Can you elaborate on that?
A lot of times, even at the college level, instructors teach things based on the way that they learn. So, if I like to write and I learned by reading a lot, then I'm going to give my students a lot of reading assignments. But students have very different ways of consuming and processing information. People know that about children with special needs, but I don't think they realize that children have these nuances across the board. So when you design for what I call the extremes, you also incorporate the different learning styles of children who don't necessarily fit into the box that the teacher is teaching from.
How does that philosophy work when you're designing outside of an educational context?
I encourage people to think about who their opposite is in terms of attributes, and design for that person. If I'm a technologist living in a high-SES neighborhood, then I need to think about designing solutions for someone who is in, say, rural America. Then what happens is, even though that's not my lived experience, it makes me sit back and start rethinking my design choices. It doesn't get you to the other extreme, but it does break the habit of designing based on what you know, and it makes you explore other things you might otherwise not have.
It's difficult to argue against the idea of designing for a diverse audience and incorporating different perspectives. But it's also difficult to put into practice.
The practice part is still difficult, but I'm seeing much more concern about it at the upper management levels in industry and academia. That means it's propagating downward, whereas before it was coming from the grassroots. When it comes from the top, people tend to say, "Let's figure this out, and here are the incentives to institute change." So, I'm starting to see movement. A lot of people in the community are frustrated that the movement isn't fast enough, but I've been in this field for a long time, and if I look at the delta of movement now versus the delta of movement 20 years ago, it's exponential.