Earlier this month, the White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights. I’ve been thinking about this milestone document in the context of educational systems, which will increasingly use AI to notice patterns in teaching and learning processes and to automate educational activities and decisions.
The Blueprint advances five principles:

- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
I’m a learning scientist: a researcher with a background in both computer science and the social sciences, who works closely with AI innovators in education. I’ve led design research, working in teams to investigate the promise of new technologies for education. I’ve also conducted evaluation research, investigating "which technologies improve learning, for whom, and under what conditions?"
From my standpoint, the Blueprint is timely. Investment in research and development that explores the use of AI in large social systems like education and healthcare is expanding rapidly. Further, educational technology is now very widely used at all levels of education. Decisions and policies implemented in educational technology systems can have major impacts on individuals’ opportunities to learn and their pathways into college and careers. Now is a critical time to think about how AI in education can increase the equity of educational systems, and how we can avoid making present disparities worse.
When I talk with education-oriented researchers and with teachers, I hear excitement about the improvements AI could help us make toward more equitable teaching and learning processes. And yet, I also hear that we have a lot of work to do regarding trust and trustworthiness. I see the principles in the Blueprint as a good foundation for working toward greater trust.
In addition, I worry about pace and power sharing. With regard to pace, the Blueprint includes evaluation language like this: "Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible." I’ve done efficacy research in education. High-quality studies take three to four years to complete. How will evaluation of AI deployments in social systems keep up with the pace of AI innovation in those systems? More generally, how will research on the use of AI in education keep pace with the growth of AI in education?
I’m heartened that educational purchasers and decision-makers now routinely call for evidence when making major product decisions. Consequently, educational technology producers realize they have to support efforts to build the base of evidence. Principles like those in the Blueprint can guide how educational purchasers and developers work together towards trustworthy AI for education. Yet I also find that the conversations about AI in education are complex; there is a huge need to build capacity to make sense of AI in education throughout our ecosystem.
With regard to power sharing, data science and machine learning alone will not solve tricky educational problems. We'll need to combine AI perspectives with principles from the learning sciences and with the wisdom of practitioners. We’ll have to engage those who will be most affected, such as diverse teachers and students, in the design of the AI-based systems they’ll use. We'll need major advances in making AI more transparent and more explainable to educational participants, including teachers and students. The best educational technology companies already listen to educational leaders and incorporate educators throughout their design processes. We'll need more integration of educators and learning scientists alongside AI and ML innovators, from product conception through implementation.
I look forward to more discussion with others who investigate AI in social services sectors about how we can use the principles in the Blueprint for an AI Bill of Rights as a starting point, acknowledging that we will need to pay attention to the specifics of each social service sector to get it right.
Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences.