
Communications of the ACM

BLOG@CACM

Will Machine Learning Prevent Us From Achieving the Goals of Teaching Computing to Everyone?


Mark Guzdial, Georgia Institute of Technology

Last week, I was thrilled to be one of the SIGCSE-track keynote speakers for the ACM 50th Celebration of the Turing Award in China (TURC) held in Shanghai, along with SIGCSE Chair Amber Settle and Berkeley teacher, inventor/developer, and whirlwind of energy, Dan Garcia. (See my blog post about TURC here, and the program for the event here.)

One of my favorite sessions was a panel on the future of AI. Is it about data and machine learning, or about analyzing human intelligence? Is it about Turing machines, or about quantum computers?

Big Data or Brain Powered Artificial Intelligence: Turing or Quantum? (Fuyue Hall)

Panel Moderators: Andrew Yao, Xiangyang Li

Panel Members: Andrew Yao, Alex Wolf, Vinton Cerf, Wen Gao, Kai-Fu Lee, Xiangyang Li, Bobby Schnabel

Vint Cerf's and Andrew Yao's comments particularly resonated with me. They both argued that the future of AI applications will be driven by data and machine learning techniques, not by the study of human intelligence. Yao pointed out that the human brain is simply too complex, and that we have not yet made enough progress in understanding it. Cerf said that comparing the number of neuron-like computational elements in a machine to the number of neurons in the human brain is simply not useful -- the human ones are far more powerful and complex.

Cerf went on to say that he thought Stephen Hawking's warning about the threat of AI to humanity is just wrong. Cerf said that the much bigger threat is human programmers. "Everyone who writes software puts users at risk, especially if that software has autonomy.  That's the biggest threat."

I also really liked Bobby Schnabel's comment that we are making a mistake if we focus only on the technology. "Computing is increasingly a sociotechnological field," he said. Social science and social needs will define the future of the computational world.

I asked a question at the tail end of the discussion period.

Members of this panel are probably aware that the first Turing Award winner, Alan Perlis, was also the first to argue for Computing for All. He argued explicitly that we should teach computing, and specifically programming, to everyone at every university. He argued for the benefits of giving computing to everyone.

At the same 1961 event, C.P. Snow, author of The Two Cultures, warned that we needed to teach everyone about computing so that they could understand the forces that are influencing their lives. Snow foresaw that computing would be used to make decisions that would change the lives of everyday people. He wanted everyone to understand how computing worked, so that they could at least ask the right questions about it.

Now, we have big, data-driven machine learning systems whose outputs no one understands. They produce clusters, and we don't know what the clusters mean.

Is Snow's point now irrelevant? Is it no longer possible to teach people the computing that's in their lives? Or do we place a requirement on the developers of these new AI systems that the systems be understandable, that they explain themselves?
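To make the clustering point concrete, here is a minimal sketch -- my illustration, not anything presented at the panel -- using scikit-learn's k-means. The algorithm happily partitions the data, but the labels it assigns carry no human meaning on their own.

    # Illustration only: an unsupervised learner produces clusters,
    # but nothing in its output says what the clusters *mean*.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic "big data": 300 points drawn from three overlapping blobs.
    X = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
        rng.normal(loc=4.0, scale=1.0, size=(100, 2)),
        rng.normal(loc=2.0, scale=1.5, size=(100, 2)),
    ])

    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(model.labels_[:10])      # cluster IDs like [1 1 2 ...] -- labels, not explanations
    print(model.cluster_centers_)  # coordinates of the centers, but no "why"

The system answers "which cluster?" precisely but "why this cluster?" not at all, and that gap is exactly what Snow would want citizens to be able to ask about.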

Two of the panelists responded, but they both jumped on the notion that we have to teach programming in order to teach computing. They both argued that we don't. (I didn't get a chance to respond -- we do have to teach programming, too. You don't ask students to understand mathematics, algebra, or calculus without giving them written digits and equations. Notations matter and, designed well, they help learning.) None of the panelists responded to my larger point: as we move toward an AI and quantum definition of computing, do we give up on helping the average citizen understand these forces that influence their lives? Especially when even we computer scientists don't understand them?

I end with a selfie with ACM SIGCSE Chair, Amber Settle, at the TURC Banquet.


Comments


Kathi Fisler

"Don't we have to teach programming to understand AI computation" is an underspecified question.
It fails to specify _which machine_ people might need to understand (via programming). I suspect most people hear "teach programming" and think about learning to control a single process through basic constructs like loops, conditionals, and so forth. But the machine in question here isn't a simple processor with stack/memory/registers/etc. The machine is a system that integrates data and patterns from multiple sources, using statistical inference to develop rules for making future predictions.

When I think of conversations with non-CS family and friends about issues like online privacy, I'm struck by the inaccuracies in their models of data sharing/provenance/etc. They can't understand the tools or threats because they've conflated all sorts of data operations and boundaries that are highly relevant. Teaching these folks conventional programming wouldn't help. Teaching them a more accurate model of cloud-based applications, where data live, how data move, etc. (and how to "program" this model) would have a chance of being useful.

I agree with Mark that "programming" teaches a model of a machine and its notations, and that such learning is important for understanding the tools around us. But we have to center this on the right machine to program in the first place. Our default interpretation of "programming" fixes on the wrong machine.

Kathi Fisler


