
Communications of the ACM

BLOG@CACM

Computing Education Must Go Beyond Intuition: The Need For Evidence-Based Practice


Mark Guzdial


The highest-quality education practice is evidence-based education, defined by the US Department of Education as "the integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction" (see website). The movement toward evidence-based practice has already swept through medicine (see here), psychology (see here), and other fields.

Computing educators' practice would improve dramatically if we drew on evidence rather than intuition. Raymond Lister has been writing for years about the problem that computing education practitioners do not engage with evidence or with research on education (see his 2012 Inroads article). Raymond calls the knowledge that computing educators use in making teaching decisions "folk pedagogy" (see this article).

Folk pedagogy encourages what is believed to be best practice, but it cannot validate best practice.

We have evidence that computing teachers don't use evidence. Davide Fossati and I studied 14 computer science teachers from three different institutions (see our SIGCSE 2011 paper). Davide asked them about times when they had made a change in their teaching practice: why they made the change, and how they knew whether it was successful. They relied on their intuition, informal discussions with students, and anecdotes. Not a single teacher used evidence such as class performance on a test problem or homework.

Without evidence, teachers rely on their intuition, informed by their experience. Sometimes that intuition may be informed by many years of experience. Sometimes that experience is not at all relevant.

The March 2015 issue of Inroads (linked here) has a special section on "The role of programming in a non-major, CS course." That's a topic of great interest to me, since I've been developing and studying a programming-based approach to teaching introductory computing that we have used with non-CS majors at Georgia Tech for over 10 years (see summary paper here). The first article (see here), by Richard Kick and Frances Trees, described the AP CS Principles course, its research-based development, and the experience at the pilot sites. The article by Steve Cooper and Wanda Dann (see here) presented findings from early research on teaching programming to children and evidence from courses at Stanford and CMU using Alice. The rest of the papers use no evidence.

Henry Walker's first article (see article here) argues that programming should not be a priority for a non-CS majors course.

Development of student problem-solving skills (with programming) requires time, so the inclusion of programming within a non-CS majors CS course comes at a price: significant other topics must be dropped.

The claim that "significant other topics must be dropped" is an empirical claim. Since there are many non-CS majors CS courses with programming described in the ACM literature (like this one at CMU and this data-centric one at Boston University and this one at Kalamazoo College and the one I developed at Georgia Tech), one could examine those courses and identify which significant topics were dropped in favor of programming.

Henry doesn't consider these contrasting cases, but he does offer recommendations of what a non-major student ought to know about CS. Are CS faculty the right ones to define the learning goals for students who do not plan on a career in computer science? When we designed Media Computation, we formed an advisory board of non-CS faculty to tell us what their students needed to know about computing. For example, a professor in Architecture wanted his students to understand the difference between Photoshop and a CAD tool for manipulating a diagram. While both can "extend a line," Photoshop manipulates pixels, whereas CAD tools manipulate a vector representation that can later be used to control automated devices like milling machines. I hadn't realized that this was an important data representation learning objective for Architecture.
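To make the distinction concrete, here is a minimal sketch (hypothetical code, not from the Media Computation materials) contrasting the two representations of "a line": a raster tool stores only the pixels the line covers, while a vector tool stores the geometry itself, which can be extended exactly and later drive a machine.

# Hypothetical sketch: raster vs. vector representations of a line.

# Raster (Photoshop-style): the line exists only as a set of colored pixels.
def rasterize_horizontal_line(x0, x1, y):
    """Return the set of pixel coordinates a horizontal line covers."""
    return {(x, y) for x in range(x0, x1 + 1)}

# Vector (CAD-style): the line is geometry; extending it is exact.
class VectorLine:
    def __init__(self, start, end):
        self.start, self.end = start, end

    def extend(self, factor):
        """Scale the line about its start point; no detail is lost."""
        (x0, y0), (x1, y1) = self.start, self.end
        self.end = (x0 + (x1 - x0) * factor, y0 + (y1 - y0) * factor)

pixels = rasterize_horizontal_line(0, 4, 2)   # {(0, 2), (1, 2), ..., (4, 2)}
line = VectorLine((0, 0), (10, 5))
line.extend(2)                                # endpoint is now (20, 10)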

Michael Goldweber makes two arguments in his paper against including programming in a non-CS majors course (see paper here). His first argument is like Henry's in that he offers what should be in a non-CS majors course. He suggests an "embarrassment model of course and curriculum development" which he describes as "one enumerates the topics for which, if their students did not know about/have experience with, one should feel embarrassment."

I assert that this topics list does not include programming, not because programming is without merit, but because the inclusion of programming requires a time commitment that then leaves an insufficient quantity of time for the key "embarrassment" topics. These include algorithmic problem solving, the use of abstraction to tame complexity, and the limits of computation...It is my position that it would be an embarrassment if a student coming out of a non-CS majors course could successfully program a solution to Selection Sort or write a program that draws a fractal tree, but have no idea of how to apply algorithmic problem solving to real-world problems.

Michael suggests an example of a real-world problem that he believes non-CS majors should be able to solve after their introductory CS course.

Given a graph representing cities and connecting highways, some of the cities house a Red Cross warehouse while one other city experiences a disaster; describe an algorithm for locating the closest Red Cross warehouse.

My suspicion, based on research evidence, is that few CS majors or non-CS majors would be able to solve this problem, even after several classes (with or without programming). Transfer of knowledge is hard to achieve, and students are particularly challenged to recognize that a real-world problem is related to knowledge that they have already learned (see the chapter on transfer in How People Learn). In the decades of studies that have looked for such transfer, the research evidence is that computing courses do not help students develop general problem-solving skills (see paper reference here). However, that's just my suspicion. We could gather evidence to see whether students get closer to achieving Michael's goal with a programming course or with a non-programming course. Michael doesn't offer any evidence.
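For what it's worth, Michael's warehouse problem maps onto a standard graph search, and a programming course could frame it exactly that way. Here is a minimal sketch in Python (the city graph and warehouse locations below are invented for illustration; with highway distances, Dijkstra's algorithm would replace the breadth-first search):

from collections import deque

def closest_warehouse(graph, disaster_city, warehouses):
    """Breadth-first search outward from the disaster city; the first
    warehouse city reached is the closest in highway hops."""
    seen = {disaster_city}
    queue = deque([disaster_city])
    while queue:
        city = queue.popleft()
        if city in warehouses:
            return city
        for neighbor in graph[city]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return None  # no warehouse reachable from the disaster city

# Hypothetical map: adjacency lists of cities connected by highways.
highways = {
    "Atlanta": ["Macon", "Athens"],
    "Macon": ["Atlanta", "Savannah"],
    "Athens": ["Atlanta", "Augusta"],
    "Savannah": ["Macon"],
    "Augusta": ["Athens"],
}
print(closest_warehouse(highways, "Atlanta", {"Savannah", "Augusta"}))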

Michael's second argument is about the cost of programming in time and tedium. He dismisses the use of languages such as "Pascal, Java, Python" in which students engage in "wrestling matches with compilers over a missing or misplaced semi-colon or squirrely bracket." He disparages graphical programming languages ("Scratch, Alice, Kodu") as a "children's introduction to programming."

Should the learning outcomes for a child's introduction to the field be what we want for one's singular university-level non-major's course?

My colleague Betsy DiSalvo recently published a paper about a study with African-American teens who learned both Python and Alice (see blog post about her study). There wasn't a clear winner. Rather, Betsy found that the preference depended on the careers that these students desired. Those who were interested in computer science as a career preferred Python. Those who were interested in media and design preferred Alice because they liked what they created, but also because the graphical nature made the high-level structure more evident. What one participant told Betsy in her evidence-based study sounds much like what Michael and Henry want students to achieve:

I like the top-down design (in Alice projects). We are able to break down the bigger problems into smaller problems and then even smaller problems, so it’s a simpler way to file through. I think you could probably apply top-down design to anything in life.

My biggest concern about making educational decisions without evidence is who is making those decisions for whom. Most CS faculty (as reflected in the authorship of these articles) are white and male. Because we are CS faculty, most of our experience is rooted in being students in STEM fields.

Our intuition about non-CS majors is most likely wrong. Most non-CS majors are not STEM students. Non-CS majors are much more diverse. We don't know their experience, their goals, or their desired careers. We should not assume that we know how to teach non-CS majors. We should not assume that we know what non-CS majors need to know. When we want to know what CS majors will need in their careers, we ask industry advisors (as the CS2013 curriculum process did; see article here). We know much less about what non-CS majors might need to know in their careers. We need to do the research and gather the evidence to determine what non-CS majors need to know about computing.

We need the humility to recognize what we don't know, and we need evidence to inform our decisions. You don't want your surgeon to apply the best practices of folk medicine on you. Our students deserve the best educational practices informed by evidence, not folk pedagogy informed by intuition.


Comments


Clif Kussmaul

Mark, I agree completely with your main points: CS educators should do more to base our practice on evidence rather than intuition, because that intuition is likely wrong. I would go further in several ways.
First, our intuition is likely wrong about CS and STEM majors too, even if we think we know what they need to know, since most of our students are unlike us.
Second, we should also leverage evidence-based practices that give us more direct evidence about our students as they learn. For example, approaches like process-oriented guided inquiry learning (POGIL) and peer instruction also help us observe students as they grapple with new ideas so we can respond more quickly to problems. When my students work through a POGIL activity, I see where they need help, I can respond immediately, and I know what to revise for the future. It's easy for lecturers and their students to overestimate what has been "learned" until it has to be applied.
On the other hand, I'm not sure how far behind we are in CS education - folk pedagogy still seems to be the norm on many campuses.

Clif Kussmaul
Muhlenberg College


Mark Guzdial

Completely agreed, Clif! I'm a huge fan of Peer Instruction (http://www.peerinstruction4cs.org/) because it gives me evidence about my class, what they know and what they don't.

Carl Wieman reports that over 70% of physics teachers (university and high school) are familiar with Physics Education Research results and use at least one finding in their teaching. We in CS Ed are not there yet.


Lisa Kaczmarczyk

Henry Walker and Michael Goldweber have written a response to this post with respect to the comments you made about their articles. As the editor of the special section that included their articles, I thought I'd take it upon myself to share the link to that response over on the ACM Inroads blog. http://tinyurl.com/mwqspgy

