
Communications of the ACM

BLOG@CACM

Refining Students' Coding and Reviewing Skills


http://bit.ly/1kf07PP
June 19, 2014

Code reviews (http://bit.ly/Ve4Hbw) are essential in professional software development. When I worked at Google, every line of code I wrote had to be reviewed by several experienced colleagues before being committed to the central code repository. The primary stated benefit of code review in industry is improving software quality (http://bit.ly/1kvYv4q), but an important secondary benefit is education. Code reviews teach programmers how to write elegant and idiomatic code using a particular language, library, or framework within their given organization. They give experts a channel for passing their domain-specific knowledge on to novices: knowledge that cannot easily be captured in a textbook or instruction manual.

Now that I have returned to academia, I have been thinking a lot about how to adapt best practices from industry (http://bit.ly/1iPU31a) to my research and teaching. I have spent the past year as a postdoc (http://bit.ly/1qRnowt) in Rob Miller's (http://bit.ly/1mKQybj) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL, http://www.csail.mit.edu/), where I have watched him deploy some ideas along these lines. For instance, one of his research group's projects, Caesar (http://bit.ly/1o9AXVg), scales up code review to classes of a few hundred students.

In this article, I describe a lightweight method Miller developed for real-time, small-group code reviews in an educational setting. I cannot claim any credit for this idea; I am just the messenger.


Small-Group Code Reviews for Education

Imagine you are a professor training a group of students to get up to speed on specific programming skills for their research or class projects.

Ideally, you would spend lots of one-on-one time with each student, but obviously that does not scale. You could also pair them up with senior grad students or postdocs as mentors, but that still does not scale well. Also, you now need to keep both mentors and mentees motivated enough to schedule regular meetings with one another.

Here is another approach Miller has been trying: use regular group meeting times, where everyone is present anyway, to do real-time, small-group code reviews. Here is the basic protocol for a 30-minute session:

  1. Gather together in a small group of four students plus one facilitator (for example, a professor, TA, or senior student mentor). Everyone brings their laptop and sits next to one another.
  2. The facilitator starts a blank Google Doc and shares it with everyone. The easiest way is to click the "Share" button, set permissions to "Anyone who has the link can edit," and then send the link to everyone in email or as a ShoutKey shortened URL (http://bit.ly/1mUHq8U). This way, people can edit the document without logging in with a Google account.
  3. Each student claims a page in the Google Doc, writes their name on it, and then pastes in one page of code they have recently written. They can pass the code through an HTML colorizer (http://bit.ly/1jKDaE9) to get syntax highlighting. There is no hard and fast rule about what code everyone should paste in, but ideally it should be about a page long and demonstrate some nontrivial functionality. To save time, this setup can be done offline before the session begins.
  4. Start by reviewing the first student's code. The student first gives a super-quick description of their code (<30 seconds) and then everyone reviews it in silence for the next four minutes. To review the code, simply highlight portions of text in the Google Doc and add comments using the "Insert comment" feature. Nobody should be talking during these four minutes.
  5. After time is up, spend three minutes going around the group and having each student talk about their most interesting comment. This is a time for semi-structured discussion, organized by the facilitator. The magic happens when students start engaging with one another and having "Aha! I learned something cool!" moments, and the facilitator can more or less get out of the way.
  6. Now move on to the next student and repeat. With four students each taking ~7.5 minutes, the session fits within a 30-minute time slot (see the timing sketch below). To maintain fairness, the facilitator should keep track of time and cut people off when time expires. If students want to continue discussing after the session ends, they can hang around afterward.
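
To make the arithmetic concrete, here is a minimal sketch of a facilitator timer in JavaScript (assuming Node.js) that prints a prompt at the start of each phase; the student names and phase labels are illustrative placeholders, not part of Miller's protocol:

    // facilitator-timer.js: schedule a console prompt for each phase
    // of the 30-minute session described above.
    var students = ["Alice", "Bob", "Carol", "Dave"]; // hypothetical names

    var phases = [
      { label: "describe your code", seconds: 30 },
      { label: "silent review (comment in the Google Doc)", seconds: 4 * 60 },
      { label: "discuss the most interesting comments", seconds: 3 * 60 }
    ];

    var offsetSeconds = 0;
    students.forEach(function (student) {
      phases.forEach(function (phase) {
        var startMinutes = offsetSeconds / 60; // captured per phase
        setTimeout(function () {
          console.log("[" + startMinutes + " min] " + student + ": " + phase.label);
        }, offsetSeconds * 1000);
        offsetSeconds += phase.seconds;
      });
    });

    // Four students x (0.5 + 4 + 3) minutes each = 30 minutes total.
    setTimeout(function () {
      console.log("Session over!");
    }, offsetSeconds * 1000);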

That is it! It is such a simple, lightweight activity, but it has worked remarkably well so far for training undergraduate researchers in our group.


Benefits

Lots of pragmatic learning occurs during our real-time code reviews. The screenshot in the accompanying figure shows one of my comments about how the student can replace an explicit JavaScript for-loop with a jQuery .each() function call (http://bit.ly/1sYfa9O). This is exactly the sort of knowledge that is best learned through a code review rather than by, say, reading a giant book on jQuery. Also, note that another student commented on improving the name of the "audio" variable. A highly motivating benefit is that students learn a lot even when reviewing someone else's code, not just when their own code is being reviewed.
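
For readers unfamiliar with that jQuery idiom, here is a minimal sketch of the kind of rewrite the comment suggested (assuming jQuery is loaded on the page); the audioClips array and the playClip() helper are hypothetical stand-ins for the student's actual code:

    var audioClips = ["intro.mp3", "verse.mp3", "chorus.mp3"]; // hypothetical data

    function playClip(name) {
      console.log("playing " + name); // placeholder for real playback logic
    }

    // Before: an explicit for-loop with manual index bookkeeping.
    for (var i = 0; i < audioClips.length; i++) {
      playClip(audioClips[i]);
    }

    // After: jQuery's $.each() handles the iteration.
    $.each(audioClips, function (index, clip) {
      playClip(clip);
    });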

With a 4-to-1 student-to-facilitator ratio, this activity scales well in a research group or small project-based class. This group size is well-suited for serendipitous, semi-impromptu discussions and eliminates the intimidation factor of sitting one-on-one with a professor. At the same time, it is not large enough for students to feel anonymous and tune out like they would in a classroom. Since everyone sees who else is commenting on the code, there is some social pressure to participate.

A side benefit of holding regular code reviews is that they force students to make consistent progress on their projects so they have new code to show at each session. This added accountability can keep momentum going on research projects, which, unlike industry projects, have no market-driven shipping goals to motivate daily progress. At the same time, it is important not to let the activity feel like a burdensome chore, since that might actually drain student motivation.


Comparing with Industry Code Reviews

Industry code reviews are often asynchronous and offline, but ours happen in a real-time, face-to-face setting. This way, everyone can ask and answer clarifying questions on the spot. Everything happens within a concentrated 30-minute time span, so everyone maintains a shared context without getting distracted by other tasks.

Also, industry code reviews are often summative assessments (http://bit.ly/1z8kOH1): your code is accepted only if your peers give it a good enough "grade." That policy makes sense when everyone is contributing to the same production code base and wants to keep quality high. Here, reviews are purely formative (http://bit.ly/1jKElU6): students know they will not be graded, and they understand the activity is done solely for their own educational benefit.


Parting Thoughts

When running this activity, you might hit the following snags:

  • Overly quiet students. It usually takes a few sessions before students feel comfortable giving critiques to their peers and receiving them in return. It is the facilitator's job to draw out the quieter students and to show that everyone is there to help, not to put one another down. Also, not all comments need to be critiques; praise and questions are also useful. I have found that even the shyest students eventually open up once they see the value of this activity.
  • Running over time. When a discussion starts to get really interesting, it is tempting to continue beyond the three-minute limit. However, that might make other students feel left out since their code did not receive as much attention from the group, and it risks running the meeting over the allotted time. It is the facilitator's job to keep everyone on schedule.
  • Lack of context. If each student is working on their own project, it might be hard to understand what someone else's code does simply by reading it for a few minutes. Thus, student comments might tend toward the superficial (like "put an extra space here for clarity"). In that case, the facilitator should provide some deeper, more substantive comments. Another idea is to tell students to ask a question in their code review if they cannot think of a meaningful critique. That way, at least they get to learn something new as a reviewer.

Try running a real-time code review at your next group or class meeting, and email me at [email protected] with questions or suggestions.


Author

Philip Guo is an assistant professor of computer science at the University of Rochester.


Figures

Figure. The author comments on a student's code.



©2014 ACM  0001-0782/14/09



 
