
Communications of the ACM

BLOG@CACM

Software Development and Crunch Time; and More



Ruben Ortega's "Software Development and Crunch Time"

http://cacm.acm.org/blogs/blog-cacm/70922

Given the increased risk of burnout for an extended "crunch time," why do developers put up with it?

For software developers, crunch time is the period before a major product milestone when team members are asked to put in extra effort to get the product finished by a specific delivery date. In practice, this can be a horrific stretch of 80-plus-hour weeks that goes on for months as the team scrambles to deal with bugs, last-minute feature requests and modifications, and looming milestones. For game companies and large Internet retailers in particular, the mantra of "Christmas never slips" means that crunch time occurs during the summer so products can be released by the fall and be available between Thanksgiving and Christmas. Recently, the wives of Rockstar Games employees posted an open letter to the company's management about the impact of crunch time on their lives. The company was demanding 6–7 days a week of 12- to 16-hour days, and the toll included mental, physical, and emotional strain on the employees and their families.

Reading the discussions on Slashdot about Rockstar Games' working conditions highlighted that this problem is industrywide. As a developer and manager, I have worked on a number of projects at various startups whose crunch times lasted longer than I thought was realistic. When I was a young engineer working on Amazon Auctions, pulling an all-nighter was a badge of honor. Eventually, I discovered that most of the code written during those A.M. hours will likely be thrown away. After a few crunch times, I learned to be a better self-advocate and was able to sensibly set expectations about what combination of features, quality, and testing I could deliver by a given date. When I made the transition from developer to manager, I was glad to have had that experience so I could advocate for my teams. Although I couldn't always get rid of crunch times, I worked to keep their durations as short as possible.

Why do developers put up with crunch time? I believe the reason is as simple as "progress." Two researchers who analyzed roughly 12,000 diary entries from knowledge workers found that a sense of making progress was their single most important motivator.

As long as workers believe they are making headway on delivering their product, they get an intrinsic reward that motivates them to keep working. When a team is making visible progress on a delivery, the shared effort becomes self-reinforcing and encourages everyone to keep going. On Amazon Auctions, I worked on implementing search for the system and would nap while another team member delivered new catalog content. By the time I returned, we would integrate our code, producing a complete set of auction search results. The work was rewarding despite our working through weekends to complete the project. The progress was beautiful and easy to see: one day the system showed mockups for search results, and the next day the results were fed from live data. The intrinsic reward of making progress and working with the team to deliver helped combat the potential for burnout.

It is unrealistic to expect to deliver any project without going through some crunch time. Although progress helps motivate employees during those periods, ineffective project planning can stretch crunch time to the point where progress alone is not enough to sustain motivation. If excessive crunch time keeps recurring, the employees, the company's most valuable resource, should work to change the organization or be prepared to move to a more supportive company. The books Peopleware and Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency are great reminders of why we should work hard to take care of our teams.


Mark Guzdial's "The Impact of Open Source on Computing Education"

http://cacm.acm.org/blogs/blog-cacm/72144

We had a Georgia Tech alum, Mike Terry (now at Waterloo), visit us a couple of weeks ago. Mike's research is on usability practices in open source. I got a chance to chat with Mike, and we talked about the impacts of open source on computing education, such as high school students getting started with computing by working on open source development. Overall, though, I came away concerned about what the growth of open source development means for the future of computing education.

At a time when we are trying to broaden participation in computing, open source development is even more closed and less diverse than commercial software development. It is overwhelmingly white, Asian, and male. Some estimates suggest that less than 1% of open source developers are female.

Many kids and parents worry that all computer science jobs are being offshored and that it's not worth studying computing. As more and more of the software we use daily is created via open source development, I wonder if kids and parents will hear the message, "Most software developers work for free, or at least have to work for free for years before they can become professional and get paid for their work." Of course, that's not true. Neither is it true that all IT jobs are being offshored, but that's still what some people believe.

One of our challenges in computing education is convincing people that computing is broad and about more than programming. Open source values code above all, or as Linux's originator Linus Torvalds said, "Talk is cheap. Show me the code." We're trying to convince students that talk is also valuable in computing.

Finally, Mike's talk was about how common usability practices are rare in open source development. Of course, that's a concern in itself, but it's particularly problematic for newcomers. As students develop toward being professionals, they frequently engage in a process that educators call legitimate peripheral participation (LPP). It's LPP when you start out in a company picking up trash (doing something legitimate on the periphery) and, in so doing, figure out what happens in the company. Students can get started in software development at a company by doing tasks that aren't directly about writing software but are about the whole enterprise; legitimate peripheral tasks like writing documentation or running subjects in usability testing serve as stepping stones into the process. If you don't have usability testing, you don't have that path into the process. Breaking into an open source development process is hard, and that keeps more students out than it invites in.

I wrote on this topic in my regular blog and was surprised at the response. I learned that it is not acceptable to criticize religion, Santa Claus, or open source development; it's a "good" that should simply be accepted as such. I disagree. Open source development does generate enormous good, but it could do more good if it improved its practices. Open source development is hard to change because of its distributed nature. Still, open source developers should worry about the messages they send future developers, especially if they hope to grow the pool of development talent they attract.


Daniel Reed's "Paucity to Plethora: Jevons Paradox"

http://cacm.acm.org/blogs/blog-cacm/72373

Those of us of a certain age remember when the university computer (note the singular) was a scientific and engineering shrine, protected by computer operators and secure doors. We acolytes extended offerings of FORTRAN, ALGOL, or COBOL via punched card decks, hoping for the blessings that accrued from a syntactically correct program that compiled and executed correctly.

The commonality across all our experiences was the need to husband computer time and plan job submissions carefully, particularly when one's job might wait in the queue for six to ten hours before entering execution. I distinctly remember spending many evenings laboriously examining my latest printout, identifying each syntax error and tracing the program flow to catch as many logic errors as possible before returning to the keypunch room to create a new punched card deck.

Because computing time was scarce and expensive, we devoted considerable human effort to manual debugging and optimization. Today, of course, my wristwatch contains roughly as much computing power as that vintage university mainframe, and we routinely devote inexpensive computing time to minimize human labor. Or do we?

Yes, we routinely use WIMP interfaces for human-computer interaction, cellular telephony is ubiquitous, and embedded computers enhance everyday objects. However, I suspect much of computing is still too socially conditioned by its roots in computational paucity to fully recognize the true opportunity afforded by computational plethora.

Many of us are wed to a stimulus-response model of computing, where humans provide the stimulus and computers respond in preprogrammed ways. In a world of plethora, computing could glean our work, personal, and emotional context, anticipating information queries and computing on our behalf rather than merely in response. My computer could truly become my assistant.
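
As a loose sketch of that distinction (mine, not Reed's; the Context type and prefetch_briefing function are purely hypothetical), the Python below contrasts a stimulus-response lookup, which waits for a query, with an anticipatory assistant that infers likely needs from context and prepares answers before being asked.

```python
# Illustrative sketch only: stimulus-response computing vs. an anticipatory,
# context-aware assistant. All names here are hypothetical, not a real API.
from dataclasses import dataclass, field


@dataclass
class Context:
    """A slice of the user's work and personal context (made-up fields)."""
    next_meeting: str = "budget review at 14:00"
    recent_queries: list[str] = field(default_factory=list)


def answer_query(query: str) -> str:
    """Stimulus-response: the computer does nothing until explicitly asked."""
    return f"results for '{query}'"


def prefetch_briefing(ctx: Context) -> dict[str, str]:
    """Anticipatory: infer likely needs from context and prepare answers
    before any query arrives."""
    anticipated = [
        f"agenda for {ctx.next_meeting}",
        f"documents related to {ctx.next_meeting}",
    ]
    return {need: answer_query(need) for need in anticipated}


if __name__ == "__main__":
    ctx = Context(recent_queries=["Q3 forecast"])
    print(prefetch_briefing(ctx))  # ready before the user ever asks
```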

In economics, the Jevons paradox posits that a technological increase in the efficiency with which a resource can be used stimulates greater consumption of the resource. So it is with computing. I believe we are just at the cusp of the social change made possible by our technological shift from computational paucity to computational plethora.
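
To make the paradox concrete, here is a minimal back-of-the-envelope model in Python; the constant-elasticity demand curve and the numbers are my illustrative assumptions, not figures from Reed's post. When demand for computing is elastic enough, each efficiency gain lowers the effective cost of useful work so much that total resource consumption rises.

```python
# Toy model of the Jevons paradox under an assumed constant-elasticity demand curve.
def resource_consumed(efficiency: float, base_demand: float = 100.0,
                      elasticity: float = 1.5) -> float:
    """Total raw resource consumed after an efficiency gain.

    Useful work demanded grows as efficiency**elasticity (cheaper work,
    more of it wanted); raw resource used is that demand divided by
    efficiency. Elasticity > 1 means demand grows faster than the
    efficiency gain can offset.
    """
    useful_work_demanded = base_demand * efficiency ** elasticity
    return useful_work_demanded / efficiency


if __name__ == "__main__":
    for eff in (1.0, 2.0, 4.0):
        print(f"efficiency x{eff:.0f}: resource consumed = {resource_consumed(eff):.1f}")
    # Prints 100.0, 141.4, 200.0: with these assumptions, each efficiency
    # gain *raises* total consumption -- the Jevons paradox.
```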


Authors

Ruben Ortega is an engineering director at Google.

Mark Guzdial is a professor at the Georgia Institute of Technology.

Daniel Reed is vice president of Technology Strategy & Policy and the eXtreme Computing Group at Microsoft.


Footnotes

DOI: http://doi.acm.org/10.1145/1785414.1785419


©2010 ACM  0001-0782/10/0700  $10.00

