
Communications of the ACM

Review articles

EarSketch: Engaging Broad Populations in Computing Through Music


EarSketch, illustration. Credit: Garry Killian

"Like, I thought coding was going to be boring and kind of just make me super-mad. It was going to be like tragic. But now that I've taken this class and I've seen all the things I can do with EarSketch and how that can be applied—like the same general concepts can be applied and expanded on to all these other aspects and different fields—it kind of opened up and made me kind of rethink my career choices like, 'Oh, maybe I actually want to pursue something in like IT or computer science.' Normally, you have like a one-sided opinion or view of coding. You don't really see it as being something creative and so personable. ... It just kind of opened up your world, like broadened your horizons in seeing all the career fields that actually use coding and how that plays a role in it, versus like this stereotypical view of what coding is."

This is a reflection from a high school student in an introductory computer science course. During a focus group, he discussed his changing perceptions of and interest in coding. This student's shift in perceptions about computing (its level of engagement, its potential for creativity, and its career relevance) exemplifies the critical importance of students' early academic experiences with computing. In other words, an engaging and expressive introductory computing course can significantly impact students' intention to persist in the field.

EarSketch (Figure 1), the learning environment and curriculum this student used in his course, engages students by emphasizing the personally expressive role of computing in the domain of music. EarSketch students learn elements of computing and sample-based music composition (that is, composition using musical beats, samples, and effects). They write Python or JavaScript code to algorithmically create music in popular genres and use fundamental computing concepts such as loops, lists, and user-defined functions to manipulate musical samples, beats, and effects.

Figure 1. The EarSketch learning environment includes a sound browser (left), code editor (center bottom), digital audio workstation (center top), and curriculum browser (right).

The computational thinking skills that underlie these activities have become central to how we create, communicate, experiment, evaluate, iterate, and innovate in the 21st century.39 Computer science is a core skill not only in a growing high-tech sector, but also for careers across many other domains; yet, computing is often seen by students as uncool,20 and approaches to teaching it may be uninspiring.26 African American and Latino students, as well as women, are vastly under-represented in computing courses compared with their male Caucasian and Asian counterparts. (Demographic data from the Advanced Placement Computer Science A exam10 clearly documents this trend.)

The integration of music into introductory computing education presents unique opportunities to engage students in the study of computing and to broaden participation in the field. Music is a ubiquitous part of human culture, with directly observable neurological foundations in the human brain.31 Students dedicate an enormous portion of their daily lives to music listening and sharing, and these activities play a crucial role in forming their cultural and social identity.2 A recent survey of high school students studying EarSketch, for example, reinforced the prevalence of music in students' lives: 59.8% of students reported spending three or more hours per day listening to music. Additionally, the rise of consumer-facing music software and apps, ranging from GarageBand to Magic Piano,16 has made computer-based music creation a ubiquitous practice—even for users without prior training in music or music technology.

In addition to music's potential use as a hook to engage broad student populations in computing, pedagogical connections between the two disciplines abound. Many musical concepts, structures, and processes map easily and naturally to computational thinking. For example, the abstraction of code segments into functions parallels the repetition—with variation—of phrases and sections of music, and the sequential representation of characters in a string mimics the encoding of rhythmic data in a drum sequencer.
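
For instance, the first of these mappings can be made concrete in EarSketch itself (its Python API is introduced later in this article). In the hedged sketch below, a user-defined function abstracts a four-measure musical section that is then repeated with varied material; the init()/finish() boilerplate and the loop constants are illustrative assumptions rather than details drawn from this article:

    # Sketch: a user-defined function as a reusable musical section.
    # Assumed: EarSketch boilerplate calls and illustrative loop constants.
    from earsketch import *

    init()
    setTempo(120)

    def section(start, drumLoop, melodyLoop):
        # One four-measure section: drums on track 1, melody on track 2.
        fitMedia(drumLoop, 1, start, start + 4)
        fitMedia(melodyLoop, 2, start, start + 4)

    # Repetition with variation: the same structure, different material.
    section(1, HOUSE_BREAKBEAT_003, TECHNO_SYNTHPLUCK_001)
    section(5, HOUSE_BREAKBEAT_003, DUBSTEP_BASS_WOBBLE_002)

    finish()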

Our work with EarSketch leverages this tremendous potential of combining music and coding and embraces two overarching design priorities:

  • EarSketch is designed to provide an "immediate opportunity to act"7 and to be musically expressive, even for students (and teachers) who have no previous background in either music or computing. Anyone can quickly begin making compelling music in EarSketch with just a few lines of code and audio loops from an included library (see the sketch following this list).
  • EarSketch is designed so students will perceive it to be authentic15,23,37 in both the computing and music domains. Its interface design and underlying functionality borrow heavily from standard music production and software development tools and practices. EarSketch uses programming languages that are pervasive in real-world computing practices. It also provides students with audio samples from popular genres created by well-known musicians that serve as the musical building blocks for their compositions.
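
As a hedged illustration of the first priority, the sketch below shows roughly what "a few lines of code" can look like. The fitMedia() call follows the API described later in this article; setTempo(), the init()/finish() boilerplate, and the loop constant are assumed details (any loop from the library would work):

    # A minimal EarSketch script: one drum loop on one track.
    # Assumed: boilerplate calls and the specific loop constant name.
    from earsketch import *

    init()
    setTempo(100)                                  # tempo in beats per minute
    fitMedia(ELECTRO_DRUM_MAIN_BEAT_008, 1, 1, 9)  # track 1, measures 1-8
    finish()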

In this article, we first summarize related work in music and computing. We then explain how we operationalize both the immediate opportunity to act and the perception of authenticity in a dual-domain—music plus computing—approach and we describe EarSketch's learning environment and curriculum within this framing. Finally, we summarize recent research findings with respect to these core ideas and to the impact of EarSketch on student engagement and intention to persist in computing.

Related Work

In algorithmic composition,12 musicians define a process that generates the musical events (such as, notes or sound objects) of a composition. This practice contrasts with writing a score, in which musicians directly and linearly specify the properties of each individual musical event.

The practice of algorithmic composition includes pre-computer examples (for example, Mozart's Musical Dice Game18), early experiments with composition on computers (for example, Lejaren Hiller and Leonard Isaacson's ILLIAC Suite21), domain-specific languages for computer music currently in widespread use (Max32 and SuperCollider28), and algorithmic features embedded in commercial music production software (Drummer in Apple's Logic[a]). Innovative signal processing and machine learning algorithms also underpin core techniques in audio production such as auto-tuning, time-stretching, and source separation.41 Recent advances in machine learning have also inspired efforts to automate entire phases of the music creation process such as music generation (for example, Google's Magenta22) and audio mastering (Landr[b]), following from earlier work in both the analog (Electronium8) and digital (Experiments in Musical Intelligence9) realms.

Most learning environments for computer science support some music or audio functionality, dating back as far as Logo.14 Popular learning environments for computing like Scratch33 and Pencilcode6 can play back audio files as well as lists of musical pitches and rhythms. These tools have been used to create sophisticated musical systems and performances.35

Other educational programming environments have been designed specifically for music. For example, JythonMusic is a Python programming environment and curriculum that supports the creation of both musical scores and interactive systems.27 Sonic Pi is a Ruby-based programming environment and curriculum that focuses on live coding (that is, modifying code while it is executing to dynamically change the musical output).1

EarSketch draws inspiration from these and other creative coding projects that have demonstrated success in introductory computing education, but differs from prior work with respect to immediate opportunity to act and perceived authenticity.

Musical composition can be approached in terms of dimensions of musical objects (for example, pitch, harmony, timbre) as well as hierarchies of musical time (for example, microsound, sound object, and phrase).34 Most algorithmic composition environments, whether designed for educational or professional use, focus on either the sound object level (such as lists of notes in Bau et al.6 or Cope9), on the subsymbolic level (such as sound synthesis descriptors in McCartney28), or sometimes on both as in Puckette32 and Aaron.1 In contrast, EarSketch focuses primarily on the phrase level of music: students recombine audio loops—each several seconds in duration—to create a new composition. Remixing is the dominant compositional activity: programmatically arranging these loops in simultaneity and succession on a multi-track timeline and adding effects and automations to those tracks. In other words, EarSketch operates at a much higher level of abstraction than most other environments: it focuses more on working with longer sections of audio than on manipulating the individual musical elements that comprise each of those blocks.

By focusing on composition at this higher level of abstraction, EarSketch provides students an immediate opportunity to act in the musical domain. No prior experience reading music notation or playing an instrument is necessary, and EarSketch neither requires nor suggests that such skills are needed to make compelling music. While the curriculum does introduce key aspects of music technology (like multi-track editing and effects) and the basics of musical form and time, it avoids topics such as musical keys, scales, harmony, and notation. Because EarSketch is primarily taught within computer science classrooms, most teachers are not musicians, and so there is no expectation that teachers have any background in music either. This high level of abstraction in musical composition does sometimes constrain the modalities of creativity in EarSketch; compared to Manaris27 or Pyknon[c], EarSketch's design encourages the creation of repetitive music while discouraging the creation of lower-level musical content from scratch (such as melodies).

EarSketch not only offers an immediate opportunity to act in terms of these low barriers to entry, but also in terms of the speed with which students can create music. Within an hour of learning EarSketch, students are already writing scripts that create complete songs and are iteratively extending and revising them based on their musical preferences and intuitions.

EarSketch is also designed to be perceived by students as authentic in both the computing and music domains. The authenticity of a learning experience, according to Lee and Butler,23 is based on the interrelated authentic learning practices of: having personally meaningful learning experiences; learning that relates to the world outside of the learning context; learning that encourages thinking within a particular discipline (for example, music composition); and allowing for assessment that reflects the learning process. Thick authenticity, according to Shaffer and Resnick,37 meets all of these requirements in a single approach/system. Guzdial and Tew15 argue that it is more important students perceive a learning experience to be authentic than that they learn in a manner that is completely consistent with real-world practices.

Students perceive EarSketch to be authentic across computing and music domains (related research findings are discussed later). Learning with EarSketch is personally meaningful to students who can create music in styles and genres that they like. EarSketch's use of popular programming languages, its reliance on multi-track audio editing paradigms in its interface and API design, and its library of sounds created by well-known musicians (as we will explore next) emphasize the relationships between the learning environment and real-world practices in both the computing and music industries. The EarSketch curriculum builds upon these connections by incorporating appropriate computing and music skills and by assessing students through projects that further emphasize the real-world relevance of students' learning.

Unlike systems such as McCartney,28 Puckette,32 and Aaron,1 EarSketch is not intended for use by algorithmic music practitioners and researchers. EarSketch's focus on immediate opportunity to act, a high level of abstraction, and a connection to multitrack audio editing paradigms leads to a feature set that fully supports an introductory computer science curriculum. The design resulting from these priorities, however, precludes support for lower-level audio synthesis, signal processing, and symbolic music manipulation features that are common across programming environments designed specifically for musicians creating algorithmic music. Ariza3 provides a thorough overview of algorithmic computing environments designed for that distinct use context.

Next, we further describe the EarSketch learning environment and curriculum in the context of immediate opportunity to act and perceived authenticity.

Learning Environment

The EarSketch learning environment is a browser-based application that uses modern Web standards and the Web Audio API[d] to integrate a code editor, language interpreter, digital audio workstation (DAW), loop library, and curriculum viewer within a single-window interface.25 The interface (Figure 1) borrows common design cues from both IDEs and DAWs, such as a central code editor and audio view flanked by swappable sidebars for file management, sharing, and reference.

In the EarSketch code editor, students write code in Python or JavaScript, using either a text editor or a blocks-based visual code editor.5 Regardless of language or editor chosen, they use the same application programming interface (API) to create music. The use of programming languages popular in real-world practice emphasizes the real-world dimension of authenticity, as well as the transferability of skills to other computational domains and to other educational and career contexts.

The code editor in Figure 1 shows a simple EarSketch program in Python that incorporates a few of the most common API functions. The fitMedia() function on line 7 places an audio loop on a particular track at specified start and end times, repeating the audio as necessary to fill the requested duration. The setEffect() functions on lines 16 and 17 add effects to a track. Line 17 adds a delay (that is, recurring echo) effect, and line 16 adds a volume fade-in in the opening measures of the song.

The makeBeat() function on line 13 is one of the few API functions that works at a sound object (note) level instead of the audio loop (phrase) level. Despite this, it still provides students an immediate opportunity to act because of its focus on rhythm rather than pitch. Following the paradigm of a step or drum sequencer, it divides a measure of music into steps, with the musical contents of each step represented by a character in a string. In this manner, a student can easily create a rhythm with a string, listen to it, and iteratively modify it until they are satisfied with the musical result. All of these EarSketch functions emphasize the disciplinary dimension of authenticity, encouraging students to think in terms of the multi-track paradigm that is ubiquitous in music creation and production.
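
Collected into a single hedged sketch, a script along the lines of the one described above might read as follows. The behaviors of fitMedia(), makeBeat(), and setEffect() follow the descriptions in this section; the sound constants, effect parameter constants, and init()/finish() boilerplate are assumptions:

    # Sketch of a script in the style of Figure 1 (not the exact code shown
    # there). Assumed: boilerplate calls and sound/effect constant names.
    from earsketch import *

    init()
    setTempo(120)

    # fitMedia(): repeat a synth loop on track 1 across measures 1-8.
    fitMedia(TECHNO_SYNTHPLUCK_001, 1, 1, 9)

    # makeBeat(): one measure of a step-sequencer rhythm on track 2.
    # Each character is one sixteenth-note step: "0" plays the sample,
    # "-" is a rest, and "+" sustains the previous step.
    makeBeat(OS_SNARE03, 2, 1, "0---0---0-0-0---")

    # setEffect() with start and end values: a volume fade-in over the
    # opening two measures of track 1...
    setEffect(1, VOLUME, GAIN, -60, 1, 0, 3)
    # ...and with a single value: a constant delay (recurring echo) on track 2.
    setEffect(2, DELAY, DELAY_TIME, 250)

    finish()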

The EarSketch API includes additional functions for tasks such as analyzing audio for its amplitude or brightness, importing files and images to use as datasets, and manipulating the strings used in makeBeat(). Because EarSketch is targeted at introductory computing students, the API does not support advanced computational features (for example, deep learning), and support for signal processing is limited to using 16 predefined audio effects.
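
The brief sketch below suggests how these additional functions might be used; the helper names (analyze(), reverseString(), shuffleString()), the feature and effect constants, and the loop names are our assumptions about the API rather than details stated in this article:

    # Sketch: audio analysis and beat-string manipulation.
    # Assumed: all function, feature, and sound constant names.
    from earsketch import *

    init()
    setTempo(120)

    beat = "0-00-00-0+++0-0-"
    makeBeat(OS_KICK05, 1, 1, beat)
    # Derive rhythmic variations by manipulating the beat string.
    makeBeat(OS_KICK05, 1, 2, reverseString(beat))  # retrograde rhythm
    makeBeat(OS_KICK05, 1, 3, shuffleString(beat))  # randomized rhythm

    # Analysis maps a sound to a normalized feature value; here, a loop
    # reported as especially bright is tamed with a low-pass filter.
    fitMedia(TECHNO_SYNTHPLUCK_001, 2, 1, 5)
    if analyze(TECHNO_SYNTHPLUCK_001, SPECTRAL_CENTROID) > 0.5:
        setEffect(2, FILTER, FILTER_FREQ, 2000)

    finish()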

When users run their code, the results of execution are displayed in a digital audio workstation (DAW) panel that closely mimics the multitrack displays found in music production software (see the center top of Figure 1). Students can see and hear the results of code execution, control playback with transport controls, export their audio for use in other music software, or share it directly online. Unlike conventional DAWs, they cannot directly modify audio or effects in the graphical interface: they must accomplish this through changes in code.


EarSketch includes ~4000 prerecorded sound samples accessible via a sound browser sidebar. The sound browser pane mimics the functionality of similar interface panels in DAWs, allowing users to search and filter sounds by artist, genre, and instrument. Sounds are grouped into collections that contain loops in the same style and key and are designed to fit well together. By using loops within the same collection, novice users are easily able to create music that is stylistically, harmonically, and rhythmically coherent, even without knowledge of the music theory behind these elements. The collections, which cover a wide range of popular genres (for example, hip hop, dubstep, EDM, and pop), were created by sound designer and electronic musician Richard Devine and by Young Guru (Figure 2), Jay-Z's Grammy-nominated audio engineer and DJ. Students can also upload their own sounds, record sounds directly within EarSketch, and import them from Freesound[e], a large online collection of Creative Commons licensed sounds. This approach encourages students to identify musical genres and content that are personally meaningful to them and to incorporate this content into their own work.

Figure 2. Audio engineer Young Guru, who created many of the sounds in the EarSketch loop library, reviews an EarSketch student's project.

An additional sidebar displays instructional materials for students, including text, runnable code examples, videos, multiple-choice questions, and slides. These are part of the EarSketch curriculum for Computer Science Principles.

Curriculum

The EarSketch curriculum is aligned with the programming standards of the College Board's Advanced Placement (AP) Computer Science Principles (CSP) course, as well as a related course that is a standard in the state of Georgia. AP CSP was launched in the fall of 2016 with a goal to offer a rigorous introductory curriculum that broadens participation in computer science. The course introduces students to the creative aspects of programming, abstractions, algorithms, large datasets, the Internet, cybersecurity, and the impacts of computing across multiple domains.4

We aligned the EarSketch curriculum to CSP because of a shared goal of broadening participation in computing through a creative and authentic approach. CSP's curricular framework is broader than traditional computer science courses, with a focus on collaboration, analysis, communication, creativity, and connections to other disciplines. In contrast to other introductory computing courses, CSP is language agnostic. It does not mandate a specific programming language or problem domain: students submit performance tasks created with a programming language/environment of their choice, and they take a language-agnostic end-of-course exam. This all facilitates integrating EarSketch into the course.

The EarSketch curriculum for CSP consists of a ~12-week module within the course that covers all of the CSP learning objectives related to programming and many of the objectives for creativity, abstraction, and algorithms. While we could have created a full-year CSP curriculum focused exclusively on music and EarSketch, we believe students should be exposed to multiple domains that are impacted by computing in addition to music; therefore, the EarSketch curriculum can readily be combined with curricula from Code.org[f] or the Beauty and Joy of Computing[g] to implement a full-year CSP course.

The EarSketch CSP module is organized into three units. Each unit has an authentic challenge that requires the student to code musical concepts to satisfy the musical and technical criteria of the challenge. As an example, the first challenge requires the student to select a client that could be a business in their community or a school organization. The student must develop a 10- to 15-second musical introduction for a client advertisement that applies research shared with the student on how tempo and pitch affect mood. In addition, the student must apply musical effects, like volume fades or pitch shifts, to help create this mood.
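
The hedged sketch below shows one possible shape for a student response to this first challenge, targeting a calm mood through a slow tempo, a fade-in, and a downward pitch shift; the loop constant and effect constant names are assumed details:

    # Sketch: a short, calm advertisement intro (one possible approach).
    # Assumed: boilerplate calls and the loop/effect constant names.
    from earsketch import *

    init()
    setTempo(80)  # a slower tempo, chosen here for a calmer mood

    # Four measures at 80 bpm is roughly 12 seconds of music.
    fitMedia(RD_WORLD_PERCUSSION_MAINBEAT_15, 1, 1, 5)

    # A volume fade-in over the first two measures...
    setEffect(1, VOLUME, GAIN, -60, 1, 0, 3)
    # ...and a slight downward pitch shift (in semitones) to soften the tone.
    setEffect(1, PITCHSHIFT, PITCHSHIFT_SHIFT, -2)

    finish()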

Students share their music and code with their classmates and teacher to see if the intended mood is elicited and also discuss their code. Based on the constructive feedback they receive, they then iterate on their creation, share with their client to receive additional feedback, and further iterate to reach a final product. Students also use a rubric check sheet and write a justification of how their programming artifact fulfills the technical and artistic requirements of the project.

In open-ended projects such as this challenge, there is no single correct solution for an assignment. Students must collaborate and communicate with their classmates, their teacher, and external partners to iteratively refine the project goals, assess work in progress, and devise new musical and computational strategies to address feedback. The EarSketch CSP module follows this studio-based learning (SBL)19 approach across all three units: designing an artifact; presenting work to peers and teachers, along with a detailed justification of the decisions made; discussing the work of peers and offering constructive questions and feedback; and revising work based on the feedback of others.

Many computing teachers are unfamiliar with this approach and are also new to teaching CSP and to the domain of music. We have thus developed scaffolding and support for teachers in three areas: teaching materials that include day-by-day lesson plans, slides, worksheets, mini-tasks, videos, project descriptions and rubrics, assessments, and integration guides; face-to-face and online professional development that introduces teachers to EarSketch, the curriculum, and these new pedagogical practices; and a community where teachers can ask questions, share materials, and review additional training resources both on an online website and at a series of in-person events.

Findings

We have measured pre-to-post changes in EarSketch students' attitudes toward computing, primarily in CSP courses. Like other CS educational interventions, EarSketch seeks to generate pre-to-post gains in students' CS content knowledge, and it has produced statistically significant gains across multiple research studies, with effect sizes that place learning within Hattie's17 zone of 'typical teacher effects' and within the 'zone of desired effects.'

From interest to persistence in computing. EarSketch seeks not only to engage students in computing, but also to motivate students to persist in computing after the course. CSP teachers using EarSketch have compared it to other CS learning platforms and appreciate that EarSketch allows students to quickly create artifacts that interest them. In an interview with our team, one teacher said:

"Well, we were doing [platform], if you're familiar with it, it's read, read, read, and do little things, but it wasn't real hands-on. ... So, when I put them on EarSketch, it was like, 'Whew!' They really got it. Where I've been teaching all these concepts that don't mean anything to you until you do them. So, Ear-Sketch really implemented everything we'd kind of gone over up to that point and then some."

While building interest in computer science is important, teachers also describe the work they do to move students from initial interest to an intention to persist in further CS-related study. In particular, building interest among students with little initial interest in or understanding of CS is challenging. A teacher whose class is composed mainly of students who did not select the course shared:

"I tend to have students who are either placed in the course and have no idea what [CS] is, or just have absolutely no experience in programming. So, I definitely think EarSketch levels the playing ground. They're able to find something interesting about programming. They all like music. I think they thought it was fun, and I think it's engaging for them."

Another teacher explains that students who might not have persisted are signing up for AP Computer Science A (the course that typically follows CSP):

"As a result of using EarSketch, they're a lot more confident, and many of them have signed up for AP Computer Science [A] when they would not have before. Because now they feel like, 'Yeah, I can do this. I'm not afraid of programming. I'm not afraid of doing an actual language.'"

We have conducted several quantitative EarSketch studies that explore students' intention to persist in computing11,24,29,38 through retrospective pre-post surveys. Three of the studies focused on high school students in introductory computing courses; the fourth study38 focused on undergraduate students taking an introductory programming course to fulfill a computing requirement for non-majors.

Collectively, the studies included over 500 students at eight different institutions across four different academic years. Each of the four studies shows statistically significant pre-to-post increases in intention to persist as well as in students' attitudes toward computing, typically with medium or large effect sizes. In Magerko et al.,24 female students showed greater pre-to-post change across all attitudinal constructs and intention to persist than male students, with significantly greater gains in confidence, motivation, and identity. In that same study, a comparison of under-represented minority and majority students showed that both groups demonstrated significant increases in all attitudinal constructs and intention to persist, and that there was no significant difference between minority and majority student growth. In Siva et al.,38 students in treatment sections of the course using EarSketch had significantly larger pre-to-post gains in intention to persist than students in comparison sections that did not use EarSketch.

We hypothesize that intention to persist may be activated by students' attitudes toward computing and that meaningful CS learning experiences might shift students' attitudes. Therefore, we explored the factors that contribute to students' increases in intention to persist. In McKlin et al.,29 we conducted a path analysis to analyze student data in the context of this hypothesis. We found that students' identity as a computer scientist (that is, the beliefs, expressions, and behaviors that motivate a person to align with or relate to a group) significantly predicts their intention to persist in computing.

Authentic learning environment to foster identity. We theorize that EarSketch may support the growth of computing identity among students who typically do not pursue computing by providing a learning environment that students perceive to be thickly authentic.15,23,37 One teacher offers:

"They can see the benefit of what they're learning, that real-world connection right away. I think that's what's so beneficial with using EarSketch, because they can see it."

Another teacher explains that EarSketch is meaningful to students because they are building music:

"[EarSketch] takes all that stuff they've learned and puts it into a hands-on audio, visual concept. It just makes so much sense to them once they hear it and see it. It's not just making that music play. It's 'how do I make that music play?' They got that day one."

In three of our studies,11,29,38 we conducted a path analysis of student data to understand the relationship between perceived authenticity (called "Creativity Place" in Engelman et al.11), student attitudes, and intention to persist. In each study, authenticity consistently and significantly predicts positive changes in students' identity as a computer scientist along with positive changes in confidence, enjoyment, importance/usefulness, motivation, and personal creativity. While authenticity does not directly predict students' intention to persist, it does significantly predict attitudinal factors (such as identity) that in turn predict students' intention to persist.

Personal creativity. In our research, personal creativity encompasses the characteristics of students who engage in a creative endeavor with computing: expressiveness, exploration, immersion, originality, sharing, and creative thinking skills. In our recent study examining the relationship between personal creativity and students' intention to persist in computing,29 we found that one aspect of personal creativity stands out: sharing. That is, sharing computing work with family and friends is more likely to predict students' intention to persist in computing than any other factor of creativity.

A teacher explains how students share the technical and practical aspects of the work with family and friends:

"[They are] able to create something. It's theirs. They can show it to their friends. A lot of them like that aspect of it, that they're able to show them, 'Hey, I actually made this.' ... They will say, 'Hey, this is what I made in EarSketch. This is similar to what you just made in GarageBand.'"

Ecosystem analysis. We have looked beyond student-level outcomes and have examined the school- and classroom-level implementation of EarSketch. Because student learning is situated in a classroom and is affected not only by the teacher and students but also by the school and district's infrastructure, capabilities, and culture, models of this larger ecosystem have enabled us to better understand what is needed to successfully implement and sustain EarSketch.

For example, based on observation data, our models captured the 'virtuous' and 'vicious' cycles of engagement with EarSketch. In the 'vicious' cycle, a student relies on brute-force repetition of basic computational structures and avoids more algorithmic thinking, thus composing music without advancing computing knowledge. In the 'virtuous' cycle, by comparison, students continue to develop musical and computational skills side by side throughout the EarSketch course module. The models of these virtuous and vicious engagement cycles enabled the EarSketch team to address this common issue in creative computing learning platforms through a combination of new content in teacher professional development sessions and more scaffolding of student projects.30

Emerging Work

We have begun to cultivate communities of EarSketch students and teachers through both digital and physical means. We recently added tools for script sharing and collaborative editing, as well as live coding and time-synchronization tools that help EarSketch users perform together.36 We have staged EarSketch competitions to recognize exemplary projects and to help students discover the connections between algorithmic composition and music production, entrepreneurship, writing, and visual art.

Our team has also partnered with Northwestern University and the Museum of Science and Industry in Chicago to create a larger ecosystem of music plus computing tools that target informal educational settings, including a collaborative tabletop system for museums40 and a tablet environment targeted for use in home and workshop settings.13

We have also begun to explore the potential of deep learning to tackle persistent design challenges in the creative computing education space. These tasks include auto-grading open-ended computing assignments (that is, using a large corpus of code to train a system to recognize evidence of specific content knowledge in student projects); providing students with real-time assistance in debugging code; and training co-creative learning companions that can provide both technical and creative ideas to students as they work.

Conclusion

In a recent focus group, a CSP student learning with EarSketch said:

"Before I thought coding was kind of like ... Not necessarily evil, but something that you pretty much had to do. ... So, I thought if maybe I knew a little bit I'd be something in the future. But now I actually want to do it, not because it will probably benefit me someday, but because it's also fun and engaging."

Her reflections exemplify the impact we hope to achieve with EarSketch. Students may choose to study computing because of the growing demand for these skills in the workforce, because of external pressure, or because they believe they are good at it. EarSketch, with its design that emphasizes dual-domain authenticity and immediate opportunity to act, offers a pathway for students to enjoy computing, to find it fun and engaging, and to want to pursue it simply because they love it.

Acknowledgments

EarSketch receives funding from the National Science Foundation (CNS #1138469, DRL #1417835, DUE #1504293, DRL #1612644, and IIP #1741045), the Scott Hudgens Family Foundation, the Arthur M. Blank Family Foundation, the Ruth L. Seigel Family Foundation, and the Google Inc. Fund of Tides Foundation. Any opinions, findings, and conclusions or recommendations expressed in these materials are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other funders. Many thanks to the members of the EarSketch research team—a full list is at http://earsketch.gatech.edu.

References

1. Aaron, S. et al. The development of Sonic Pi and its use in educational partnerships: Co-creating pedagogies for learning computer programming. J. Music, Technology & Education 9, 1 (2016), 75–94.

2. Abrams, D. Social identity on a national scale: Optimal distinctiveness and young people's self-expression through musical preference. Group Processes & Intergroup Relations 12, 3 (May 2009), 303–317; https://doi.org/10.1177/1368430209102841.

3. Ariza, C. Navigating the landscape of computer aided algorithmic composition systems: A definition, seven descriptors, and a lexicon of systems and research. In Proceedings of the 2005 Intern. Computer Music Conference (Barcelona, 2005).

4. Astrachan, O. et al. CS principles: Piloting a new course at national scale. In Proceedings of the 42nd ACM Tech. Symp. on Computer Science Education. ACM, New York, NY, USA, 2011, 397–398.

5. Bau, D. Droplet, a blocks-based editor for text code. J. Computing Sciences in Colleges 30, 6 (2015), 138–144.

6. Bau, D. et al. Pencil code: Block code for a text world. In Proceedings of the 14th International Conference on Interaction Design and Children (2015), 445–448.

7. Carroll, J. Minimalism Beyond the Nurnberg Funnel. MIT Press, Cambridge, MA, 1998.

8. Chusid, I. Beethoven-in-a-box: Raymond Scott's electronium. Contemporary Music Review 18, 3 (Jan. 1999), 9–14; https://doi.org/10.1080/07494469900640291.

9. Cope, D. The Algorithmic Composer. A-R Editions, Inc., 2000.

10. Detailed Race and Gender Information 2017; http://home.cc.gatech.edu/ice-gt/599.

11. Engelman, S. et al. Creativity in authentic STEAM education with EarSketch. In Proceedings of the 2017 ACM SIGCSE Tech. Symp. on Computer Science Education. ACM, New York, NY, 183–188.

12. Essl, K. Algorithmic composition. The Cambridge Companion to Electronic Music. N. Collins and J. d'Escrivan, eds. Cambridge University Press, 2007, 107–124.

13. Gorson, J. et al. TunePad: Computational thinking through sound composition. In Proceedings of the 2017 Conference on Interaction Design and Children. ACM, New York, NY, 484–489.

14. Guzdial, M. Teaching programming with music: An approach to teaching young students about Logo. Logo Foundation, 1991.

15. Guzdial, M. and Tew, A.E. Imagineering inauthentic legitimate peripheral participation: An instructional design approach for motivating computing education. In Proceedings of the 2nd Intern. Workshop on Computing Education Research. ACM, New York, NY, 2006, 51–58.

16. Hamilton, R. et al. Social composition: Musical data systems for expressive mobile music. Leonardo Music J. 21, 1 (Nov. 2011), 57–64.

17. Hattie, J. Visible Learning for Teachers: Maximizing Impact on Learning. Routledge, 2012.

18. Hedges, S.A. Dice music in the 18th Century. Music & Letters 59, 2 (1978), 180–187.

19. Hendrix, D. et al. Implementing studio-based learning in CS2. In Proceedings of the 41st ACM Tech. Symp. on Computer Science Education. ACM, New York, NY, 2010, 505–509.

20. Hewner, M. and Knobelsdorf, M. Understanding computing stereotypes with self-categorization theory. In Proceedings of the 8th Intern. Conference on Computing Education Research (2008), 72–75.

21. Hiller, L.A.J. and Isaacson, L.M. Experimental Music: Composition with an Electronic Computer. McGraw-Hill, 1959.

22. Jaques, N. et al. Generating music by fine-tuning recurrent neural networks with reinforcement learning. In Proceedings of Deep Reinforcement Learning Workshop, NIPS (2016).

23. Lee, H.-S. and Butler, N. Making authentic science accessible to students. International J. Science Education 25, 8 (2003), 923–948.

24. Magerko, B. et al. EarSketch: A STEAM-based approach for underrepresented populations in high school computer science education. ACM Trans. Computing Education 16, 4 (2016).

25. Mahadevan, A. et al. EarSketch: Teaching computational music remixing in an online Web audio-based learning environment. In Proceedings of the 1st Annual Web Audio Conference (Paris, 2015).

26. Mahmoud, Q.H. Revitalizing computing science education. IEEE Computer 38, 5 (2005), 98–100.

27. Manaris, B. and Brown, A.R. Making Music with Computers: Creative Programming in Python. CRC Press, 2014.

28. McCartney, J. Rethinking the computer music language: SuperCollider. Computer Music J. 26, 4 (Dec. 2002), 61–68; https://doi.org/10.1162/014892602320991383.

29. McKlin, T. et al. Authenticity and personal creativity: How EarSketch affects student persistence. In Proceedings of the 49th ACM Tech. Symp. on Computer Science Education. ACM, New York, NY, 2018, 987–992.

30. Moore, R. et al. STEAM-based interventions in computer science: Understanding feedback loops in the classroom. In Proceedings of the 2017 ASEE Annual Conference & Exposition. (June 2017).

31. Peretz, I. and Zatorre, R.J. Brain organization for music processing. Annual Review of Psychology 56 (2005), 89–114.

32. Puckette, M. Combining event and signal processing in the MAX graphical programming environment. Computer Music J. (1991), 68–77.

33. Resnick, M. et al. Scratch: Programming for all. Commun. ACM. 52, 11 (Nov. 2009), 60–67.

34. Roads, C. Microsound. MIT Press, Cambridge, MA, 2004.

35. Ruthmann, A. et al. Teaching computational thinking through musical live coding in Scratch. In Proceedings of the 41st ACM Tech. Symp. on Computer Science Education (2010), 351–355.

36. Sarwate, A. et al. Collaborative coding with music: Two case studies with EarSketch. In Proceedings of the Web Audio Conference (Berlin, 2018).

37. Shaffer, D.W. and Resnick, M. 'Thick' authenticity: New media and authentic learning. J. Interactive Learning Research 10, 2 (1999), 195–215.

38. Siva, S. et al. Using music to engage students in an introductory undergraduate programming course for non-majors. In Proceedings of the 49th ACM Tech. Symp. on Computer Science Education. ACM, New York, NY, 2018, 975–980.

39. Wing, J.M. Computational thinking. Commun. ACM. 49, 3 (Mar. 2006), 33–35.

40. Xambó, A. et al. Experience and ownership with a tangible computational music installation for informal learning. In Proceedings of the 11th Intern. Conference on Tangible, Embedded, and Embodied Interaction. ACM, New York, NY, 2017, 351–360.

41. Zölzer, U. DAFX: Digital Audio Effects. John Wiley & Sons, 2011.

Authors

Jason Freeman ([email protected]) is a professor and chair of the School of Music at Georgia Institute of Technology, Atlanta, GA, USA.

Brian Magerko ([email protected]) is a professor and director of Graduate Studies in Digital Media at Georgia Institute of Technology, Atlanta, GA, USA.

Doug Edwards ([email protected]) is a research associate for the Center for Education Integrating Science Mathematics and Computing at Georgia Institute of Technology, Atlanta, GA, USA.

Tom McKlin ([email protected]) is director at The Findings Group, Decatur, GA, USA.

Taneisha Lee ([email protected]) is lead evaluator at The Findings Group, Decatur, GA, USA.

Roxanne Moore ([email protected]) is a senior research engineer for the Center for Education Integrating Science Mathematics and Computing at Georgia Institute of Technology, Atlanta, GA, USA.

Footnotes

a. https://www.apple.com/logic-pro/

b. https://www.landr.com/

c. http://kroger.github.io/pyknon/

d. https://www.w3.org/TR/webaudio/

e. https://freesound.org

f. https://studio.code.org/courses/csp-2018

g. https://bjc.edc.org/bjc-r/course/bjc4nyc.html


Copyright held by authors/owners. Publication rights licensed to ACM.
Request permission to publish from [email protected]

The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.


 
