
Communications of the ACM

Viewpoints

When Technologies Manipulate Our Emotions



Credit: Facebook / Flickr

On those rare occasions when an academic research study receives frenzied media attention, it usually indicates the topic has touched on societal fears in some way. On June 17, 2014, the Proceedings of the National Academy of Sciences (PNAS) published a paper by researchers at Facebook and Cornell University4 presenting evidence for widespread emotional contagion in online social networks. The researchers manipulated the percentage of positive or negative posts appearing in Facebook users' news feeds and reported that the manipulations had the anticipated effect of demonstrating emotional contagion at scale. The study contributes interesting results to an underserved area of research, but it triggered an understandable flurry of concern because of a failure to obtain consent from participants before attempting to influence their emotions. Public concern was also intensified by the fact that most people were unaware that Facebook already filters user news feeds (a necessity at that scale), that the company has such unprecedented reach, and that the manipulation involved something as personal as feelings and emotions.

The lack of awareness regarding information filtering supports an illusion of neutrality in technology design—the notion that computer programs are by default bereft of values or moral intent. This and other important issues have received less attention amid the flurry of criticism pertaining to research ethics (most of the 120 papers Google Scholar identified as citing the study focused on ethics). Open issues include the lack of transparency given restrictions on data access and the difficulty of systematic replication without Facebook-level resources. This Viewpoint takes a different approach by discussing the implications of the study for technology design and the criteria on which technologists should base their design decisions.

The Facebook-Cornell study emerged from an attempt to better understand widespread emotion contagion in social networks. The authors experimentally studied how two filtering algorithms influenced the emotional expressions of a large number of users (N=689,003) by manipulating the likelihood of positive or negative posts appearing in the users' news feeds. They then studied the emotional content of users' status updates, which were ostensibly influenced by the emotional content in their news feeds. Emotional expression was measured by the inclusion of emotional terms in the posts, computed with the Linguistic Inquiry and Word Count system (LIWC),5 which provides psychologically grounded lists of positive and negative emotional terms (among other categories). After a week, those in the positivity-reduced condition (for whom the number of positive posts was reduced) used fewer positive (by 0.1%) and more negative (by 0.04%) emotional terms compared to a control condition in which a similar proportion of posts was reduced at random (that is, without respect to emotional content). In contrast, when negative posts were reduced (negativity-reduced condition), users used more positive (by 0.06%) and fewer negative (by 0.07%) emotional terms compared to the control condition. The authors interpreted these findings to mean that users felt more negative and positive emotions in the respective conditions—thus emotion contagion occurred.
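To make the measurement concrete, the following Python sketch computes LIWC-style emotion word rates in miniature. The real LIWC2007 dictionaries are proprietary and contain thousands of entries; the word lists and example post here are illustrative stand-ins, not the study's actual materials.

    import re

    # Stand-in word lists; the proprietary LIWC2007 categories are far larger.
    POSITIVE_TERMS = {"happy", "love", "great", "good", "wonderful"}
    NEGATIVE_TERMS = {"sad", "hate", "awful", "bad", "terrible"}

    def emotion_word_rates(post: str) -> tuple[float, float]:
        """Return (positive, negative) emotion terms as a percentage of all words."""
        words = re.findall(r"[a-z']+", post.lower())
        if not words:
            return 0.0, 0.0
        pos = sum(w in POSITIVE_TERMS for w in words)
        neg = sum(w in NEGATIVE_TERMS for w in words)
        return 100.0 * pos / len(words), 100.0 * neg / len(words)

    print(emotion_word_rates("What a wonderful, happy day"))  # -> (40.0, 0.0)

The study's reported effects are shifts of mere hundredths of a percentage point in rates like these, detectable only because of the enormous sample size.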

If we accept the authors' conclusions that emotion contagion did in fact occur (see the accompanying sidebar), this leads us to a larger question for technology: Can design ever be emotionally neutral, and if not, on what criteria should technologists base design decisions? Beyond the controversy surrounding the way the Facebook study addressed informed consent, it is important not to miss the valuable contribution made by this study. It contributes to a critical area of modern inquiry: How do digital experiences and their design affect our emotions? This is significant because one issue neglected in the media discourse is that design, be it of a filter, interface, or algorithm, is arguably never neutral. For example, newspapers use editorial guidelines to filter what information is published and search engines use algorithms to make these decisions. Every design decision must be based on some criteria. As researchers in Value Sensitive Design have made clear, the values and goals of designers and stakeholders will shape the design of any technology.3 Thus, the obligation to understand the impacts of our design decisions, and to be transparent about what influences them, becomes imperative.

If design is known to affect emotions, above and beyond this particular study on emotion contagion (see Calvo and Peters1 for a list of examples), how can we study these effects and how should we apply the knowledge gained? If software design is not neutral, how should a software designer decide what counts as a "good" design—particularly when we are designing interfaces so closely linked to what we care most about: family, friends, and relationships? For instance, if it is in fact impossible for Facebook not to filter information due to scale, how should the filter criteria be determined? Should filters only ever be randomized and never optimized for the user experience? Or can we look deeper and seek to support greater transparency and user autonomy; what if designers allowed users to make more of these decisions themselves? For example, what if users could set the parameters for their news feed filter or aspects of their search algorithms on their own (a possibility sketched below)? Transparency and autonomy seem to be underexplored opportunities for respecting individual differences and safeguarding against paternalism or misuse.
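As a thought experiment, a user-configurable feed filter might look like the following Python sketch. Every name and parameter is hypothetical (no real platform exposes such an interface), but it illustrates how filtering criteria could be surfaced to, and set by, the user rather than fixed silently by the designer.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class FeedPreferences:
        """Hypothetical knobs a user could set for their own feed."""
        emotion_filtering: bool = False  # off by default: no emotion-based filtering
        min_positivity: float = 0.0      # threshold used only when filtering is on
        show_report: bool = True         # transparency: tell the user what was hidden

    def filter_feed(posts: List[str], prefs: FeedPreferences,
                    positivity: Callable[[str], float]) -> List[str]:
        """Apply the user's own criteria; the scoring function is pluggable."""
        if prefs.emotion_filtering:
            kept = [p for p in posts if positivity(p) >= prefs.min_positivity]
        else:
            kept = list(posts)
        if prefs.show_report:
            print(f"{len(posts) - len(kept)} of {len(posts)} posts hidden by your settings")
        return kept

The positivity function could be as simple as the word-rate measure sketched earlier; the point is that the threshold, and whether any emotional filtering happens at all, rests with the user.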

We also posit that, as we look for criteria upon which to base technology design decisions, we should turn to the research on psychological well-being—that design decisions should seek to promote (rather than hinder) thriving (an area we call Positive Computing1). It is important to note that well-being is not defined simply as an increase in positive emotions. According to research in psychology, other determinants include empathy, compassion, self-awareness, engagement, autonomy, and connectedness. Negative emotions are also an important component of lasting well-being.2 In this view, empathizing with a friend in need, or receiving that empathy, may contribute more to one's well-being than merely positive expression. Clearly, emotional impact represents a rich, nuanced, and complex space of inquiry, of which we have only scratched the surface.


Conclusion

There is still much to be understood about how our emotional lives play out in digital experience and how the design of systems, interfaces, and interactions shapes our emotional experience. By publishing studies like the one discussed here, companies are contributing important knowledge, not just to the academic community, but also to everyone who cares about the psychological impact of technology. We believe the controversy over the Facebook study is a useful reminder of how important it is to uphold ethical guidelines in research, and of the important role technology plays in our emotional experience. However, we hope it will encourage rather than deter further research into understanding ourselves better and understanding how we, as computing professionals, can make design decisions that are of optimal benefit to society.


References

1. Calvo, R.A. and Peters, D. Positive Computing: Technology for Wellbeing and Human Potential. MIT Press, 2014.

2. Fredrickson, B.L. and Losada, M.F. Positive affect and the complex dynamics of human flourishing. The American Psychologist 60, 7 (2005), 678.

3. Friedman, B., Kahn, P.H., and Borning, A. Value sensitive design and information systems. In K.E. Himma and H.T. Tavani, Eds., The Handbook of Information and Computer Ethics. Wiley, 2008, 69–101.

4. Kramer, A.D.I., Guillory, J.E., and Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences 111, 24 (2014), 8788–8790.

5. Pennebaker, J.W. The Development and Psychometric Properties of LIWC2007. Austin, TX, LIWC, 2007.


Authors

Rafael A. Calvo ([email protected]) is a professor and the co-director of the Software Engineering Group at the University of Sydney, Australia.

Dorian Peters ([email protected]) is Creative Leader, Web and Interface Design for Learning, at the University of Sydney, Australia.

Sidney D'Mello ([email protected]) is an assistant professor in the Departments of Psychology and Computer Science and Engineering at the University of Notre Dame, Notre Dame, Indiana.


Footnotes

Rafael A. Calvo is supported by an Australian Research Council Future Fellowship. Sidney D'Mello is supported by the U.S. National Science Foundation (NSF).



Copyright held by authors.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.


