Robots are fast becoming a part of everyday life. Indeed, robots are now deployed in retail stores (see Figure 1), warehouses, hospitals, and factories to perform tasks conventionally done by humans. Nestlé uses the humanoid robot "Pepper" to sell coffee makers in department stores in Japan; people buy ice cream from a fully automated ice cream franchise, RoboFusion; and Knightscope security robots patrol streets in New York City. Such encounters will only become more common as the global market for service robots grows rapidly, from $36.2 billion in 2022 to a projected $103.3 billion by 2026.26
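As a quick sanity check on the cited market figures, the growth from $36.2 billion to $103.3 billion over four years implies a compound annual growth rate of roughly 30%; a minimal sketch of the arithmetic:

```python
# Implied compound annual growth rate (CAGR) of the service-robot market,
# using the figures cited in the article: $36.2B (2022) to $103.3B (2026).
start, end, years = 36.2, 103.3, 4

cagr = (end / start) ** (1 / years) - 1  # standard CAGR formula
print(f"Implied CAGR: {cagr:.1%}")  # prints "Implied CAGR: 30.0%"
```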
Figure 1. Humanoid 'Pepper' has been deployed in many retail stores throughout Japan.20
According to a survey from the McKinsey Global Institute, 15% of the global workforce, or 400 million workers, will be displaced by 2030.14 Approximately 45% of the workforce in manufacturing, 37% in retail, 25% in hospitality, 23% in social work, and 10% in education might be replaced by artificial intelligence (AI) within six years.34 By 2022, automation was expected to displace 75 million jobs while creating 133 million new ones.7 Automation in general can help grow businesses and often generates more jobs. For example, Wing Enterprises, a ladder manufacturer in Utah, built a new automated facility that increased its productivity by 30%, which subsequently helped the company expand from 20 to 400 employees.14
Creating robots that perform jobs traditionally done by humans is the goal of many robotics engineers.11 This aspiration to replicate human cognition and behavior has led to some success in developing robots capable of performing human tasks such as sales and teaching.16 The growing trend of designing robots to resemble humans was initially viewed with excitement. However, when robots started to look and behave humanlike enough to threaten human identity, people's opinion of robots shifted, a phenomenon referred to as the 'uncanny valley effect.'24
Public sentiment about humanoids has been largely divided. Some people treat anthropomorphic robots like human acquaintances.19 Others feel stressed and anxious about them, seeing them as a threat to job security and human identity.19,27 Overall, public sentiment about robots has been negative. One of the main concerns is that robots make us less human.27 Ironically, previous research suggests the opposite may be true: Robots can help humans grow socially and emotionally if they resemble us more.13,31
Some people find anthropomorphic robots more competent, trustworthy, and fun to interact with.13,31 People evaluate their human-robot interaction (HRI) experience more positively and are more tolerant of errors if robots are humanlike.31 Anthropomorphic robots have also proven effective in helping children with autism spectrum disorder (ASD) improve their social skills.32 Anthropomorphizing a robot might influence HRI positively or negatively depending on moderating factors such as the type of anthropomorphism, congruence with user expectations, HRI context, human features, and moral agency.
Robotics design spans multifaceted areas in which design flexibility is subordinated to the functional aspects of a robot.32 In the context of HRI, commercially available robots are finished products that leave little room for meaningful physical alteration.12 Robotics design is often subject to a tight set of requirements and conditions that must be fulfilled.
Complex machinery products, such as automobiles, are also often limited in their design flexibility. Anthropomorphic design is one common strategy car makers use to draw consumers to their products. Car fronts are often designed to resemble a human face, and this anthropomorphic design is linked to improved product evaluation, such as better ratings for functionality and stronger product attachment.1 Ironically, when slot machines are designed to look more anthropomorphic, people tend to bet less compared with a typical slot machine.23 Previous research suggests this positive effect of product anthropomorphism on the user experience can translate to the HRI context.
Previous research studied customer reviews of interactions with a concierge robot at a fully automated hotel in Tokyo, Japan, and found two main reasons why a human agent was preferred: People found it more difficult to interact with the robotic concierge, and they perceived it as less competent.2 Anthropomorphic design can address both of these common sources of aversion to robots. Anthropomorphic robots can improve HRI by encouraging a favorable evaluation of the robot's efficacy and by strengthening people's motivation to interact with it (see Figure 2).
Figure 2. Anthropomorphic design can positively influence perception about robots, social motivation to interact with the robot, and cognitive responses to the robot such as trust level.
Perception. People believe only humans strive to prove their competence, and this intrinsic notion is referred to as effectance motivation—that is, the belief in the superior competence of humans over non-human creatures or objects.15 People are susceptible to confirmation bias and seek evidence partial to their beliefs and expectations.21 Thus, product anthropomorphism often improves user evaluation because people assign effectance motivation to anthropomorphic objects and believe anthropomorphic products should function better.15 Research shows that when computers or even slot machines are anthropomorphized in appearance, people attribute humanlike effectance motivation to the machines and subsequently expect them to function better.15,23 People likely find anthropomorphic robots more competent than robots that do not resemble humans. However, when people see robots as a threat, anthropomorphism might make robots appear more foreboding and subsequently degrade HRI.
Sociability. Anthropomorphic design might improve the perceived sociability of a robot. Research suggests that people tend to treat anthropomorphized objects as if they were human acquaintances and initiate social interactions with them.19 People tend to apply the same social norms when interacting with anthropomorphic objects.40 In this vein, SoftBank's humanoid NAO has been deployed in hospitals to help children with ASD learn social skills, and the results are promising.13 Children who interacted with humanoids picked up social cues effectively and applied them when interacting with peers.13
Furthermore, people prefer anthropomorphic robots when they are lonely, indicating such robots can better address people's social needs.37 In fact, spending time with anthropomorphic objects has been shown to lower the sense of loneliness.30 Neurophysiological measures suggest the segment of the brain that guides compassion becomes more active not only around other humans but also around anthropomorphic robots.19 These findings provide important insights into HRI because anthropomorphic design might improve people's motivation to interact with a robot, learn about it, and more willingly overcome the barrier to engaging with it.
Response. Another reason some people dislike anthropomorphic robots is that they find it difficult to 'trust' robots.39 The underlying factor behind this trust issue is effectance motivation.15,39 People prefer humans over robots because they believe humans are competent to fix their mistakes and deliver the requested outcome.15
However, previous research suggests that people assign human characteristics, including effectance motivation, to objects that are anthropomorphized.15 Research shows that humanizing a computer improves not only people's evaluation of its efficacy but also their level of trust in it, because they expect it to have effectance motivation.40 People were more likely to trust a broken computer to repair itself if the computer was anthropomorphized.40 Similarly, people were more tolerant of errors made by anthropomorphic robots because they expected the robots to fix their errors.10
Robot anthropomorphism might contribute to positive HRI; however, the effect is not likely to be straightforward. Humanlike robots might provoke increased levels of anxiety and stress.27 The anthropomorphic design of a robot, while it can make user experiences more amusing for some,33 can also be a source of anxiety for others.27 Thus, robot anthropomorphism is a double-edged sword. Future research might investigate conditions that moderate the effect of robot anthropomorphism on HRI. The Anthropomorphic roBot (ABOT) database—a collection of real-world anthropomorphic robots created for research or commercial purposes—can be a great resource, as not only a robot's appearance but also its name and locomotion can influence its perceived human-likeness.32 ABOT classifies robots based on the degree and type of anthropomorphism. Future research can utilize this database to select stimuli with specified types and salience of robot anthropomorphism and study their effect on HRI.
Previous research suggests that humans inadvertently apply heuristics from human-human interactions when making HRI judgments.9 People develop varied expectations and beliefs about a robot depending on its voice and/or facial schema, and they expect congruence between a robot's appearance and its other characteristics, such as voice.28
People might also have varied expectations of how robots should behave depending on the situation. People might expect emotional expressions from a robot in a restaurant but not in a computer repair shop. Thus, the effect of robot anthropomorphism on HRI should be subject to the situational context. Congruence between user expectations and robot anthropomorphism can vary across dimensions such as facial schema, voice, and verbal/nonverbal communication, with user expectations shaped by robot characteristics and situations (as illustrated in Figure 3).
Figure 3. User reaction to different dimensions of robot anthropomorphism including robots' emotional expression, functionality, facial schema, and uncanny valley salience.
Facial schema. Research suggests a robot's facial schema can influence user engagement.8 A cute facial schema, such as a baby face, can make people more attached to a product and more tolerant of product failures.8 The baby-face schema denotes a set of infantile facial traits that normally elicit positive attitudes and caretaking behavior.4 Since anthropomorphic robots can activate heuristics and motivations specific to human-human interactions, a baby-like robot might encourage people to view the robot more positively and be tolerant of its errors.
Conversely, there can be downsides to using the baby-face schema on a robot. One heuristic associated with babies is that they are incompetent. People might think a robot is incompetent if it looks like a baby.25 Thus, a robot's facial schema can have varied impacts on HRI depending on the context. A robot's facial schema might influence HRI positively if it is congruent with the robot's purpose. For example, a baby-face schema might improve HRI for a caretaking robot, while it could be a negative factor for a sanitization robot.
Emotional expression. People are more emotionally engaged with a robot if it is anthropomorphic.6 Moreover, people expect anthropomorphic robots to be more emotionally expressive, alive, and sociable than relatively less anthropomorphic robots.10 A robot's emotional expressions might thus be seen as a positive trait that can improve HRI.
However, findings on the effect of robots' emotional expressions on HRI are mixed. A robot's emotional expressions might make people uncomfortable about interacting with it.27 The congruence between user expectations and the robot's emotional expressions can help explain these mixed findings, as such expectations, consciously or unconsciously, shape people's underlying beliefs about the robot.3 Many of the heuristics and expectations people develop in human-to-human interactions can inadvertently affect how they evaluate anthropomorphic robots.40 Interacting with humanoids is still new to most people, so incongruence between robot behavior and people's expectations can trigger an adverse reaction. For example, a robot's emotional expressions in a high-contact service situation, such as an upscale restaurant, are expected and tolerated,36 while such expressions in a low-contact service situation, such as a grocery store, might come off as odd and eerie.27
User expectation. Some people can be averse to robots because they are inherently averse to change and new things.35 People referred to as laggards in innovation diffusion theory are intrinsically resistant to change.35 These people might subsequently experience a stronger uncanny valley effect when interacting with humanoids.24
Additionally, previous research suggests that people might have varying cultural expectations of robots. Japanese people are more open to the idea of robots performing interactive tasks, such as giving a massage, while Europeans expect robots to perform assistive tasks, such as snow plowing.17 To prevent biased results, future research should survey the degree of robot acceptance and robot expectations before evaluating human responses toward anthropomorphic robots.
The uncanny valley was first theorized by robotics professor Masahiro Mori in the 1970s, when he and his team at the Tokyo Institute of Technology observed an abrupt shift in human attitudes toward robots as anthropomorphism increased.29 People behaved favorably toward anthropomorphic robots at the outset, but as more humanlike features were added, the response shifted from excitement to revulsion.29 Indeed, such robots were viewed as a threat to humanity.
The uncanny valley theory has been supported empirically; however, people might experience varied degrees of 'uncanniness' depending on the situational context and their inherent personality. The same emotional expression from a robot can make some people more willing to interact with it while exacerbating others' sense of uncanniness.23,40 Interestingly, age is a substantial moderating factor in the sense of uncanniness from interacting with humanoids.35 Children older than nine, like adults, evaluated an anthropomorphic robot as creepier than a machine-like robot, but children younger than nine did not.5 Furthermore, a recent study found empirical evidence supporting the existence of an additional uncanny valley,22 suggesting that the psychological mechanism behind why people feel uncanny about anthropomorphic robots is complex. Since the uncanny valley effect is not observed among children younger than nine, there are likely confounding factors that cause people to feel strange about humanoids. Insights into potential mediating factors can help us better understand the uncanny valley effect.
Moral agency. Humanoids are not moral agents. They are objects, and human moral values do not apply to them.38 However, people tend to assign human characteristics to anthropomorphic objects, and consequently, humanoids are likely to be seen as moral agents.40 More anthropomorphism invokes stronger empathy toward objects.18 Humanoids that almost feel human should elicit a level of empathy comparable to that felt for fellow humans. Uncanny valley research shows that people's attitudes shift back to the positive end once a robot's anthropomorphism becomes almost comparable to a human's.36
Because people see highly anthropomorphic robots as moral agents and feel empathy toward these machines, the use to which a humanoid is put can also moderate people's evaluations. For example, humanoids working in professions that risk life and limb for the public good, such as counterterrorism or firefighting, might be viewed positively, while humanoids in professions not considered dignified, such as sex robots, might be seen negatively. In addition, there is no agreement on whether people are ready for humanoids that elicit such a level of empathy.27
This article discussed the effect of robot anthropomorphism on HRI and potential moderators that might alter that effect. Robot anthropomorphism can improve HRI in terms of people's cognitive and motivational responses. Conversely, it might trigger a sense of uncanniness. To address this duality in previous findings, future research might further investigate factors that moderate the effect of robot anthropomorphism on HRI.
1. Aggarwal, P. and McGill, A.L. Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products. J. Consumer Research 34, 4 (2007), 468–479.
2. Bhimasta, R.A. and Kuo, P. What causes the adoption failure of service robots? A case of Henn-na Hotel in Japan. In Proceedings of the 2019 ACM Intern. Joint Conf. Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM Intern. Symp. Wearable Computers, (London, U.K., 2019), 1107–1112.
3. Blut, M., Wang, C., Wünderlich, N.V. and Brock, C. Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J. Academy of Marketing Science 49, (2021), 632–658.
4. Borgi, M., Cogliati-Dezza, I., Brelsford, V., Meints, K. and Cirulli, F. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children. Frontiers in Psychology 5, (2014), 411.
5. Brink, K.A., Gray, K. and Wellman, H.M. Creepiness creeps in: Uncanny valley feelings are acquired in childhood. Child Development 90, 4 (2017), 1202–1214.
6. Broadbent, E. et al. Robots with display screens: A robot with a more humanlike face display is perceived to have more mind and a better personality. PLOS One 8, 8 (2013), e72589.
7. Cann, O. Machines will do more tasks than humans by 2025 but robot revolution will still create 58 million net new jobs in next five years, 2018; https://bit.ly/45CWfW8
8. Cheng, Y., Qiu, L., and Pang, J. Effects of avatar cuteness on users' perceptions of system errors in anthropomorphic interfaces. In Proceedings of the 2020 Intern. Conf. Human-Computer Interaction, (Copenhagen, Denmark, 2020), 322–330.
9. Chita-Tegmark, M., Lohani, M. and Scheutz, M. Gender effects in perceptions of robots and humans with varying emotional intelligence. In Proceedings of the 14th ACM/IEEE Intern. Conf. Human-Robot Interaction. (Daegu, South Korea, 2019), 230–238.
10. Choi, S., Mattila, A.S. and Bolton, L.E. To err is human(-oid): How do consumers react to robot service failure and recovery? J. Service Research 24, 3 (2021), 354–371.
11. Chun, B. and Knight, H. The robot makers: An ethnography of anthropomorphism at a robotics company. ACM Trans. Human-Robot Interaction 9, 3 (2020), 1–36.
12. Colle, A. The role of aesthetics in robotics and the rise of polymorphic robots. Companion of the Proceedings 2020 ACM/IEEE Intern. Conf. Human-Robot Interaction, (New York, N.Y., USA, 2020) 623–624.
13. Diehl, J., Schmitt, L.M., Villano, M. and Crowell, C. The clinical use of robots for individuals with autism spectrum disorders: A critical review. Res Autism Spectr Disord 6, 1 (2012), 249–262.
14. Ellingrud, K. The upside of automation: New jobs, increased productivity and changing roles for workers, 2018; https://bit.ly/45Hw7JQ
15. Epley, N., Waytz, A., Akalis, S. and Cacioppo, J.T. When we need a human: Motivational determinants of anthropomorphism. Social Cognition 26, (2008), 143–155.
16. Hanson, D. Why we should build humanlike robots. IEEE Spectrum (2011).
17. Harling, K.S., Mougenot, C., Ono, F., and Watanabe, K. Cultural differences in perception and attitude towards robots. Intern. J. Affective Engineering 13, 3 (2014), 149–157.
18. Harrison, M. and Hall, A. Anthropomorphism, empathy, and perceived communicative ability vary with phylogenetic relatedness to humans. J. Social, Evolutionary, and Cultural Psychology 4, 1 (2010), 34–48.
19. Hoenen, M., Lübke, K.T. and Pause, B.M. Non-anthropomorphic robots as social entities on a neurophysiological level. Computers in Human Behavior 57, (2016), 182–186.
20. Ito, M. Softbank's Pepper robot debuts as coffee machine salesman at Bic Camera, 2014; https://bit.ly/40c1T0b.
21. Kappes, A., Harvey, A.H., Lohrenz, T., Montague, P.R. and Sharot, T. Confirmation bias in the utilization of others' opinion strength. Nature Neuroscience 23, (2020), 130–137.
22. Kim, B., Bruce, M., Brown, L., Visser, E. and Phillips, E. A Comprehensive approach to validating the uncanny valley using the anthropomorphic RoBOT (ABOT) database. In Proceedings of the 2020 Systems and Information Engineering Design Symp. (Charlottesville, VA, USA, 2020), 1–6.
23. Kim, S. and McGill, A.L. Gaming with Mr. Slot or gaming the slot machine? Power, anthropomorphism, and risk perception. J. Consumer Research 38, 1 (2011), 94–107.
24. Kim, S.Y., Schmitt, B.H. and Thalmann, N.M. Eliza in the uncanny valley: Anthropomorphizing consumer robots increases their perceived warmth but decreases liking. Marketing Letters 30, (2019), 1–12.
25. Maeng, A. and Aggarwal, P. Facing dominance: Anthropomorphism and the effect of product face ratio on consumer preference. J. Consumer Research 44, 5 (2018), 1104–1122.
26. Markets and Markets Analysts. Service Robots Market (2021); https://bit.ly/3MfpwPL.
27. Mende, M., Scott, L., van Doorn, J., Grewal, D. and Shanks, I. Service robots rising: How humanoid robots influence service experiences and consumer responses. J. Marketing Research 56, 4 (2019), 535–556.
28. Mitchell, W., Szerszen, K., Lu, A., Schermerhorn, P., Scheutz, M. and MacDorman, K. A mismatch in the human realism of face and voice produces an uncanny valley. I-Perception 2, (2011), 10–12.
29. Mori, M. The uncanny valley. IEEE Spectrum (2012).
30. Mourey, J.A., Olson, J.G., and Yoon, C. Products as pals: Engaging with anthropomorphic products mitigates the effects of social exclusion. J. Consumer Research 44, 2 (2017), 414–431.
31. Natarajan, M. and Gombolay, M. Effects of anthropomorphism and accountability on trust in human-robot interaction. In Proceedings of the 2020 ACM/IEEE Intern. Conf. Human-Robot Interaction, (Cambridge, U.K., 2020), 33–42.
32. Phillips, E., Zhao, X., Ullman, D. and Malle, B.F. What is human-like? Decomposing robots' human-like appearance using the anthropomorphic roBOT (ABOT) database. In Proceedings of the 2018 ACM/IEEE Intern. Conf. Human-Robot Interaction. ACM, New York, N.Y., USA, 105–113.
33. van Pinxteren, M. et al. Trust in humanoid robots: Implications for services marketing. J. Service Marketing 33, 4 (2019), 507–518.
34. PwC. Will robots really steal our jobs? An international analysis of the potential long-term impact of automation, 2018.
35. Rangaswami, A. and Gupta, S. Innovation Adoption and Diffusion in the Digital Environments: Some Research Opportunities. New Product Diffusion Models. Springer, New York, NY, USA, 2020.
36. Roesler, E., Naendrup-Poell, L., Manzey, D. and Onnasch, L. Why context matters: The influence of application domain on preferred degree of anthropomorphism and gender attribution in human– robot interaction. Intern. J. Social Robotics 14, (2022), 1155–1166.
37. Sheehan, B., Jin, H.S. and Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Business Research 115, (2016), 14–24.
38. Torrance, S. Ethics and consciousness in artificial agents. AI & Soc 22, (2008), 495–521.
39. Ullman, D. and Malle, B.F. What does it mean to trust a robot? Steps toward a multidimensional measure of trust. Companion of the Proceedings 2018 ACM/IEEE Intern. Conf. Human-Robot Interaction, (New York, N.Y., USA, 2018), 263–264.
40. Visser, E.J. et al. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Experimental Psychology: Applied 22, 3 (2016), 331–349.
© 2024 Copyright held by the owner/author(s).