Communications of the ACM

Credibility and Computing Technology


For most of computing's brief history, people have held computers in high regard. A quick review of the popular culture from the past few decades reflects people's general confidence in computing systems. In cinema and literature, computers are often portrayed as infallible sidekicks in the service of humanity. In the consumer realm, computer-based information and services have been marketed as better, more reliable, and more credible sources of information than humans. Consider, for example, computerized weather prediction, computerized automotive analysis, and so-called computer dating. In these and other areas, the public has generally been led to believe that if a computer said it or produced it, it was believable.

But like many aspects of our human society, computers seem to be facing a credibility crisis. Due in part to the popularization of the Internet, the cultural myth of the highly credible computer may soon be history. Although healthy skepticism about computers can be a good thing, if the pendulum swings too far in this direction, computers—especially with respect to Web-based content—could be viewed as among the least credible information sources, rivaling TV infomercials and supermarket tabloids for such dubious distinction.

What is credibility? What makes computers credible? And what can we, as computer professionals, do to enhance the credibility of the products we design, build, and promote? We don't fully answer these questions here, but we define key terms, summarize knowledge on computer credibility, and suggest frameworks for understanding issues in this domain.

Believability

Credibility can be defined as believability. Credible people are believable people; credible information is believable information. Some languages use the same word for these two English words. In our research, we have found that believability is a good synonym for credibility in virtually all cases.

The academic literature on credibility dates back to the 1950s, arising mostly from the fields of psychology and communication. As a result of this research, scholars of credibility generally agree that credibility is a perceived quality; it doesn't reside in an object, a person, or a piece of information. Rather, in discussing the credibility of a computer product, one is always discussing a human perception or evaluation of an object's credibility. Scholars also agree that credibility results from evaluating multiple dimensions simultaneously. Although the literature varies on how many dimensions contribute to credibility evaluation, the vast majority of researchers identify "trustworthiness" and "expertise" as the two key components of credibility [10].

Trustworthiness is defined as being well-intentioned, truthful, and unbiased. The trustworthiness dimension of credibility captures the perceived goodness or morality of the source (see Berdichevsky et al.'s "Toward an Ethics of Persuasive Technology" in this issue). Rhetoricians in ancient Greece used the term "ethos" to describe this concept. Expertise is defined as being knowledgeable, experienced, and competent. The expertise dimension of credibility captures the perceived knowledge and skill of the source. In other words, in evaluating credibility, a person assesses both trustworthiness and expertise to arrive at an overall credibility assessment. Together, these ideas suggest that highly credible computer products are perceived as having high levels of both trustworthiness and expertise (see the sidebar Semantic Problems Discussing Credibility).
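As a rough illustration of this two-dimension view, consider the following sketch (our own toy model, not one drawn from the credibility literature; the scoring rule and numbers are assumptions). It represents a perceiver's evaluation as separate trustworthiness and expertise scores and combines them so that a product rated low on either dimension comes out with only modest overall credibility.

```python
# Illustrative sketch only: a toy model of credibility as a perceived quality
# combining two dimensions, trustworthiness and expertise. The scoring rule
# (here, the geometric mean) is an assumption for illustration, not a model
# taken from the credibility literature.

from dataclasses import dataclass
from math import sqrt

@dataclass
class PerceivedCredibility:
    trustworthiness: float  # perceived goodness/morality of the source, 0.0-1.0
    expertise: float        # perceived knowledge and skill of the source, 0.0-1.0

    def overall(self) -> float:
        """Combine both dimensions; a low score on either drags the result down."""
        return sqrt(self.trustworthiness * self.expertise)

# A product seen as well-intentioned but incompetent is not highly credible,
# and neither is one seen as expert but biased.
print(PerceivedCredibility(trustworthiness=0.9, expertise=0.3).overall())  # ~0.52
print(PerceivedCredibility(trustworthiness=0.9, expertise=0.9).overall())  # ~0.90
```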

When people interact with computers, credibility usually matters, though not always. Computer credibility does not matter when:

  • Users are not aware of the computer (such as in an automobile's fuel-injection system);
  • Users don't recognize the possibility of computer bias or incompetence (such as when using a pocket calculator);
  • Users have no investment in the interaction (such as when idly surfing the Web); and
  • The computer acts only as a transmittal device (such as in a video conference).

In such situations, either the user is unaware of the computer itself or the dimensions of computer credibility (trustworthiness and expertise) are not at stake, so credibility does not matter to the user. But in many situations, credibility is key. We propose seven general categories to describe when credibility matters in human-computer interactions:

When computers act as knowledge repositories. Credibility matters when computers provide data or knowledge to users. The information can be static, such as in simple Web pages or an encyclopedia on CD-ROM. But computer information can also be dynamic; computers can tailor information in real time to, say, match users' interests, personalities, or goals. In either case, users may question the credibility of the information provided.

When computers instruct or tutor users. Computer credibility matters when computers give advice or instructions to users. The advice-giving computer is obvious sometimes. For example, automobile navigation systems give advice about which route to take; online help systems advise users on solving problems; and portable computers advise users on when to plug in before the battery power runs out. These are clear instances of computers giving advice. The advice from a computing system can also be subtle; for example, interface layout and menu options can be a form of advice. Consider too a default button on a dialog box. The fact that one option is automatically selected as the default suggests that certain paths are more likely or profitable for most users. One can imagine that when default options are chosen without care, the program could lose credibility to some degree.

When computers report measurements. Computer credibility is at stake when computing devices act as measuring instruments, like those in engineering (such as an oscilloscope), medicine (such as a glucose monitor), geography (such as devices with Global Positioning System technology), and more. In some circles, such as the test and measurement arena, the introduction of digital instruments to replace analog devices in the 1970s and early 1980s raised questions about credibility. Many engineers felt digital technology was less credible than analog technology, often preferring to keep their analog devices rather than adopt the newer digital counterparts.

When computers report on work performed. Computers need credibility when reporting to users on work the computers themselves have performed. For example, computers report the outcome of a software installation, the eradication of a virus, or the spelling check of a document. In such cases, the credibility of the computer is at issue if the work it reports does not match what actually happened. For example, suppose a user runs a spell check, and the computer reports no misspelled words. If the user later finds a misspelled word, the credibility of the program suffers.

When computers report on their own state. Credibility is also at stake when computers report on their own state, such as how much disk space they have left, how long their batteries will last, or how long a process will take. Such self-reports raise questions about the computer's competence to convey accurate information about itself and are likely to influence user perceptions of credibility.

When computers run simulations. Credibility is important when computers run simulations, such as those involving aircraft navigation, chemical processes, social dynamics, and nuclear disasters. Simulations can show cause-and-effect relationships, such as the progress of a disease in a population or the effects of global warming. Similarly, they can replicate the dynamics of an experience, such as piloting an aircraft or caring for a baby. In all cases, simulations are based on rules provided by humans—rules that may be flawed or biased. Even if the bias is unintentional, when users perceive the computer simulation lacks veridicality, or authenticity, the computer application loses credibility.

When computers help render virtual environments. Related to simulations is the computer's ability to help create virtual environments for users. Credibility is important in making these environments believable, useful, and engaging. However, virtual environments don't always need to match the physical world; they simply need to model what they propose to model. For example, like good fiction or art, a virtual world or a fanciful arcade game can be highly credible if the world is internally consistent.

These seven categories are not exhaustive; future work on computer credibility is needed to add to and refine them. Neither are these categories mutually exclusive; a complex system, such as an aviation navigation system, might incorporate elements from various categories to present information about, say, weather conditions, airspeed, visual simulations, and the state of the onboard computer system.

Four Types of Credibility

Although psychologists have outlined the factors contributing to credibility, their earlier research has not specified the various types of credibility. But for all of us who come in contact daily with computers, identifying these types is useful, especially when pondering how computing systems gain—or lose—credibility. Therefore, we propose four types of credibility: presumed, reputed, surface, and experienced. Our intent in defining these categories is to provide new ways to think about (and a richer vocabulary for discussing) computer credibility to enhance our collective ability to research, design, and evaluate credible computers. The overall assessment of computer credibility may rely on evaluating all four types simultaneously. The descriptions that follow summarize the related literature where possible (for a more thorough treatment, see [3]).

Presumed credibility. "Presumed credibility" describes how much the perceiver believes someone or something because of general assumptions in the perceiver's mind. For example, people assume their friends tell the truth, so they view their friends as credible. In contrast, people assume car salespeople may not always tell the truth and therefore lack credibility. The negative view of car salespeople is a stereotype, but that's the essence of presumed credibility; assumptions and stereotypes contribute to credibility perceptions. As we said earlier, all people tend to make assumptions about the credibility of computers [8, 12]. One line of related research investigates the presumed credibility people attach to computers, perceiving them as being "magical" [2]; having an "aura of objectivity" [1]; having a "scientific mystique" [1]; having "superior wisdom" [11]; and being "faultless" [11]. In short, researchers often suggest that people are generally "in awe" of computers and "assign more credibility" to computers than to humans [1].

What does the empirical research show? The studies that directly examine assumptions about computer credibility conclude that computers are not perceived as more credible than human experts [1] and may also be perceived as less credible [7, 12]. It's surprising that no solid empirical evidence supports the idea that people perceive computers as being more credible than humans. However, despite the lack of empirical evidence, it seems clear that many cultures assume computers are highly credible, providing expertise without introducing bias. That's presumed credibility.


Reputed credibility. "Reputed credibility" describes how much the perceiver believes someone or something because of what third parties have reported. In our everyday interactions, reputed credibility plays an important role; prestigious awards (such as the Nobel Prize) or official titles (such as Doctor and Professor) granted by third parties tend to make people seem more credible.

In applying this phenomenon to the world of technology, researchers labeled a technology as a "specialist" as part of an experiment. Their results showed that people perceived the technology thus labeled to be more credible than the one labeled "generalist" [9]. Only the label was different, but it made a significant difference in user response to that technology.

Reputed credibility for computing technology extends beyond labeling effects. A third party, such as the magazine Consumer Reports, may run tests showing that Intuit makes highly accurate tax software. This third-party report would give Intuit's computer products a high level of reputed credibility. On the Web, reputed credibility is pervasive. A link from one Web site to another site is often viewed as a third-party endorsement, likely to increase the linked site's perceived credibility.

Surface credibility. "Surface credibility" describes how much a perceiver believes someone or something based on simple inspection. With surface credibility, people are judging a book by its cover. In the world of human relationships, we make credibility judgments of this type nearly automatically. The way people dress or the language they use immediately influences our perception of their credibility. The same holds true for computer systems and applications. For example, a Web page may appear credible just because of its visual design. The solid feel of a handheld computing device can lead users to perceive it as credible.

A recent line of research weighed the effects of interface design on perceptions of computer credibility [5]. These experiments showed that—at least in laboratory settings—certain interface design features, such as cool color tones and balanced layout, enhanced user perceptions of interface trustworthiness—a component of credibility. Although the design cues for credibility may differ depending on user, culture, and target application, this research represents an important precedent in the scholarship dealing with the effects of interface design on perceived trustworthiness and credibility.

Experienced credibility. "Experienced credibility" refers to how much a person believes someone or something based on first-hand experience. For example, interacting with people over time, we assess their expertise and trustworthiness. This assessment helps us evaluate their subsequent statements or suggestions. For example, tax lawyers who prove themselves competent and fair over time earn the perception of strong credibility with their clients. A similar dynamic holds for our interactions with computing systems or devices. For example, fitness enthusiasts may determine over time that their computerized heart-rate monitors are highly accurate, thus earning strong credibility.

Computing technologies can also lose credibility over time. Consider a spell-checking application that identifies correct words as possibly being misspelled. Users may soon learn that the spell-checker's level of expertise is low and view the application as less credible. Or travelers using an information kiosk may discover the kiosk provides information only for restaurants paying fees to the kiosk owner. While this exclusivity may not be evident at first glance, such bias may become apparent over time and hurt the credibility of the kiosk.

Experienced credibility may be the most complex of the four types of computer credibility. Because it hinges on first-hand experience with the computer product, it includes a chronological component, which raises questions about the dynamics of computer credibility.

Credibility Gained, Lost, Regained

Looking at the dynamics of computer credibility raises three related questions: How is it gained? How is it lost? And how can it be regained? The four types of credibility suggest different ways computers gain or lose credibility.

In examining how computers gain or lose credibility over time, some studies demonstrate what is highly intuitive—that computers gain credibility when they provide information users find accurate or correct [4, 8]; conversely, computers lose credibility when they provide information users find erroneous [4, 6, 8]. Although these conclusions seem obvious, the research is valuable because it represents the first empirical evidence supporting these ideas.

Other findings on the dynamics of credibility are more subtle, focusing on how different types of computer errors influence user perceptions of computer credibility. Although researchers acknowledge that a single error may severely damage computer credibility in certain situations [4], no study has clearly documented this effect. In one study, error rates as high as 30% did not cause users to dismiss an onboard automobile navigation system [4]. But in other contexts, such as getting information from an automated teller machine, a similar error rate would likely cause users to reject the technology completely.

It seems clear that computer errors damage the perception of credibility—to some extent at least. But what are the effects of serious errors vs. trivial ones? One study demonstrated that large errors hurt credibility perceptions more than small errors, but not in proportion to the gravity of the error [6]. Another study showed no difference between the effects of large and small mistakes on credibility [4]. Together with other work [8], these studies make one conclusion clear: small errors by computers have disproportionately large effects on perceptions of credibility.
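To make this asymmetry concrete, the following toy update rule (our own illustration; the rates are assumptions, not parameters reported in the studies cited above) lets each correct output nudge perceived credibility up only slightly while each error cuts it sharply. A single error then takes dozens of correct outputs to undo, which also previews why regaining lost credibility is so difficult in practice.

```python
# Illustrative sketch only: an asymmetric trust-update rule in which errors
# hurt perceived credibility far more than correct outputs help it. The
# specific rates are assumptions for illustration, not values from the
# cited studies.

def update(credibility: float, correct: bool,
           gain: float = 0.02, penalty: float = 0.40) -> float:
    """Move credibility toward 1.0 slowly on a correct output,
    and toward 0.0 sharply on an error."""
    if correct:
        return credibility + gain * (1.0 - credibility)
    return credibility * (1.0 - penalty)

credibility = 0.80
credibility = update(credibility, correct=False)   # one error: 0.80 -> 0.48
print(round(credibility, 2))

recoveries = 0
while credibility < 0.80:                          # count correct outputs needed to recover
    credibility = update(credibility, correct=True)
    recoveries += 1
print(recoveries)                                  # dozens of correct outputs to undo one error
```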

Once computers lose credibility, they can regain it in one of two ways: by providing good information over a period of time [4] or by continuing to make the identical error, allowing users to learn to anticipate and compensate for the persistent error [8]. In either case, regaining credibility is difficult, especially from a practical standpoint. Once users perceive a computer product lacks credibility, they are likely to stop using it, leaving it no opportunity to regain its credibility via either path [8].

User Variables and Credibility Evaluations

Up to this point, we've discussed computer credibility as though it applied identically to all people. This is not the case. Although research on how individual variables, such as level of expertise and personality type, affect computer credibility is limited, four points about users and their credibility evaluations are notable, even if the conclusions are not beyond dispute.

User expertise. Expertise influences how people evaluate the credibility of computing systems. Computer users familiar with particular content (such as an experienced surgeon using a surgery simulation) evaluate the computer product more stringently and are likely to perceive the product as less credible, especially in the face of computer mistakes [4, 7]. Similarly, those not familiar with the subject matter are more likely to view the product as more credible [12]. These findings agree with credibility research outside the field of human-computer interaction [10].

User understanding. What about a user's understanding of how the computer system arrives at its conclusions? The research results are mixed. One study showed that knowing more about the computer actually reduced users' perception of credibility [2]. Other researchers have shown the opposite; users were more inclined to view a computer as credible when they understood how it worked [6, 7]. We clearly need more research in this area.

User need for information. The last of these user variables is people's need for information, which seems to affect their willingness to accept information from computing technology. People with a greater need are generally more likely to accept information from the technology. Specifically, people in unfamiliar situations, or people who have already failed at the task when relying only on themselves, perceive a computing technology as more credible [4].

Evaluation errors. The existing, though limited, research on user variables confirms what most of us would suspect—that people are more likely to perceive a computer as credible when they lack expertise in the subject matter or face an unfamiliar problem or task. In general, people who lack expertise seem less willing or able to be skeptical about a computing technology designed to help them. But blind faith could lead to mindlessly accepting information or services from a computer. This so-called "gullibility error" means that even though a computer product is not credible, users perceive it to be credible. In contrast, experts face another problem: They may reject information or services from a computer that might have been useful to them—the "incredulity error" (see [6, 11]). (Table 1 relates these two errors as part of a matrix of credibility evaluations.)
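The structure of that matrix can be sketched in code (our own illustration; the labels for the two correct judgments are ours, while the two error names come from the text). Crossing a product's actual credibility with the user's perception yields four evaluations: two appropriate judgments plus the gullibility and incredulity errors.

```python
# Illustrative sketch only: classifying a credibility evaluation by crossing
# whether the product actually is credible with whether the user perceives
# it as credible. The error labels follow the text; the labels for the two
# correct judgments are assumptions.

def classify_evaluation(actually_credible: bool, perceived_credible: bool) -> str:
    if actually_credible and perceived_credible:
        return "appropriate acceptance"      # credible product, believed
    if not actually_credible and not perceived_credible:
        return "appropriate rejection"       # non-credible product, not believed
    if not actually_credible and perceived_credible:
        return "gullibility error"           # believing a product that is not credible
    return "incredulity error"               # rejecting a product that is credible

print(classify_evaluation(actually_credible=False, perceived_credible=True))   # gullibility error
print(classify_evaluation(actually_credible=True, perceived_credible=False))   # incredulity error
```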

While both types of error are relevant to people designing and building computing systems, individuals and institutions (especially those in education) have embraced the mission of teaching people to avoid the gullibility error. They teach information seekers to use credibility cues, such as the accuracy of the information and the authority of the source, to determine what is likely to be credible or not. This work often falls under the heading "information quality" (see, for example, www.vuw.ac.nz/~agsmith/evaln/evaln.htm).

While individuals and institutions seek to reduce errors of gullibility, no outside group has specifically sought to reduce errors of incredulity. The burden for achieving credibility in computing systems seems to rest squarely with those of us who create, design, and distribute the related products. In a larger view, our challenge is to reduce incredulity errors without increasing gullibility errors. An ideal would be to create computing systems that convey appropriate levels of credibility, especially for the purpose of achieving high levels of experienced credibility. We can achieve this goal by improving our understanding of the elements, types, and dynamics of computer credibility.

Looking Ahead

Although it may be an overstatement to say that computers are facing a credibility crisis, the credibility of computing products will be a growing concern—within and outside our professional circles. That's why we outlined key definitions associated with computer credibility, described situations in which computer credibility is a salient factor, and proposed new frameworks for understanding the elements of computer credibility better. Our intent is to raise awareness of computer credibility, hoping to prompt further work and discussion in this area. That work will focus on increasingly specific issues in computer credibility, expanding and revising the topics addressed here.

References

1. Andrews, L., and Gutkin, T. The effects of human vs. computer authorship on consumers' perceptions of psychological reports. Comput. Hum. Behav. 7 (1991), 311–317.

2. Bauhs, J., and Cooke, N. Is knowing more really better? Effects of system development information in human-expert system interactions. In CHI'94 Companion (Boston, Apr. 24–28). ACM Press, New York, 1994, pp. 99–100.

3. Fogg, B., and Tseng, H. The elements of computer credibility. In Proceedings of CHI'99 (Pittsburgh, May 15–20). ACM Press, New York, 1999.

4. Kantowitz, B., Hanowski, R., and Kantowitz, S. Driver acceptance of unreliable traffic information in familiar and unfamiliar settings. Hum. Factors, 39, 2 (1997), 164–176.

5. Kim, J., and Moon, J. Designing towards emotional usability in customer interfaces: Trustworthiness of cyber-banking system interfaces. Interact. with Comput. 10 (1997), 1–29.

6. Lee, J. The dynamics of trust in a supervisory control simulation. In Proceedings of the Human Factors Society 35th Annual Meeting (Santa Monica, Calif., Sept. 2–6). Human Factors and Ergonomics Society, Santa Monica, Calif., 1991, pp. 1228–1232.

7. Lerch, F., and Prietula, M. How do we trust machine advice? Designing and using human-computer interfaces and knowledge-based systems. In Proceedings of the Third Annual Conference on Human-Computer Interaction. Elsevier, Amsterdam, 1989, pp. 411–419.

8. Muir, B., and Moray, N. Trust in automation: Experimental studies of trust and human intervention in a process control simulation. Ergonom. 39, 3 (1996), 429–460.

9. Reeves, B., and Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, New York, 1996.

10. Self, C. Credibility. In An Integrated Approach to Communication Theory and Research, M. Salwen and D. Stacks, Eds. Erlbaum, Mahwah, N.J., 1996.

11. Sheridan, T., Vamos, T., and Aida, S. Adapting automation to man, culture, and society. Automat. 19, 6 (1983), 605–612.

12. Waern, Y., and Ramberg, R. People's perception of human and computer advice. Comput. Hum. Behav. 12, 1 (1996), 17–27.

Authors

Shawn Tseng ([email protected]) is a research analyst at Quattro Consulting in Sausalito, Calif.

B. J. Fogg ([email protected]) directs the Persuasive Technology Lab at Stanford University's Center for Language and Information in Palo Alto, Calif.

Tables

Table 1. Four evaluations of credibility


©1999 ACM  0002-0782/99/0500  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
