Advances in network and microprocessor technology have accelerated the adoption of computing in areas such as consumer shopping, banking, voting, and automotive systems. At the same time, the widespread proliferation of viruses and recent catastrophic power outages have made the general public all too aware of the associated risks.
Trust may be a crucial factor in the successful introduction of new products and services, including computer technology. However, implementation of poorly analyzed technical solutions can backfire. A specific example is the introduction of high-technology voting equipment at U.S. polling places in 2004, intended to increase public confidence and trust in the election process. Reported problems with the equipment raise questions about whether fielding this new technology will increase or decrease public trust in the voting process.
Another example where a valid trust model would be very helpful is the evolution of intelligent vehicles. Applications that will rely on direct inter-vehicle communication (IVC) illustrate a fundamental dilemma of trust: although the network is made up of potentially untrustworthy peers, it must be survivable, and attacks must be detected in a distributed manner. Central to resolving this dilemma is the ability of a host to assess the trustworthiness of the entities it encounters. Unfortunately, the underlying wireless networks specified in early commercial systems, emerging standards, and the research literature address neither trust nor survivability. Planned IVC messages are propagated by individual vehicles acting as repeaters or routers [11], yet current proposals for managing the routing layer ignore the potential for attacks on these devices: they could be hacked to inject false messages, modify messages, or fail to forward messages.
J.B. Rotter defined interpersonal trust as a generalized expectancy held by an individual that the word, promise, or oral or written statement of another can be relied on [12]. Modifying this definition for applicability to human trust in automation, we define trust as the expectation that a service will be provided or a commitment will be fulfilled. With this definition, expectation is a key component. Users' expectations may be based on many things, such as their knowledge of the technology, their experience with similar systems, and the technology's past performance.
Changes in the factors that affect users' expectations will also impact users' trust levels.
Work on defining trust and trust metrics has focused primarily on public key authentication and e-commerce [8]. Trust models and metrics for public key infrastructure systems address authentication between sender and receiver entities, message integrity, and data confidentiality. These are all aspects of a security model, and the terms trust model and security model are often used synonymously. From a user's point of view, security is extremely important in trusting that computer-based technology will perform the function the user requested. However, other factors can be just as important from the user's perspective. Usability is one important factor in whether users trust technology; reliability and availability are others, and often privacy and safety are as well.
As illustrated in Figure 1, separate security, usability, reliability, availability, safety, and privacy models exist today within the engineering disciplines, and all of these models incorporate some limited aspects of trust. There is little or no data sharing between the individual functional models. Current trust models have been developed based on specific security issues, or solely on knowledge, experience, practices, and performance history [5]. In addition, much of the prior research on trust in automation has focused primarily on the psychological aspect of trust [10]. Rarely has prior work addressed all of these areas together.
Proposals have been presented to link some of the psychological aspects of trust with engineering issues. For example, attempts have been made to map psychological aspects of trust (reliability, dependability, and integrity) to human-machine trust clusters associated with engineering trust issues such as reliability and security [7]. This article only briefly touches upon these psychological aspects of trust, focusing instead on the engineering aspects of a trust model. Future work will include more direct mappings of the psychological aspects (as described by Jian et al., Dzindolet et al., and Camp, for example) into the trust model [3, 6, 7]. A thorough understanding of both the psychological and engineering aspects of trust is necessary to develop an appropriate trust model.
A comprehensive trust model of computer-based technology must predict how usability, reliability, privacy, and availability (and possibly other factors), as well as security, affect user trust.
A general trust model and accompanying metrics should be used to predict and measure user trust levels in new or updated applications of computer-based technology before committing to full-scale development and installation efforts. The new trust model incorporates security, privacy, safety, usability, reliability, and availability factors into a trust vector, paying careful attention to the interactions and synergies among these factors. In addition, the trust model will incorporate factors not used in previous models, such as verification techniques, user knowledge, user experience, and trust propagation. Some current models, such as security models, may include some aspects of measuring trust, while others, such as usability models, rarely address the issue of user trust. The new expanded trust model will contain data that can be imported from existing models (that is, security, reliability, availability, usability, privacy, and safety) to form a comprehensive model of user trust of a system, as illustrated in Figure 2.
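To make the idea of a trust vector concrete, the following minimal sketch aggregates factor scores into a single trust level using a weighted average. The article does not prescribe a particular combination rule, so the weighted average, the scores, and the per-user-class weights below are all illustrative assumptions, not the model's actual parameterization.

```python
# Illustrative sketch (not the authors' implementation): a trust vector
# whose components are normalized factor scores in [0, 1], combined with
# per-user-class weights into a single scalar trust level.
from dataclasses import dataclass

FACTORS = ("security", "privacy", "safety",
           "usability", "reliability", "availability")

@dataclass
class TrustVector:
    scores: dict  # factor name -> hypothetical score in [0, 1]

    def trust_level(self, weights: dict) -> float:
        """Weighted average of factor scores; the weights express how
        much a given class of user cares about each factor."""
        total = sum(weights.values())
        return sum(self.scores[f] * weights[f] for f in FACTORS) / total

# Hypothetical assessment of a fielded system.
system = TrustVector(scores={
    "security": 0.7, "privacy": 0.8, "safety": 0.9,
    "usability": 0.6, "reliability": 0.85, "availability": 0.9,
})

# Different user classes (see the discussion of voters and election
# officials below) weight the factors differently; weights are assumed.
voter = {"security": 3, "privacy": 3, "safety": 1,
         "usability": 3, "reliability": 2, "availability": 2}
official = {"security": 3, "privacy": 1, "safety": 1,
            "usability": 2, "reliability": 3, "availability": 3}

print(f"voter trust:    {system.trust_level(voter):.2f}")
print(f"official trust: {system.trust_level(official):.2f}")
```

The same system thus yields different trust levels for different user classes, which is exactly the property the multi-user model described next requires.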
This trust model will be usable by individuals and groups with different and possibly conflicting interests. While previous trust models in e-commerce have been developed to measure the trust of a single customer (the purchaser of a product or service), the proposed trust model will be able to measure trust of differing users. For example, a trust model of voting systems might have at least two types of users (voters and election officials) who may have different levels of trust and uncertainty with various aspects of the voting system. As another example, two classifications of users of an intelligent vehicle communications system would be drivers and traffic safety engineers.
Metrics must be defined to measure user trust and distrust of a system. Quantitative metrics, qualitative metrics, fuzzy metrics, or a combination of these should be used to measure trust levels.
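As one hedged illustration of a fuzzy metric, the sketch below maps a numeric trust score in [0, 1] onto linguistic labels via triangular membership functions. The labels and breakpoints are assumptions chosen for illustration; they are not part of the model.

```python
# Illustrative fuzzy trust metric: triangular membership functions map a
# numeric trust score in [0, 1] to degrees of membership in linguistic
# trust categories. Labels and breakpoints are assumed, not prescribed.

def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_trust(score: float) -> dict:
    # Edges extended slightly past [0, 1] so the endpoints get full
    # membership in the extreme categories.
    return {
        "distrusted": triangular(score, -0.01, 0.0, 0.4),
        "uncertain":  triangular(score, 0.2, 0.5, 0.8),
        "trusted":    triangular(score, 0.6, 1.0, 1.01),
    }

print(fuzzy_trust(0.72))  # partly 'uncertain', partly 'trusted'
```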
Some aspects of the trust model (for example, cryptographic techniques for enhanced system security or redundancy features to increase system reliability and availability) will be generic and can be applied to more than one system. Other aspects of the model may be specifically designed for a given application system.
The trust model will also explore the connection between verification and trust. Different examples of this connection will be analyzed, including blind trust (no verification required), trust with verification, trust based on knowledge, trust based on experience, and trust between principals and agents (propagation of trust).
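Two of these connections lend themselves to simple formalization, sketched below: propagation of trust between principals and agents (here, a common heuristic that multiplies trust along a delegation chain) and trust based on experience (here, an estimate nudged toward 1 or 0 after each observed outcome). Both update rules are illustrative assumptions, not rules taken from the article.

```python
# Illustrative sketches of trust propagation and experience-based trust;
# the discount rule and learning rate are assumptions for illustration.

def propagated_trust(chain):
    """Trust between a principal and a distant agent: multiply trust
    along the delegation chain, so trust can only erode as it
    propagates through intermediaries."""
    result = 1.0
    for hop in chain:
        result *= hop
    return result

def experience_updated_trust(prior, outcome, rate=0.1):
    """Trust based on experience: move the estimate toward 1 after a
    fulfilled commitment and toward 0 after a failure."""
    target = 1.0 if outcome else 0.0
    return prior + rate * (target - prior)

print(propagated_trust([0.9, 0.9, 0.8]))  # ~0.65 after three hops

t = 0.5  # start from indifference
for outcome in (True, True, False, True):
    t = experience_updated_trust(t, outcome)
print(round(t, 3))  # rises with fulfilled commitments, dips on failure
```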
Since no system is perfect, the question "Who watches the watchers?" must be addressed. Thus, audit capabilities (both electronic and non-electronic, performed by a single user or by multiple trusted agents) will be included in the trust model. The trust model will show the effects of security and verification mechanisms on trust levels.
Implementation and support of cryptographic algorithms are fundamental to the strength of the trust model, regardless of the specific application. In establishing trust in a transaction using a distributed computer system, users will ask one or more questions of the following sort: Is the party I am dealing with who it claims to be? Can my data be altered without detection? Will my identity and my choices remain private? Is the remote platform running the software it claims to run?
Various cryptographic mechanisms can be used to meet these trust model requirements [4]: blind signatures, which allow signature verification without disclosing the contents of the signed document; anonymous signatures, which allow electronic data to be authenticated without revealing the identity of the signer; e-receipts, which support authentication, user anonymity, data integrity, and confidentiality in applications such as e-voting and e-commerce; and remote platform integrity challenges, in which a remote host electronically verifies the integrity of a target platform. A non-exhaustive list of variables that will be included in the expanded trust model is shown in Table 1.
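To make the first of these mechanisms concrete, the toy sketch below uses deliberately tiny RSA parameters to show how a signer can sign a value without ever seeing the underlying message. Real deployments require full-size keys and proper padding; this illustrates the idea only.

```python
# Toy RSA blind signature (illustration only; Python 3.8+ for modular
# inverse via pow(x, -1, m)). The signer signs a blinded value and never
# learns the message m, yet the unblinded result verifies normally.
import random
from math import gcd

# Tiny RSA key for illustration: n = p*q, e*d = 1 (mod phi).
p, q, e = 1009, 1013, 65537
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

m = 123456  # message digest to be signed; must satisfy m < n

# Requester blinds m with a random factor r before sending it out.
r = random.randrange(2, n)
while gcd(r, n) != 1:
    r = random.randrange(2, n)
blinded = (m * pow(r, e, n)) % n

# Signer signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Requester unblinds to recover an ordinary RSA signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m % n  # verifies like any RSA signature
print("blind signature verified")
```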
Here, we examine more closely how one might apply a trust model to two applications that would benefit from it: voting systems and IVC systems.
Voting Systems. Voting problems in the U.S. elections of 2000 and 2002 raised the question of whether citizens are losing faith in the integrity of the voting process. The problems in these elections, which related to voter registration, vote casting, and vote-count tallying, led to new initiatives to correct them. The most significant remedy enacted was the Help America Vote Act (HAVA). HAVA was drafted specifically with the intention of replacing unreliable voting systems, such as those using punched card ballots, with systems that employ more advanced technology for vote casting, such as optical scan and direct recording electronic (DRE) machines. In addition, a specific section of HAVA charges the newly formed Election Assistance Commission to study and report on e-voting and the electoral process. Legislators inserted this wording due to concerns about the impact of electronic and Internet technologies on the integrity of the voter registration, vote casting, and vote counting aspects of the electoral process.
In response to HAVA and the negative publicity from the 2000 elections, state election boards have turned to electronic technology as the solution for restoring the public's trust in the voting system. In addition to purchasing optical scan and DRE machines, some jurisdictions have implemented Internet registration and voting processes. The cost, ease of use, and maintenance of these new technologies are relatively straightforward to evaluate, and there is early evidence of significant improvements in areas such as usability and the reduction of unintended undervoting. Nevertheless, questions of security and of public trust in the integrity of newly purchased DRE machines are being raised today [9]. Worse, little attention has been focused on the security and integrity of the e-registration aspects of voting, though problems in this area could affect election outcomes to the same extent as problems with vote casting and vote counting. When deciding whether to purchase and deploy new computer technology, election boards (the users) have little ability, and no formal method, to assess whether deploying the technology will maintain or increase public trust in the voting system.
We don't question whether Internet and electronic transactions will be used in voting systems; this is inevitable. Market pressures, convenience, and utility have traditionally trumped security concerns. (Automobiles were introduced in 1896, but seat belts were not offered until 1955 and did not become standard equipment in the U.S. until the late 1960s.)
We do suggest that trust models and metrics can be developed and used to facilitate the successful deployment of new technology to be used by the general public. In particular, the trust model for e-voting must properly handle as composable subsystems three processes: voter registration, vote casting, and vote counting.
A non-exhaustive list of variables that will be included in the expanded trust model for an e-voting application is shown in Table 2. Metrics for voting systems will include, but will not necessarily be limited to, measures of the auditability and verifiability of the registration, vote casting, and vote tally processes.
We posit that the voting system application will be more trusted if audit and verification capabilities are observable and measurable in the registration, vote casting, and vote tally processes. During the registration process, voters and authorized agents should be able to view and verify voter registration lists using secure computer technology. During the vote casting process, election workers should be able to verify voter authorization by accessing the voter registration list via secure electronic means, and voters should be able to verify that their votes were cast as intended. Finally, during the vote tally process, election workers should be able to verify that only authorized cast votes were included in the final vote tally, and individual voters should have a method of verifying that their votes were included in the final tally. The trust model can be used to generate design requirements for a trusted e-voting system.
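As a hedged illustration of the last requirement, the sketch below shows a bare-bones hash-based receipt: the voter keeps a secret nonce, only hashes are published, and the voter can later confirm inclusion in the tally. This is not the receipt scheme of [4]; indeed, such a naive receipt would let a voter prove how they voted and thus invite coercion, which is precisely why practical schemes are more elaborate.

```python
# Minimal sketch of receipt-based tally verification (illustrative;
# NOT the scheme of [4]). The published list contains only hashes, and
# the voter can confirm their ballot was counted using a private nonce.
import hashlib
import secrets

def make_receipt(ballot):
    nonce = secrets.token_hex(16)  # kept private by the voter
    digest = hashlib.sha256(f"{nonce}|{ballot}".encode()).hexdigest()
    return nonce, digest           # the digest is what gets published

# Election side: publish the hash of every counted ballot.
nonce, receipt = make_receipt("candidate-A")
published_tally_hashes = {receipt}  # plus all other voters' receipts

# Voter side: recompute the hash and check that it appears in the tally.
check = hashlib.sha256(f"{nonce}|candidate-A".encode()).hexdigest()
print("vote counted:", check in published_tally_hashes)
```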
E-voting systems can be tested to verify that they meet the specific trust requirements of various groups of users (such as voters and election administration officials) and iteratively refined until they meet acceptable trust thresholds. At that point, users can participate in elections and then provide input regarding their trust levels in these voting systems. The trust metrics generated by users of the various e-voting systems will be compared with the trust metrics predicted for those systems, and the trust model may be updated as we learn more from applying it to various e-voting system configurations and user populations.
IVC Systems. An IVC network provides an ideal and complementary application for the trust model. Despite the very different application area, many of the variables in the trust model are identical to those in the voting application. An IVC network is likewise a large distributed system in which privacy, availability, platform integrity, and data integrity are central to trustworthiness, and it introduces a different set of variables, ranging from real-time requirements to issues arising from ad hoc wireless networking. Like the voting application, the intelligent vehicle application represents a crucial application in which trust is a roadblock to implementation. Consequently, the trust model is an ideal mechanism to generate the design requirements for a technology that can save lives.
Future generations of in-vehicle Intelligent Transportation Systems (ITS) will network with nearby vehicles for enhanced safety and efficiency. Intelligent vehicles will ascertain the intentions and dynamics of nearby vehicles and the presence of roadway hazards. These ITS technologies will allow safe, tight inter-vehicle spacing in vehicle platoons, coordinate collision avoidance at intersections, and prove instrumental in degraded-visibility conditions such as heavy fog. For efficiency and cost reasons, the wireless communication will ideally take place directly between vehicles.
Although the complete communications architecture underlying these networked ITS applications remains unspecified, proposals are emerging that specify the structure of the architecture's lowest layers. Most of these proposals, as seen in communications standards, commercial efforts, and research systems, share the approach of requiring node cooperation for media access control. The U.S. government anticipates the use of physical and media access controls based on the ANSI/IEEE 802.11 standards. MeshNetworks has developed a commercial system for ITS applications, also based on the 802.11 standards, which manages vehicle-roadside communications and vehicle-vehicle applications. From the perspective of computer security, a salient feature of both systems is the use of the 802.11 Distributed Coordination Function (DCF) for ad hoc networking. Under the DCF standard, an uncooperative node can lead to denial of service for the nodes within its communication range.
For example, a jamming attack could be aimed at vehicle platoons. Because platoons are designed to increase roadway capacity, and because drivers routinely violate safe following distances anyway, platoons are not designed as fail-safe systems and collisions are possible. Collisions should not occur as long as each vehicle is provided with the lead vehicle's dynamics via inter-vehicle communication. If that communication is disrupted, however, a collision could be severe.
Figure 3 shows the steps leading to such a collision. In Step 1, the lead car of the platoon brakes hard because a vehicle merges into its lane, and it transmits its dynamics to the vehicle behind it. In Step 2, the message containing the lead car's dynamics is propagated to the next following vehicle. In Step 3, however, the message does not reach the last vehicle in the platoon because the wireless signal is jammed. If this breakdown occurs in the middle of a long platoon, a serious multi-car pileup can occur.
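A back-of-the-envelope simulation makes the danger quantitative. In the sketch below, every parameter (speed, gap, braking rate, and notification delay) is an assumed value for illustration; the point is how sharply the minimum separation degrades when jamming lengthens the delay before a follower learns that the lead car is braking.

```python
# Simplified simulation of the Figure 3 scenario. All parameters are
# assumptions: both cars brake at the same rate, but the follower only
# begins braking `delay` seconds after the lead car does.

def min_gap(v0=30.0, gap=12.0, decel=8.0, delay=0.3, dt=0.01):
    """Smallest separation (m) between a hard-braking lead car and a
    follower that reacts `delay` seconds later. Negative => collision."""
    x_lead, x_follow = gap, 0.0
    v_lead, v_follow = v0, v0
    t, smallest = 0.0, gap
    while v_lead > 0 or v_follow > 0:
        v_lead = max(0.0, v_lead - decel * dt)
        if t >= delay:                       # braking message received
            v_follow = max(0.0, v_follow - decel * dt)
        x_lead += v_lead * dt
        x_follow += v_follow * dt
        smallest = min(smallest, x_lead - x_follow)
        t += dt
    return smallest

print(f"0.3 s delay:          {min_gap(delay=0.3):+.1f} m")  # ~ +3 m, clear
print(f"1.0 s delay (jammed): {min_gap(delay=1.0):+.1f} m")  # ~ -18 m, impact
```

At 30 m/s, each additional second of delay costs roughly 30 m of separation, which is why disrupted inter-vehicle communication in a tightly spaced platoon is so dangerous.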
Unfortunately, in an IVC network, there is no infrastructure within which to provide security services. Instead, nodes must rely on untrusted hosts to provide network management, deliver messages, and provide accurate control data for routing purposes. Moreover, the highly volatile nature of mobile computing makes it difficult to distinguish between malicious and normal behavior.
One potential solution to these challenges is CARAVAN, a communication architecture for reliable adaptive vehicular ad hoc networks [1, 2]. CARAVAN provides essential security services to prevent a wide array of attacks aimed at the wireless network. Furthermore, it functions in an efficient and scalable manner: it manages scarce bandwidth, minimizes collisions, and provides quality-of-service guarantees for the delivery of critical messages.
CARAVAN includes an explicit time-slot allocation media access protocol that mitigates the exposure of the IVC network to denial-of-service attacks. CARAVAN also provides cryptographic libraries to support digital signatures needed for the authentication and integrity of messages, as well as providing confidentiality for control messages; trusted computing platforms (TCPs) to ensure trustworthiness of peers; and spread spectrum techniques for anti-jamming capabilities.
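CARAVAN's slot-allocation protocol is not detailed here, but the generic sketch below illustrates why explicit time slots resist the contention-based denial of service described earlier: each vehicle transmits only in its assigned slot, so a greedy node cannot starve others of air time simply by refusing to back off. (Jamming remains a physical-layer threat, which is why spread spectrum techniques are still needed.) The slot length, frame size, and assignment rule are all assumptions, not CARAVAN's actual parameters.

```python
# Generic TDMA-style slot schedule (an illustration of explicit
# time-slot allocation, NOT CARAVAN's actual protocol). Each vehicle
# owns a recurring slot, so channel access is independent of the
# cooperative backoff that 802.11 DCF relies on.

SLOT_MS = 2       # slot length in ms (assumed)
FRAME_SLOTS = 16  # slots per frame (assumed)

def build_schedule(vehicle_ids):
    """Round-robin assignment of vehicles to slots; unassigned slots
    remain free for vehicles that join the network later."""
    return {i: vid for i, vid in enumerate(vehicle_ids)}

def slot_owner(schedule, slot):
    return schedule.get(slot % FRAME_SLOTS)

schedule = build_schedule(["car-17", "car-42", "car-99"])
for s in range(6):
    owner = slot_owner(schedule, s) or "(free)"
    print(f"t = {s * SLOT_MS:3d} ms  slot {s % FRAME_SLOTS:2d}: {owner}")
```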
The trust model could help evaluate CARAVAN against alternate proposals. But to do so, the model must be expanded to include metrics specific to the IVC network. Examples of trust model parameters specific to the IVC application are shown in Table 3.
The model could then be used to analyze current proposals for inter-vehicle networking protocols. If the trust model predicts that none of the proposals produces an acceptable level of trust, it might be used to generate the appropriate design requirements for a trusted IVC network. Protocols and their parameterization could then be specified to meet these design requirements.
This article has proposed an expanded trust model for distributed computer systems and highlighted some of the variables required in the model. The model incorporates aspects of system security, usability, reliability, availability, audit, and verification mechanisms, as well as user privacy concerns, user experience, and user knowledge. We hope this will lead to measurable systems that have trust built in, and to a scientific community and a public that demand such systems.
1. Blum, J. and Eskandarian, A. CARAVAN: A communications architecture for reliable adaptive vehicular ad hoc networks. In Proceedings of the Society of Automotive Engineers World Congress (Detroit, MI, Apr. 2006).
2. Blum, J. and Eskandarian, A. Adaptive space division multiplexing: An improved link layer protocol for inter-vehicle communications. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems (Vienna, Austria, Sept. 2005).
3. Camp, L.J. Design for trust. In R. Falcone, Ed., Trust, Reputation and Security: Theories and Practice. Springer-Verlag, Berlin.
4. Chaum, D. Secret ballot receipts and transparent integrity; www.vreceipt.com/article.pdf.
5. Daignault, M. and Marche, S. Enabling trust online. In Proceedings of the Third International Symposium on Electronic Commerce (Oct. 2002).
6. Dzindolet, M.T. et al. The role of trust in automation reliance. International Journal of Human-Computer Studies 58 (2003), 697–718.
7. Jian, J., Bisantz, A., and Drury, C. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4 (2000), 53–71.
8. Manchala, D.W. Trust metrics, models, and protocols for electronic commerce transactions. In Proceedings of the 18th International Conference on Distributed Computing Systems (May 1998).
9. Maryland General Assembly, Department of Legislative Services, Trusted Agent Report: Diebold AccuVote-TS Voting System (Jan. 20, 2004).
10. Muir, B. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37 (1994), 1905–1922.
11. Okada, M. et al. A joint road-to-vehicle and vehicle-to-vehicle communications system based on a non-regenerative repeater. In Proceedings of the 50th IEEE Vehicle Technology Conference (Amsterdam, 1999), 2233–2237.
12. Rotter, J.B. Interpersonal trust, trustworthiness, and gullibility. American Psychologist 35 (1980), 1–7.
Figure 1. Current functional models.
Figure 2. Expanded trust model including feedback mechanisms to other functional models.
Table 1. Generic trust model parameters.
Table 2. E-voting application-specific trust model parameters.
Table 3. IVC application-specific trust model parameters.