In U.S. defense research contracting, contracting officers must address and complete administrative details for contract acquisition. A primary source of support comes from the Defense Acquisition Deskbook (web2.deskbook.osd.mil/default.asp) and its FAQ lists. However, contracting officers typically ask specialized questions that require human experts for resolution. Automation is one way to reduce the burden on the human resources of contracting agencies during the contract acquisition process. Software agents provide an approach to automation that has proved successful in several areas (such as buying and selling online, coordinating software development projects, monitoring financial transactions, and retrieving information). The combination of past work on automation and the ability of agents to search vast repositories of information [7] suggests that an agent technique is promising for contract acquisition.
The Multi-Agent Contracting System (MACS) has been developed to automate responses to contracting officers' queries during the pre-award phase of the contract acquisition process. MACS agents are modeled after the expertise and activities required of contracting officers in defense contracting.
The key issues affecting system performance, and in turn influencing the acceptance of many agent systems, are their learning ability and user-friendly interfaces. Complete knowledge cannot be encoded into an intelligent agent system a priori; thus, systems must be able to learn and apply knowledge gained from experience to improve their performance [5]. To improve system performance and increase user acceptance, MACS incorporates both a learning capability and a natural language (NL) interface. These features contribute to the continual improvement of MACS; the NL interface enhances learning, and learning improves the NL interface via a positive feedback loop. This is particularly important in light of parser limitations [4] and the fact that user input may be interpreted inappropriately.
MACS was developed to evaluate the effect of learning and an NL interface on system performance. The MACS architecture implements a typical three-tiered brokered architecture containing nine agents: User, Facilitator, Natural Language Process (NLP), Bayesian Learning (BL), and five specialty agents (SAs). The User agent sits at the highest level, interfacing with users through keyword searches or NL queries. The Facilitator agent interfaces between the User agent and each of the other agents in MACS while also coordinating agent activities. The seven remaining agents interface with the Facilitator agent and are responsible for resolving user queries.
For keyword queries, the Facilitator agent forwards user queries from the User agent directly to the BL agent. For NL queries, the Facilitator agent forwards a user query to the NLP agent for parsing. The parsed message is returned to the Facilitator agent and then forwarded to the BL agent. In both cases, the BL agent creates an action plan that is issued to the Facilitator agent for completion. The action plan determines which SA(s) should be contacted to resolve a query. The Facilitator agent completes the plan by performing the necessary communication among the agents. This communication leads to solutions being sent from an SA to the Facilitator agent and then to the User agent. The Facilitator also forwards information regarding which agents responded to which queries to the BL agent, so it can learn response plans for similar queries in the future.
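The dispatch flow just described can be sketched roughly as follows. This is a minimal illustration, not the actual MACS implementation; all agent classes, method names, and signatures here are our own assumptions.

```python
# Illustrative sketch of the Facilitator's dispatch logic (names hypothetical).
def handle_query(query, is_natural_language, nlp_agent, bl_agent, specialists):
    """Route a user query through the MACS pipeline described above."""
    if is_natural_language:
        # NL queries are parsed first; keyword queries skip this step.
        query = nlp_agent.parse(query)
    # The BL agent returns an action plan: which SA(s) should be contacted.
    plan = bl_agent.plan(query)
    # The Facilitator completes the plan by querying each selected SA.
    answers = [specialists[name].resolve(query) for name in plan]
    # Responses are reported back to the BL agent so it can learn
    # response plans for similar queries in the future.
    bl_agent.record(query, plan)
    return answers
```

In this sketch the Facilitator is the only component that talks to every other agent, matching the brokered three-tier design.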
The five SAs in MACS relate to the pre-award phase of a contract and cover mutually exclusive areas of expertise: Forms, Justification, Evaluation, Synopsis, and Contracts. The Forms agent identifies the forms needed to complete procurement request packages. The Justification agent indicates when justification and approval are required to complete procurement requests. The Evaluation agent provides guidelines for proposal evaluation. The Synopsis agent identifies types of synopses for given procurement requests. Lastly, the Contracts agent identifies the type and nature of contracts.
Because the domain knowledge of the SAs is mutually exclusive, direct coordination among SAs is not required. Instead, the Facilitator agent coordinates the SAs. The learning capability allows the BL agent to learn which SA(s) should receive incoming messages in order to minimize the number of communications required among agents. Information learned by the BL agent is passed to the Facilitator agent for efficient query resolution.
Implications of the MACS architecture include:
Because intelligent agent performance can be sensitive to the initial distribution of knowledge among agents in a multiagent system [2], and systems built on a fixed knowledge base tend to degrade significantly as the limits of that knowledge are reached [5], we focus on learning for enhanced performance, especially in combination with an NL interface. Rather than learning the preferences of other agents, MACS learns the abilities of other agents, as well as user objectives, to enhance its own performance and efficiency.
Learning occurs in two parts of MACS: Bayesian learning applied in the BL agent and reinforcement learning applied in the NLP agent. Bayesian learning applies a Bayesian model for learning which of the SAs should receive incoming queries, in the following steps:
1. Parsed output from NLP agent sent to BL agent. For each SA:
1.1. Calculate the percentage of time each keyword appears in prior queries;
1.2. Calculate the likelihood that a new query, q, corresponds to the domain knowledge of that SA by multiplying percentages calculated in 1.1 that correspond to q;
1.3. Apply the Bayesian formula;
1.3.1. Multiply the likelihood that q should be sent to a particular SA given no prior queries (prior probability) by the result from 1.2;
1.3.2. Sum all calculations from 1.3.1;
1.3.3. Divide each individual result from 1.3.1 by 1.3.2;
1.4. Divide calculation for each SA from 1.3.3 by prior probability;
1.5. Sort results in descending order;
1.6. Rank SA according to result from 1.4 (highest = rank 1);
1.7. If SA with rank = 2 is within 0.001% of SA with rank = 1;
1.7.1. Then Send q to all SAs with rank = 1 or rank = 2;
1.7.2. Else Send q to all SAs with rank = 1; and
2. Update prior probabilities (learning) with result in 1.4.
The updated probabilities are used as a basis for routing user queries to SAs in the future.
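The routing computation in steps 1.1–2 can be sketched as the naive-Bayes classifier below. The class and method names are our own, and the Laplace-style smoothing and exact tie-margin handling are illustrative assumptions rather than details confirmed by the MACS description.

```python
from collections import Counter

class BayesianRouter:
    """Sketch of the BL agent's query-routing computation (steps 1.1-2)."""

    def __init__(self, specialists, priors=None):
        self.specialists = specialists
        # Prior probability that a query belongs to each SA (uniform to start).
        self.priors = priors or {sa: 1.0 / len(specialists) for sa in specialists}
        # Keyword counts and query totals per SA, built from prior queries.
        self.keyword_counts = {sa: Counter() for sa in specialists}
        self.query_counts = {sa: 0 for sa in specialists}

    def record(self, sa, keywords):
        """Store a resolved query so future routing can learn from it."""
        self.query_counts[sa] += 1
        self.keyword_counts[sa].update(set(keywords))

    def route(self, keywords, tie_margin=1e-5):
        # Steps 1.1-1.2: per-SA likelihood = product of keyword frequencies.
        likelihood = {}
        for sa in self.specialists:
            n = max(self.query_counts[sa], 1)
            p = 1.0
            for kw in keywords:
                # Smoothing (an assumption) keeps unseen keywords from zeroing p.
                p *= (self.keyword_counts[sa][kw] + 1) / (n + 2)
            likelihood[sa] = p
        # Step 1.3: Bayesian formula -- normalize prior * likelihood.
        joint = {sa: self.priors[sa] * likelihood[sa] for sa in self.specialists}
        total = sum(joint.values())
        posterior = {sa: joint[sa] / total for sa in self.specialists}
        # Steps 1.4-1.6: divide by the prior and rank in descending order.
        score = {sa: posterior[sa] / self.priors[sa] for sa in self.specialists}
        ranked = sorted(score, key=score.get, reverse=True)
        # Step 1.7: send to the runner-up too if it is within 0.001%.
        targets = [ranked[0]]
        if len(ranked) > 1 and \
                score[ranked[0]] - score[ranked[1]] <= tie_margin * score[ranked[0]]:
            targets.append(ranked[1])
        # Step 2: update the prior probabilities (learning).
        self.priors = posterior
        return targets
```

For example, after a few recorded queries, `route(["form", "procurement"])` would select the Forms agent and shift the priors toward it for future routing.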
Reinforcement learning [6] involves agents acquiring new knowledge through feedback from previous experience and the environment. A reinforcement signal results from an agent's actions, and the agent learns to improve its performance based on these signals [1]. MACS learns what a user is querying based on similar past questions. When a user inputs a new NL query, the query either does or does not parse. If it parses, the Facilitator agent forwards the query to the BL agent to determine which SA(s) should receive the query. If it does not parse, reinforcement learning is invoked to help resolve the query.
First, a cache is searched to determine whether the NLP agent has already learned how to respond to the unparsable query. If so, the NLP agent applies its previous learning and exchanges the unparsable query for a rephrased, parsable one that asks the same question. Feedback can be provided to identify parsed queries that do not adequately represent the current query; in this case, the cached query is not used. If the unparsable query is not present in the cache, the user is asked to rephrase the query. Once rephrased, the original and the new queries are sent to the NLP agent. If the rephrased question parses, it serves as feedback for the future. In this case, MACS learns from user reinforcement and can then resolve the original query in future user sessions without requiring feedback from the user, thus reducing the burden on users of having to reword unparsable queries. A particular user could still be burdened by the need to rephrase a query several times before one parses; however, the robustness of the NLP agent in handling many grammatical forms suggests this situation would be the exception rather than the norm [8].
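The cache-and-reinforce loop can be sketched as follows. The class and method names are illustrative assumptions; `parser` stands for any callable that returns a parsed form on success and `None` on failure.

```python
# Minimal sketch of the NLP agent's reinforcement cache (names hypothetical).
class RephraseCache:
    def __init__(self, parser):
        self.parser = parser  # returns a parsed form, or None if unparsable
        self.cache = {}       # unparsable query -> learned parsable rephrasing

    def parse_with_learning(self, query):
        parsed = self.parser(query)
        if parsed is not None:
            return parsed, None
        # Query failed to parse: check whether a rephrasing was learned earlier.
        if query in self.cache:
            return self.parser(self.cache[query]), None
        # Otherwise the user must supply a rephrasing (the reinforcement signal).
        return None, "please rephrase"

    def reinforce(self, original, rephrased):
        # Positive reinforcement: store the mapping only if the rephrasing parses.
        if self.parser(rephrased) is not None:
            self.cache[original] = rephrased
            return True
        return False
```

Once `reinforce` succeeds, the originally unparsable query resolves silently in later sessions, which is the burden reduction described above.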
The NLP agent in MACS is an ATTAIN parser, a package of NL Open Agent Architecture (OAA) agents providing parsing and translation of English sentences into the Interagent Communication Language (ICL). ICL expressions are internal OAA representations of the NL query that agents can act on. These expressions are sent to the SA(s) for query resolution.
ATTAIN allows for both active and passive voice constructions, extensive use of modals (should, could, would), and long verb predicates (long lists of noun phrases and prepositional phrases after the verb). These features enhance the ability of MACS to handle the types of queries encountered in the contracting domain, compared to previous versions of MACS. Examples of the types of questions the upgraded MACS system might parse include: "Which contract type do I submit if my proposal deals with university research?" and "How do I determine the scoring of evaluation criteria for competitive solicitations?"
However, ATTAIN is unable to handle conditional phrases to the extent needed by MACS. It also has problems handling numbers that function as modifiers (such as "5 hours") and cannot use certain special characters (such as & and $). These problems are overcome by modifying queries into parsable phrases: multiterm tokens that include numbers are rendered as single-term tokens by inserting underscores between the terms (such as DD_Form_1498), and & and $ are expanded to "and" and "dollars," respectively.
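These workarounds amount to a small pre-processing pass over each query. The sketch below is our own simplification; the regular expressions handle only patterns like the examples above and are not the actual MACS rules.

```python
import re

def make_parsable(query):
    """Rewrite tokens the parser cannot handle into parsable equivalents."""
    # Expand special characters the parser cannot use.
    query = query.replace("&", " and ").replace("$", " dollars ")
    # Join form names ending in a number ("DD Form 1498" -> "DD_Form_1498").
    query = re.sub(r"\b(\w+)\s+(Form)\s+(\d+)\b", r"\1_\2_\3", query)
    # Join numbers to the word they modify ("5 hours" -> "5_hours").
    query = re.sub(r"\b(\d+)\s+([A-Za-z]+)", r"\1_\2", query)
    # Collapse any whitespace introduced by the replacements.
    return re.sub(r"\s+", " ", query).strip()
```

For instance, `make_parsable("Is DD Form 1498 due in 5 hours & does it cost $50?")` yields `"Is DD_Form_1498 due in 5_hours and does it cost dollars 50?"`.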
A series of sample screens illustrates how MACS works [3]. The user is presented with the NLP submission form, a Web page that manages a session. The user, U, submits an unparsable query, say, "What justification type do I need if I am working with a sole source contract?" MACS asks the user to rephrase and categorize the query (see Figure 1). The scenario might go like this:
User U now submits a sentence that is answerable: "What do I include in a sole-source justification?" The answer is presented to U, and reinforcement learning is invoked at the bottom of the screen.
A new user, V, submits the same query (see sentence 2 in Figure 2) that was previously unparsable. MACS has learned to answer this query (see Figure 3). The user is notified that the query was replaced with its synonymous counterpart.
This scenario highlights several advantages of MACS.
The MACS multiagent system is designed for learning, using NL processing to enhance that learning. Both Bayesian learning and the NL interface function with the system architecture to improve system performance. The modular design makes it easy to extend or upgrade as necessary, increasing the useful lifetime of MACS while reducing the burden on human contracting officers.
The MACS features explored here suggest ways in which multiagent systems can become even more useful. They are particularly promising because defense contracting relies heavily on people, an expensive and valuable resource. The techniques can also be extended to other application areas. While MACS is a work in progress, the prototype has served to identify key issues affecting system performance and to provide directions for addressing them.
1. Jouffe, L. Fuzzy inference system learning by reinforcement methods. IEEE Trans. Syst., Man, Cybernet. Part C: Appl. Rev. 28, 3 (1998).
2. MacIntosh, J., Conry, S., and Meyer, R. Distributed automated reasoning: Issues in coordination, cooperation, and performance. IEEE Trans. Syst., Man, Cybernet. 21, 6 (1991), 1307–1316.
3. Maulsby, D. and Witten, I. Teaching agents to learn: From user study to implementation. IEEE Comput. 30, 11 (1997), 36–44.
4. Nardi, B., Miller, J., and Wright, D. Collaborative, programmable intelligent agents. Commun. ACM 41, 3 (Mar. 1998), 96–104.
5. Odetayo, M. Knowledge acquisition and adaptation: A genetic approach. Expert Syst. Applic. 12, 1 (1995), 3–13.
6. Prasad, M. and Lesser, V. Learning situation-specific coordination in cooperative multi-agent systems. Auton. Agents Multi-Agent Syst. 2 (1999), 173–207.
7. Wang, H., Mylopoulos, J., and Liao, S. Intelligent agents and financial risk monitoring systems. Commun. ACM 45, 3 (Mar. 2002), 83–88.
8. Yoon, V., Rubenstein-Montano, B., Wilson, T., and Lowry, S. Development of a natural language interface for the Multi-Agent Contracting System (MACS). Working paper, University of Maryland, Baltimore County, 2003.
©2005 ACM 0001-0782/05/0300 $5.00