Software agents are intended to perform certain autonomous tasks on behalf of their users. In many situations, however, the agent's competence might be insufficient to produce the desired result. But instead of simply giving up and leaving the whole task to the user, a much better alternative is for the agent itself to identify the cause of the problem, communicate it to another agent, or to the user, that is able (and willing) to help, and use the results to proceed toward the original goal.
Even if the user has to intervene from time to time, most users do so happily if it keeps an automated agent from running into trouble. The paradigm of programming by demonstration (PBD), also known as programming by example, provides a framework for the kind of dialogue required in such situations. The user and the agent pool their individual abilities not only to overcome the current problem but to extend the agent's skills. User intervention that teaches an agent new skills may avoid similar difficulties, and additional training, in the future.
Our system, called Trainable Information Assistants (TrIAs), released in 1999 as part of the Planning Assistance for the Net project funded by the German Ministry of Education, Science, Research, and Technology, introduces a new application for PBD techniques: generating scripts for extracting information from Web sites. Unfortunately, many Web sites change their look and structure frequently, frustrating agents that are not flexible enough to deal with unexpected situations. Wouldn't it be helpful if an agent could tell the user about its limitations and ask what to do now and in similar situations in the future? Here, we show how such agents can collaborate in the training dialogue, guiding users to teach them the right lessons to solve a particular problem.
Many services on the Web act as "users" of other Web pages they exploit as resources; these services include "metasearch engines," such as Metacrawler, which queries search engines, and shopping-assistant agents, such as Excite's Jango and Frictionless Commerce's Tete-a-Tete for querying merchant sites. These "metaagents" look up information by issuing standard HTTP requests, just as a user would through a browser. The resource Web site then returns a page containing the relevant information, including, say, search hits, merchandise, and price listings, but formatted for human readability, rather than for satisfying some arcane database format. The agent then extracts the useful information from the page, processes it, and displays it for the end user.
Assume a user is preparing for a trip. Using a Web browser, he or she enters the relevant data, such as cities to be visited and budget limitations, leaving the rest to a Web-based travel agent, which then fills in the missing details and offers suggestions about the journey. The agent has to use information it fetches from the Web in real time as the trip is being planned; examples include departure times in train or flight schedules, prices for hotel rooms, and entertainment events in the destinations on the itinerary. If all the answers to the travel agent's questions are indeed found at the expected Web sites, the user is presented the final result in his or her own browser.
An interesting situation occurs whenever a relevant piece of information "should have been there" but cannot be found. Typically, the reason for such failure involves some modification of the site's layout or structure. In today's metasearch engines and shopping agents, the poor guy in charge of maintaining the database of wrappers, or patterns specifying the format of Web pages, has to regularly program yet another procedure for dealing with the Web site's new look. Some newer Web agents also have semiautomated techniques for "learning" new wrappers over time. But they can't help our human traveler immediately, as he or she stumbles upon the Web site's new layout. Why not have the user assume some system maintenance (by updating the agent's knowledge base in an appropriate way) in exchange for a good answer (a travel suggestion that might otherwise be impossible for the agent to construct)? Hopefully, other users are doing the same, thus improving overall system performance.
To guide the training dialogue, the document is opened in the user's browser where the user can mark the relevant portion of text or pictures to be extracted and possibly give the system hints as to how to identify a particular piece of information. In the end, a new information-extraction procedure is synthesized and inserted into the database for future use. The Web query language HyQL is used as the target language for programming the information-extraction procedures [2]. HyQL allows both navigation through the Web and characterization of relevant document parts at an abstract level, such that many document modifications do not affect these procedures.
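The HyQL syntax itself is described elsewhere [2]; the sketch below is not HyQL but a minimal Python illustration, with hypothetical names, of the underlying idea: a wrapper that identifies a document part by abstract features such as element type, color, and capitalization, rather than by its literal content or fixed position, and that therefore survives many cosmetic changes to the page.

# Illustrative sketch (not HyQL): a wrapper described by abstract features of the
# target text rather than by its exact position or literal content. All names
# here are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    tag: str          # HTML element name, e.g. "b" or "td"
    color: str        # rendered text color, e.g. "red"
    uppercase: bool   # True if the text is all capitals
    text: str

def extract(document, features):
    """Return the text of the first node whose features match the wrapper."""
    for node in document:
        if all(getattr(node, name) == value for name, value in features.items()):
            return node.text
    return None

# A page before and after a cosmetic redesign: the listing moves, but the
# abstract characterization ("bold, red, all caps") still identifies the artist.
page_v1 = [Node("td", "black", False, "Tickets"), Node("b", "red", True, "WAYNE HANCOCK")]
page_v2 = [Node("b", "red", True, "DOVETAIL JOINT"), Node("td", "black", False, "Tickets")]

wrapper = {"tag": "b", "color": "red", "uppercase": True}
print(extract(page_v1, wrapper))  # WAYNE HANCOCK
print(extract(page_v2, wrapper))  # DOVETAIL JOINT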
The agent tries to suggest the actions that promise to lead to the most robust wrapper. At each step, the user is given the option of overruling the agent's suggestions. The agent is responsible for suggesting correct actions or hints (depending on the document's structure), while the user is expected to point to only relevant data. The agent takes into account the user's level of expertise in wrapper design and keeps track of how often it has "bothered" the user recently, in order to decide whether or not it is appropriate to take the initiative at each opportunity for action.
The actions a user may perform include marking the text or image to be extracted, letting the agent characterize the selection, suggesting or defining a landmark, defining a context, finishing the dialogue, and restarting the interaction.
Through each of these actions, the user is guided by submenus showing only the subactions allowed in a given situation. Minimal input data for the agent is the text portion the user wants the agent to extract from the Web page by way of the wrapper. Users who do not want to spend time helping the agent with wrapper construction (to make it more robust) may tell the agent to generate a wrapper without further information. The result is a wrapper that is valid for (at least) the given example.
As an example of how to get an agent to fetch relevant information, consider how the agent can be taught to extract a musical artist's name from an online concert listing. Assume the agent is preparing a list of cultural events in a particular city as part of its planning for a user's trip, and the user wants it to extract the artist's name "Wayne Hancock" from the listing (see Figure 1). There are many ways to characterize this text within the Web page, including that it is in red, is all capital letters, and is to the right of a picture. After demonstrating the Wayne Hancock example, the user would expect the generated pattern to be able to pick out the artist "Dovetail Joint" from the next listing.
The main window of the dialogue in the figure shows that the agent suggests five ranked actions: characterize selection; suggest landmark; suggest context; finish dialogue; and restart the interaction. The recommended action in this case is to let the system characterize the selection.
The user might let the system suggest some landmarks he or she can choose from. The user can also decide to define a landmark by overriding the system's suggestions. The system might then judge the definition of a context to be of no use, so only the corresponding user option is presented in the dialogue.
The display window gives an overview of previously defined wrappers, in terms of the corresponding concepts, such as artist, and allows for the display of the respective selection, context, and landmark in the HTML document.
Dialogue and wrapper construction involve the generation of a HyQL script from wrapper building blocks that remain unaffected by minor alterations of the Web page. A scoring function reflecting the estimated "robustness" of a particular wrapper is applied to each of the candidate wrappers. The agent disregards wrappers scoring below a certain threshold. A wrapper's place in the hierarchy of wrapper classes contributes to its score; each class has a specific valuation reflecting its general utility. Each class is a kind of template containing wrapper building blocks with parameters that are filled in for a particular wrapper. For example, a wrapper using some kind of additional structural information, such as a heading, is generally more robust than one that scans the document for an exact HTML string.
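As a rough illustration of such a class hierarchy, the following Python sketch assumes made-up class names, valuations, and a made-up threshold; the actual classes and values used in TrIAs may differ.

# Minimal sketch of a wrapper-class hierarchy with class-specific valuations
# and a threshold filter; all names and numbers are illustrative assumptions.

class Wrapper:
    valuation = 0              # class-specific score reflecting general robustness

    def __init__(self, **params):
        self.params = params   # template parameters filled in for one page

class ExactStringWrapper(Wrapper):
    valuation = 1              # brittle: scans for a literal HTML string

class StructuralWrapper(Wrapper):
    valuation = 5              # more robust: anchors on structure such as a heading

THRESHOLD = 3

def viable(candidates):
    """Discard candidate wrappers whose class valuation falls below the threshold."""
    return [w for w in candidates if w.valuation >= THRESHOLD]

candidates = [ExactStringWrapper(literal="<b>WAYNE HANCOCK</b>"),
              StructuralWrapper(heading="Concerts", occurrence=1)]
print([type(w).__name__ for w in viable(candidates)])  # ['StructuralWrapper']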
A cost function estimates the computation time required to navigate through the document up to the respective wrapper selection. The agent also searches for the document's prominent features, further altering the assessment, as the wrapper localization and the selection valuation both have to be modified. Context and landmarks also have to be taken into account; they are characterized and assessed like wrapper selections, but without a dialogue, to avoid multiple layers of discussion with the user. A wrapper containing user-defined parts should receive a higher score in light of the user's expertise. This measure reflects the agent designer's belief that the user should have overall control and has good reason, given greater weight if the user is viewed as a wrapper-design expert, for not accepting the agent's suggestions.
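One plausible way to combine these factors is sketched below in Python; the additive form, the weights, and the parameter names are assumptions made for illustration, not the published TrIAs scoring function.

# Assumed combination of the assessment factors described above; higher is better.

def assess(selection_value,
           localization_cost,
           landmark_value=0.0,
           context_value=0.0,
           user_defined=False,
           user_expertise=0.5):
    """Score a candidate wrapper.

    selection_value    -- valuation of the selection characterization
    localization_cost  -- estimated effort to navigate to the selection
    landmark_value     -- valuation of an optional landmark (assessed like a selection)
    context_value      -- valuation of an optional context
    user_defined       -- True if the user overruled the agent's suggestions
    user_expertise     -- 0..1, how much the user is trusted as a wrapper designer
    """
    score = selection_value + landmark_value + context_value - localization_cost
    if user_defined:
        score += 2.0 * user_expertise   # reward user-defined parts, more so for experts
    return score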
The agent has to consider all reasonable combinations of landmark, context, hint, and selection characterization. Each candidate wrapper is assessed in light of the criteria just described: the valuation of its wrapper class and selection characterization, the cost of localizing the selection within the document, the prominent document features it exploits, its context and landmarks, and the presence of user-defined parts.
In the scenario depicted in the figure, the agent computes all combinations of selection characterization, hints, context, and landmark. The agent characterizes the selection as text marked in HTML as a boldface text environment with a localization cost of 10, as it is the 10th occurrence of such an environment, starting from the document head, without landmark or context. A second wrapper weighs other text features, including that it is red and has a particular font size, unlike the text around it. The localization cost reduces to 1, as it is the first such environment in the document, and the valuation of the selection characterization increases relative to the first wrapper. Note that the definition of context cannot improve the wrapper assessment according to the agent's heuristics, so only the corresponding user action is possible.
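For a rough sense of the comparison, the snippet below plugs the two localization costs from this scenario (10 and 1) into an assumed difference-style score; the selection valuations are invented numbers used only to illustrate why the second wrapper wins.

# Illustrative numbers only; the costs 10 and 1 come from the scenario above.

def score(selection_value, localization_cost):
    return selection_value - localization_cost

bold_env = score(selection_value=4, localization_cost=10)  # 10th boldface environment
red_text = score(selection_value=6, localization_cost=1)   # 1st red, larger-font text
print(bold_env, red_text)  # -6 5: the feature-based wrapper is clearly preferred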
Referring to a set of wrapper assessments, the agent has to decide which actions it should suggest to the user. Starting from the assessment of the currently favored wrapper (0 at the beginning), the ranking of an action A is computed using several built-in evaluation steps.
Using these steps to rank an action means that suggested action A should add to the (currently) best assessment, though the agent also has to account for the individual user's difficulty executing the action. However, the user-dependent annoyance level might be reached if, say, the agent notices it has bothered the user a certain number of times with information requests. Therefore, actions entailing further definitions are given a lower ranking, so the agent proposes to either take the best wrapper available or switch to an automatic mode in which it constructs a wrapper autonomously without feedback.
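A minimal sketch of this ranking policy, under assumed weights and a hypothetical annoyance limit, might look as follows; sorting candidate actions by this rank reproduces the behavior described above, where actions requiring further definitions drop behind "finish dialogue" or the automatic mode once the user has been interrupted too often.

# Sketch of the ranking idea under assumed weights: an action's rank grows with
# the assessment it is expected to add, shrinks with the effort it demands of
# this user, and collapses for information requests once the user seems annoyed.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_gain: float   # assessment the action is expected to add
    user_effort: float     # estimated difficulty for this particular user
    needs_input: bool      # does it require another definition from the user?

def rank(action, best_assessment, interruptions, annoyance_limit=3):
    score = best_assessment + action.expected_gain - action.user_effort
    if action.needs_input and interruptions >= annoyance_limit:
        score -= 100.0     # stop asking; prefer finishing or automatic mode
    return score

actions = [Action("characterize selection", 5.0, 1.0, True),
           Action("suggest landmark",       3.0, 2.0, True),
           Action("finish dialogue",        0.0, 0.5, False)]

for a in sorted(actions, key=lambda a: rank(a, best_assessment=0.0, interruptions=0),
                reverse=True):
    print(a.name)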
Returning to the online trip-planning example, the agent produces the highest assessment for the wrapper incorporating as a hint the search for the uppercase red text and, as a landmark, an image in front of the text. The action "characterize selection" receives the highest ranking, as the wrapper containing only the hint has a higher score than the one with only the landmark. Note that the aim of the agent is to then lead the user to the action "suggest landmark" in order to achieve a high assessment of the wrapper with both hint and landmark. Each suggested action might therefore involve follow-up actions intended by the agent. The system does not propose a context, as the ranking is too low, but the user is given the opportunity to define one.
After each user action, the agent computes a new ranking that incorporates the information collected so far. Having achieved the best wrapper assessment according to its own quality criteria, the agent suggests to the user that he or she accept this wrapper and finish the dialogue. Using the wrapper templates and the collected information, the agent constructs a concrete HyQL script for extracting the specified HTML portion, which it hands over to the information broker module in TrIAs.
The PBD agent thus makes possible the supervised construction of wrappers, the completely user-guided generation of wrappers (by overruling system suggestions), and any combination of system guidance and user initiative.
Although we have not yet performed a rigorous evaluation of these dialogue strategies, we have derived a few general lessons from our first version of the TrIAs PBD environment [2]. This version did not include ranked suggestions for future user actions but did include a simple graphical interface that left all decisions exclusively to the user.
In our own interaction with the system, we encountered two notable questions:
What should I do next? To decide, users need some idea of the benefits a learning agent would gain from their actions. As the internal processes are deliberately hidden from the user in most PBD systems, it is usually not possible for the user to make an informed decision on which steps should be taken next.
Can I stop? Should the user continue the training process at all? After all, it makes no sense to provide more and more information to the learning agent if it already has enough about the task at hand to generate a good solution. To decide, the user has to reason about the potential effect of further actions on the result of the agent's education.
Total freedom for users is useful only if they have sufficient background knowledge to make an informed decision. As we did not want to bother users with too many technical details, the training dialogue addresses these questions in three ways: by ranking the possible next actions, by suggesting when the dialogue can reasonably be finished, and by offering to complete the wrapper autonomously without further input.
Note, too, that users can ignore all system suggestions whenever they feel the need to do something completely different.
One peculiarity of the TrIAs application scenario is the exchange of roles of service provider and consumer between user and system. Instead of concentrating exclusively on the learning agent's state, we had to account for the user's "felicity," Kurt VanLehn's term for a readiness to learn information relevant to a problem at hand [5].
However, the thorough design of trainable components can benefit more than just applications in which a possibly unwilling user happens to be involved in a training session. We used the same PBD approach in TrIAs to implement the InfoBeans system (also released in 1999), in which even naive users configure their own Web-based information services to satisfy their personal information needs [3]. As the InfoBeans application adheres to the tradition of PBD systems in which the user trains the system to recognize certain types of situations to be managed autonomously, we removed the "annoyance" factor when assessing the expected utility of a system suggestion [4]. Experiments and experience will show which of these versions achieves greater user acceptance.
We have sought here to make a case for using PBD techniques not only during the initial agent-training phase but also during execution of procedures the agent has so acquired. This scenario requires the system to account for its users much more carefully than is necessary in conventional PBD scenarios. After all, users have to be able to repair the faulty behavior of agents from which they expect some useful service.
Early informal tests with nonprogrammer users indicate that the training mechanism we've outlined enables many of them to deal with the subtleties of procedures for identifying problems involved in extracting information from Web-based sources. Besides the application scenario we sketched here, we can imagine many other uses of what we call "instructable" information agents, ranging from intelligent notification services to data warehouses.
1. Bauer, M. and Dengler, D. InfoBeans: Configuration of personalized information services. In Proceedings of the International Conference on Intelligent User Interfaces (IUI'99), M. Maybury, Ed. (Los Angeles, Jan. 5-8). ACM Press, New York, 1999, 153-156.
2. Bauer, M. and Dengler, D. TrIAs: Trainable information assistants for cooperative problem solving. In Proceedings of the 1999 International Conference on Autonomous Agents (Agents'99), O. Etzioni and J. Müller, Eds. (Seattle, May 1-5). ACM Press, New York, 1999, 260-267.
3. Bauer, M., Dengler, D., and Paul, G. Instructible agents for Web mining. In Proceedings of the International Conference on Intelligent User Interfaces (IUI'00), H. Lieberman, Ed. (New Orleans, Jan. 9-12). ACM Press, New York, 2000, 21-28.
4. Lieberman, H., Nardi, B., and Wright, D. Training agents to recognize text by example. In Proceedings of the 1999 International Conference on Autonomous Agents (Agents'99), O. Etzioni and J. Müller, Eds. (Seattle, May 1-5). ACM Press, New York, 1999, 116-122.
5. VanLehn, K. Learning one subprocedure per lesson. Artif. Intell. 31, 1 (1987).