Because of its inexpensive and ubiquitous nature, many users regard the Internet as the first and only point of access to information to meet their needs. Yet it is generally known that information accessed via the Internet is ill formed, unorganized, and difficult to access. This leads to a variety of problems and frustrations for users, especially novices. The primary, and sometimes only, means of accessing much of this information is the Internet search engine. When novices use search engines without strong mental models for information retrieval, especially in complex environments such as the Internet, they are not likely to achieve success at information gathering.
Mental models are cognitive constructs of knowledge and experiences used to interpret the world [7]. It has been suggested that a mental model can be developed or strengthened by learning and practicing skills [4]. Mental models represent a collection of knowledge that builds a foundation of understanding and provides the tools for problem solving in a given domain. This is different from a system model, which is a conceptual understanding of how a system works (often called a conceptual model). Users often understand their mental models only through conceptualization as system models. For instance, people might say, "My mental model for searching for information on the Internet is using Google," when in actuality they are describing a system model. System models equate to analogies of specific systems, whereas mental models accumulate broader knowledge in order to anticipate, interpret, and solve complex problems in specific subject domains. A mental model for searching on the Internet encompasses several overlapping domains, including searching in general, IT skills, and knowledge about the subject being searched [1].
Because mental models are complex and difficult to articulate, we must develop ways to identify and characterize them. One approach is to capture elements of domain knowledge as they are being applied or put to use. Such elements are thought to include understanding the structure and functions of a system, as well as the relationships between them and their outcomes [5]. Task knowledge structures (TKS) theory argues that the knowledge people possess and employ while performing tasks can be represented in a structured way. A TKS is a summary of the knowledge that can be called upon when performing tasks or solving problems associated with a task [3]. The notion of linking TKS to mental models is based on the understanding that such models are built on schemas that equate closely to knowledge structures [9]. We believe that TKS theory, employing knowledge analysis of tasks (KAT) and applied cognitive task analysis (ACTA), can help to represent mental models and give insight into problems with information retrieval in environments like the Internet.
Generally speaking, the method for identifying TKS consists of collecting data related to a task in a domain; analyzing the data (KAT); and creating a model of the task domain [5]. TKS has been used in HCI work to elicit user requirements in designing systems.
The collection step requires selecting and implementing a data-gathering methodology. Since a specific one is not prescribed, we used ACTA. This methodology provides a framework for eliciting cognitive aspects of task performance via interviewing techniques. Originally designed to elicit knowledge from experts, ACTA comprises a four-step approach: an interview to plot the broad outline of the task; a knowledge audit to identify cues and difficulties associated with cognitive skills; a simulation scenario to gather information about cognitive processes while the participant performs the task; and a cognitive demands table built from the information gathered in the analysis [6]. ACTA was chosen because of its emphasis on identifying cognitive aspects of tasks and its structured methodology, as shown in Table 1. Results of the ACTA are then analyzed using KAT.
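The cognitive demands table that closes an ACTA can be thought of as a structured record, one entry per difficult cognitive element of the task. The following is a minimal sketch in Python; the field names and the sample entry are illustrative assumptions, not the instrument used in this study.

from dataclasses import dataclass

@dataclass
class CognitiveDemand:
    demand: str                      # difficult cognitive element of the task
    why_difficult: str               # drawn from the knowledge audit
    common_errors: list[str]         # observed in the simulation scenario
    cues_and_strategies: list[str]   # cues and strategies participants report

entry = CognitiveDemand(
    demand="Judge relevance of items in a results list",
    why_difficult="Novices assume the engine extracts semantic meaning",
    common_errors=["Abandons the search after the first screen of results"],
    cues_and_strategies=["Query terms appearing in the result summary"],
)
print(entry.demand)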
The first step in KAT is to break a task down into its component parts (objects and actions, task procedures, and goals and subgoals) for manipulation during analysis. Objects and actions must be described because they make up the critical elements of a task. Task procedures must be identified to ascertain the knowledge and strategies used in performing a task or procedure. Goals and subgoals are specified to help determine the hierarchy of the parts of a task.
The second step is to identify the representative, central, and generic properties of tasks. Elements critical to accomplishing a task are separated from merely helpful (representative or generic) ones to identify a core set used in the task model (see Table 2).
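A minimal sketch of this separation, assuming each task element is simply tagged as central (critical to the task) or generic (helpful but substitutable); the element names here are illustrative, not drawn from the study's task model.

# Tag task elements and carry the central ones forward as the core set.
elements = {
    "enter query terms in the Search box": "central",
    "click the Submit button": "central",
    "bookmark a useful Results page": "generic",
    "open a result in a new window": "generic",
}

core_set = [name for name, kind in elements.items() if kind == "central"]
print(core_set)  # the core set used in the task model (see Table 2)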
TKS then prescribes a method for developing a task model based on a procedural substructure and taxonomy. The taxonomy establishes a dictionary of the terms used, such as Submit button, Search box, and Results page. A general pseudo-code notation is used in which conditional statements and indentation represent the relationships between sub-procedures.
Resolve Page-loading Problem Procedure
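As an illustration of how such a procedure might read in this notation, here is a minimal sketch rendered as Python, so that conditionals and indentation carry the sub-procedure structure; the specific conditions and sub-procedure names are assumptions for illustration.

def reload_page(url):
    print(f"Reload {url}")

def go_back_to_results_page():
    print("Return to the Results page")

def select_next_result():
    print("Select the next result on the Results page")

def resolve_page_loading_problem(status, url):
    # Each indented branch corresponds to a sub-procedure invoked
    # under a condition, mirroring the TKS pseudo-code notation.
    if status == "timeout":
        reload_page(url)
    elif status == "not found":
        go_back_to_results_page()
        select_next_result()
    else:
        print("Wait briefly before retrying")
        reload_page(url)

resolve_page_loading_problem("timeout", "http://example.com/page.html")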
It is helpful to note which knowledge is used at various points of the TKS, as shown in Table 3. This is generated from the original ACTA, or in some cases, may involve an additional ACTA interview after the TKS is specified.
Thirty-one students at Staffordshire University (U.K.) participated in ACTA interviews in May 2001. Participants included 24 males and seven females, ranging in age from 18 to 50. They were asked questions that elicited information related to the cognitive processes involved in searching, and their responses were recorded, transcribed, and analyzed.
Novices seem to have difficulty articulating knowledge behind their skills, partly because they have little experience on which to base descriptions. Because ACTA was designed to elicit information from more articulate experts, it had to be altered. In particular, the questions asked novices for their simple understanding, and sometimes guesses, about why something did or didn't work. Responses reflected "guessing," trial-and-error, and selection based on personal preferences. Initial attempts at trial-and-error are important precursors to experience and expertise. Thus, rather than ask how a technique has helped in the past, novices were asked, "Why did you do that just now, and why at this point?"
Overall, the ACTA interviews revealed strong novice misconceptions about how search engines work (for example, beliefs that all authors of Web pages must register their sites with search engines to be indexed; that search engines can synthesize semantic meaning from Web pages; and that there is no difference between word strings and directory categories). Possibly a stronger conceptual or system understanding of search engines would help to strengthen weak mental models.
Detailing the TKS makes it easier to pinpoint where knowledge is applied within a specific procedure of a subgoal. Using the ACTA interviews alone, one tends to get generalizations and idealized knowledge. Also, it isn't initially clear whether certain responses reflect behavior prompted by the interview itself or the participants' underlying mental models.
For instance, it appears that at a point where only the first few results from a search have been scanned using the Scan Results List Procedure, novices quickly consider trying a different search or giving up. During interviews, novices mention they are "being lazy" or "in a hurry," with some claiming they would "never go beyond the first page or two" and others stating they "wouldn't scroll past the first screen." Analysis indicates this abruptness could relate to a weak or incomplete mental model of how search engines work.
One aspect seems to be a misunderstanding of how many search engines there are, and how much overlap there is among their indexes of Web files. This is evidenced by the heavy reliance on a single search engine in the Determine Search Engine to Use Procedure. Another aspect seems to be a misunderstanding of how search engines display results, with an assumption that if a solution isn't found within the first few results, the search engine doesn't index it. Strong problem-solving skills could compensate for such misconceptions if novices applied formal trial-and-error techniques (for example, reviewing a random sample of additional results on subsequent pages to verify the hypothesis that the most relevant ones were "at the top"). However, novices do not always possess such skills, or do not always understand the contexts for using them.
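The sampling technique just mentioned can be made concrete with a short sketch: draw a random sample of results from later pages and compare their judged relevance against the first page. The relevance scores here are toy values assumed for illustration.

import random

random.seed(1)
# Judged relevance of results (toy data): page 1 versus pages 2 and beyond.
first_page = [0.9, 0.8, 0.8, 0.7, 0.6]
later_results = [random.uniform(0.0, 0.9) for _ in range(95)]

sample = random.sample(later_results, 10)   # random sample from later pages
avg_first = sum(first_page) / len(first_page)
avg_later = sum(sample) / len(sample)

print(f"Page 1 average relevance: {avg_first:.2f}")
print(f"Later-pages sample average: {avg_later:.2f}")
if avg_first > avg_later:
    print("Consistent with the most relevant results being 'at the top'")
else:
    print("Later pages hold comparable results; keep scanning")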
ACTA provides a well-structured methodology for gathering data on the cognitive aspects of tasks. It would be helpful to adapt the methodology further and apply it in a larger qualitative study to gather more novice responses, which would give additional insight into their mental models [2]. It would also be useful to analyze other tasks related to Internet searching, such as the broader use of IT (skill in using Web browsers, downloading, and so on), to get a more comprehensive picture of novice mental models.
Outcomes from identifying and representing novice mental models basically fall into two areas: teaching to shape or alter mental models, and designing systems that better take them into account. Teaching implies scaffolding novices to emulate experts. In the case of mental models of using Internet search engines, it is possible to identify a series of learning objectives novice users should be able to accomplish. Learning could be constructed in the form of online tutorials or traditional instruction.
However, it is a fallacy to think mental models can be shaped simply by transferring conceptual knowledge of a system from experts to novices. A mental model works by converting system knowledge into task knowledge for use as a tool for anticipating potential difficulties, troubleshooting problems, and solving them. The ACTA interviews can help identify aspects of mental models that could be altered (for example, participants were able to alter their models when asked to use more than one search engine).
Thus, tutorials or instructions could stimulate and foster task knowledge by using analogy between known systems and new systems and with hands-on practice to build experience. For instance, novice users of Internet search engines should be able to:

- Determine which search engine(s) to use for a given information need;
- Use basic search tools/features, such as Boolean and proximity operators;
- Refine a search using broader-than, narrower-than, and related terms; and
- Scan and evaluate results lists beyond the first page or screen.
Each of these objectives could be broken down into associated task knowledge. For instance, "use basic search tools/features" requires task knowledge in the application of Boolean and proximity operators, and "refining a search" requires task knowledge in identifying broader-than, narrower-than, and related terms for a topic. See, for instance, the tasks in the Scan Results List Procedure in Table 3.
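A sketch of this task knowledge in code form: composing a query with Boolean operators and refining it with broader-than, narrower-than, and related terms. Operator syntax differs across search engines; the AND/NOT forms and the sample terms here are assumptions for illustration.

def boolean_query(required, excluded=()):
    # Combine required terms with AND; exclude terms with NOT.
    query = " AND ".join(required)
    for term in excluded:
        query += f" NOT {term}"
    return query

# Broader-than, narrower-than, and related terms for a sample topic.
refinements = {
    "broader": ["animal behavior"],
    "narrower": ["siamese cat behavior"],
    "related": ["feline temperament"],
}

print(boolean_query(["cat", "behavior"], excluded=["dog"]))
# To narrow an over-broad search, substitute a narrower-than term:
print(boolean_query(refinements["narrower"]))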
Designing support systems that could take into account the task knowledge of novices might take at least three different approaches. One might focus on designing search engine Web sites to be more intuitive to novices, trying to link directly to their current mental models. Another might follow a scaffolding approach by supplementing a search engine with a framework to walk novices through tasks for which they possess incomplete knowledge. A third might bypass both and create an AI agent that takes the novices' weaknesses into account and does the work.
More intuitive Web sites for search engines might recognize novices' lack of task knowledge by anticipating their actions and prompting them with cues. For instance, knowing that novices do not scroll past one screen, designers could create split screens showing more, but briefer, results lists; generate pop-up windows with more results; or create blinking applets that encourage users to scroll further down or click a link for additional results.
A facilitative design could scaffold novices by providing guidance that recognizes the weaknesses in their mental models. For instance, when too many or too few results are returned for a search, explanations and suggestions could be offered. With such facilitation, novices could build task knowledge as they search.
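A minimal sketch of such facilitation logic, with the thresholds and messages assumed for illustration:

def facilitate(result_count):
    # Offer an explanation and a suggestion keyed to the result count.
    if result_count == 0:
        return ("No results: a term may be misspelled or too specific; "
                "try a broader or related term.")
    if result_count < 10:
        return "Few results: remove a term or substitute a broader one."
    if result_count > 10_000:
        return ("Many results: add a narrower term, or combine terms "
                "with AND, to focus the search.")
    return "Scan the results; relevant items may appear on later pages too."

print(facilitate(0))
print(facilitate(250_000))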
A third approach might be to take weak task knowledge into account by simply compensating for it. Such a design would account for weaknesses in searching by emulating a task model and walking novice searchers through its subprocedures. Novices could be presented with a search window that asks them to make choices and complete a form indicating how much time they have; how deep they want to go; what the broader-than, narrower-than, or related terms are; and so on. These needs could thus be met, though doing so would replace, if not subvert, any learning in the process.
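A minimal sketch of such a form-driven design, with the fields mirroring those named above; everything else is an illustrative assumption.

def guided_search(plan):
    # Expand the novice's answers into queries and a scanning budget;
    # the system, not the novice, carries out the subprocedures.
    queries = [plan["topic"]] + plan["narrower_terms"]
    pages_to_scan = 1 if plan["time_available_minutes"] < 15 else 3
    return queries, pages_to_scan

plan = {
    "time_available_minutes": 10,
    "depth": "overview",                 # overview vs. in-depth
    "topic": "cat behavior",
    "broader_terms": ["animal behavior"],
    "narrower_terms": ["siamese cat behavior"],
    "related_terms": ["feline temperament"],
}
print(guided_search(plan))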
The primary aim of this research was to determine whether novice mental models could be identified using methodologies from an HCI school of thought. This research project has shown that TKS theory, with ACTA and KAT, is useful for representing the mental models of novices, with some reservations. ACTA was designed for interviewing experts and thus needs to be adapted slightly to accommodate novices (for instance, focusing on novice users' immediate reactions and trial-and-error attempts rather than on past experience). KAT is very useful for structuring task knowledge. A final model for representing mental models with TKS is under development, a sample of which is shown in Table 3.
The research also sought to compare novice to expert mental models. ACTA interviews of novices uncovered what appear to be weaknesses, but an ACTA of experts is needed to verify assumptions. A thorough project to undertake research on expert models is forthcoming [10]. Another goal of this research was to consider how task knowledge structures might influence support systems for novice mental models. Along with the suggestions described, we are investigating possible design scenarios for a facilitative approach.
1. Brandt, D.S. Constructivism: Teaching for understanding of the Internet. Commun. ACM 40, 10 (Oct. 1997), 112-117.
2. Brandt, D.S. and Uden, L. Simplified method of eliciting information from novices. Educational Tech. 42, 1 (Jan.-Feb. 2002), 52-55.
3. Johnson, P. Human Computer Interaction: Psychology, Task Analysis and Software Engineering. McGraw-Hill, London, U.K., 1992.
4. Johnson-Laird, P.N. The Computer and the Mind. Fontana Press, London, U.K., 1988.
5. Jonassen, D.H. and Henning, P. Mental models: Knowledge in the head and knowledge in the world. Educational Tech. 39, 3 (May-June 1999), 37-42.
6. Militello, L.G. and Hutton, J.B. Applied Cognitive Task Analysis (ACTA): A practitioner's toolkit for understanding cognitive task demands. Ergonomics 41, 11 (1998), 1618-1641.
7. Norman, D. Some observations on mental models. In D. Gentner and A. Stevens, Eds., Mental Models. Lawrence Erlbaum, Hillsdale, NJ, 1987.
8. Notess, G. Measuring the size of Internet databases. Database 20, 5 (Oct.-Nov. 1997), 69-72.
9. Otter, M. and Johnson, H. Lost in hyperspace: Metrics and mental models. Interacting with Computers 13, 1 (2000), 1-40.
10. Uden, L. and Brandt, D.S. Learning with technology: A preliminary study. Online Info. Review 24, 4 (2000), 334-337.
Research was made possible by a Visiting Honorary Research Fellowship in the Department of Computing at Staffordshire University.
Table 1. ACTA Knowledge Audit.
Table 2. Subgoals and procedures for using Internet search engines.
Table 3. Scan results list procedure and associated task knowledge.