The Web was designed to be a "pool of human knowledge, which would allow collaborators to share their ideas and all aspects of a common project" [2]. This goal makes the Web an ideal distribution system for the Decision Support Systems (DSS) of the future. A DSS, in this sense, is a class of objects that includes software applications, mathematical subprograms, and model solvers designed to interact with humans to facilitate decision-making. The initial determination of whether an application is a DSS rests with the application developer. Ultimately, however, the definition of a DSS is user-based; that is, repeated discretionary use by end users is the final authority on the value of a DSS. This definition includes expert systems as DSS, should their developers choose to deploy them on the Web in a protocol-compliant format.
At present the Internet provides access to hundreds of gigabytes each of software, documents, sounds, images, and many other types of information [6]. It is only a matter of time before vendors begin deploying DSS on the Web on this type of scale. For DSS to be efficiently discovered on the Web, a protocol that allows DSS information to be found and transmitted must be established. In addition, a mechanism to provide a consistent organized view of DSS information is necessary. A DSS resource discovery system will need to identify a resource, collect information about it from several sources, and convert the representation to a format that can be indexed for efficient searching [6]. The purpose of this article is to propose a protocol suite that will facilitate the discovery of DSS on the Web by allowing Web pages containing DSS to be easily distinguished from other Web pages. In addition, the protocol provides a common format for describing DSS, such that autonomous intelligent search agents can identify DSS that meet specific user-defined requirements.
Three recent articles have examined how DSS could be deployed on the Web. The first two, by Bhargava et al. [4, 5], present DecisionNet, a prototype of a brokered system that facilitates transactions between providers and consumers of decision technologies. Under this system, all DSS developers must submit their DSS for inclusion in DecisionNet, and all DSS users must register in order to use DSS through the system. This approach greatly simplifies the deployment of DSS on the Web. First, registered DecisionNet subscribers do not have to download the DSS they want to use; instead, they access the DSS remotely and run them on the DSS provider's platform. This allows users to utilize a DSS even if they do not have the hardware or software necessary to run it. In addition, specialized search agents or browsers are not necessary, since DSS search is confined to the index of registered DSS.
The other approach for DSS deployment is an open system that would allow DSS to be distributed on individual Web pages, as other types of data are currently being offered. Goul et al. [10] propose a set of requirements for a protocol suite that will allow the deployment of Open DSS on the Web. A protocol based on these requirements would utilize specialized Web search agents (robots, spiders, wanderers, and so forth) to provide automated intelligent discovery of DSS pertaining to a specific decision-making or problem-solving situation.
The open protocol contributes to the disintermediation of the Web. It allows individual users or automated intelligent search agents to find any DSS compliant with the protocol, not only those posted with an individual broker. However, the open system has no mechanism to control what is put on the Internet and portrayed as a DSS [10]. It is possible that the eventual system for deploying DSS on the Web will combine the two approaches currently being proposed. Brokers could be used to provide access to high-quality, tested DSS while other DSS could be provided at individual Web sites. The protocol described in this article is capable of supporting both types of DSS deployments.
The Open DSS protocol requires DSS builders to supply information that defines the purpose of a DSS, its computing environment, its data inputs, its data outputs, and other information about the DSS. This information would be added by the DSS builder at the time the DSS is created and can be used to aid end-user DSS discovery. Robots, wanderers, and spiders will be used to identify addresses and build an index of compliant DSS. This index can then be searched by end users or end-user agents to identify DSS that meet a set of end-user-stipulated search parameters.
Goul et al. [10] proposed a set of six requirements that serve as the basis for the Open DSS protocol developed here. Together, these requirements provide the foundation for the protocol specification that follows.
The previous section identified the six requirements that are key to creating an Open DSS protocol. To meet these requirements, a set of preliminary protocol specifications is proposed. The Open DSS protocol is a general protocol that provides facilitated access to DSS using the existing Web standards HTTP and HTML [3, 12], and it consists of two layers. The first layer is the Metainformation Layer, which indicates that a Web site contains a DSS and carries all of the information necessary to describe the DSS completely. The second layer is the Transaction Processing Layer, which is responsible for any transactions that must be completed before the software is made available to the client. The detailed requirements for each of these layers are discussed in the following sections.
The Metainformation Layer. When objects are transferred over the Internet, information about them ("metainformation") is transferred in HTTP headers. The Open DSS protocol uses a set of specialized headers to provide basic information about a DSS to automated intelligent search agents. The robots, wanderers, and spiders traverse the Web requesting entity-header information only (using the HTTP HEAD method) to determine whether a Web site contains a DSS. Since, by convention, unrecognized HTTP headers and parameters are ignored, other search agents can also access DSS Web sites without being affected by the specialized DSS headers.
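For example, a DSS spider might issue a request such as the following (the path and agent name here are purely illustrative) and examine only the returned headers, never transferring the page body itself:

    HEAD /models/forecaster.html HTTP/1.0
    User-Agent: DSS-Spider/0.1

If the response headers contain the DSS content-type label described next, the agent records the page in its DSS index; otherwise the page is ignored.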
The header information provided by DSS providers must be in a consistent format so that the automated DSS search agents can index them correctly. The first item in the header should indicate that the site contains a DSS. This would be accomplished by a CONTENT-TYPE metainformation label:
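The exact media type to be used is not fixed by this preliminary specification; assuming a hypothetical application/x-dss type were adopted for the purpose, the label might read:

    Content-Type: application/x-dss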
In addition to specifying the content type, every DSS will be required to have a title, a list of keywords, and a description.
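Continuing the hypothetical example above, these required items could be carried as extension headers alongside the content-type label (the DSS- field names are illustrative, not part of the published specification):

    DSS-Title: Time Series Forecaster
    DSS-Keywords: forecasting, time series, trend, seasonality
    DSS-Description: Fits several standard forecasting models to
        user-supplied time series data and reports forecast accuracy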
The remaining metainformation will define the functionality of the DSS being offered, the user-site requirements, and other information necessary to evaluate the DSS. The metainformation related to DSS functionality was selected based on model management research, whose goal has been to develop techniques for selecting or constructing the appropriate models to run so as to provide the appropriate answer [7]. Researchers have investigated the storage, representation, utilization, and manipulation of models, but to date there is no universally agreed-upon method for representing and specifying DSS models. At a minimum, however, a DSS representation scheme should include descriptions of the stimuli (inputs) and responses (outputs), state (data structures), and procedures (control structures) [1].
The user-site requirements should include information on the hardware requirements (computing platform), software requirements (operating system or application needs), and any specific user skills required to use the DSS. Finally, the metainformation must contain all other information necessary to purchase and download the DSS, including the DSS's cost, references, related DSS, and vendor information. A list of the variables that should be used to define the specification information is shown in Figure 1.
Not all of these metainformation variables are necessary to define a given DSS. The individual variables can be defined in any order, and any variables a search agent does not recognize are simply ignored.
The metainformation variables should be defined in a standard HTML document [12]. HTTP places no limit on the number of extension headers that can be defined in the metainformation header; thus the header information requirements can easily be expanded as the Open DSS protocol evolves.
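For instance, under HTML 3.2 the metainformation could be supplied through META elements whose HTTP-EQUIV attributes are intended to be mapped by the server to equivalent HTTP response headers. The sketch below is only illustrative; the variable names are assumptions drawn from the categories discussed above rather than the definitive set given in Figure 1.

    <HTML>
    <HEAD>
    <TITLE>Time Series Forecaster</TITLE>
    <META HTTP-EQUIV="Content-Type" CONTENT="application/x-dss">
    <META HTTP-EQUIV="DSS-Title" CONTENT="Time Series Forecaster">
    <META HTTP-EQUIV="DSS-Keywords" CONTENT="forecasting, time series">
    <META HTTP-EQUIV="DSS-Description" CONTENT="Fits several forecasting models to time series data">
    <META HTTP-EQUIV="DSS-Inputs" CONTENT="time series observations (ASCII, one value per line)">
    <META HTTP-EQUIV="DSS-Outputs" CONTENT="forecasts and error statistics">
    <META HTTP-EQUIV="DSS-Hardware" CONTENT="IBM-compatible PC">
    <META HTTP-EQUIV="DSS-Software" CONTENT="Windows 95">
    <META HTTP-EQUIV="DSS-Cost" CONTENT="US$50">
    <META HTTP-EQUIV="DSS-Vendor" CONTENT="Example Forecasting Co.">
    </HEAD>
    <BODY>
    ... DSS description, registration form, and download instructions ...
    </BODY>
    </HTML>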
The Transaction Layer. A transaction layer is required in the Open DSS protocol to define the standard information that will be exchanged when an end user decides to purchase or download a DSS. This information will help DSS developers obtain feedback to improve their DSS and will help guide future DSS research. The transaction layer will include log-in and registration templates, which DSS providers use to gather data about customers. When registering, users will be asked to enter their name, email address, phone number, and geographic location, along with information on the planned use of the DSS and the expected frequency of use. This would allow DSS builders to contact individual DSS users to obtain feedback on product performance and to provide users with DSS update information. Figure 2 shows a standard user registration form.
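A simplified HTML sketch of such a registration form, with illustrative field names and a hypothetical submission address, might look like the following:

    <FORM METHOD="POST" ACTION="/cgi-bin/dss-register">
    Name: <INPUT NAME="name" SIZE=40><BR>
    Email address: <INPUT NAME="email" SIZE=40><BR>
    Phone number: <INPUT NAME="phone" SIZE=20><BR>
    Location: <INPUT NAME="location" SIZE=30><BR>
    Planned use of the DSS:
    <TEXTAREA NAME="planned_use" ROWS=3 COLS=40></TEXTAREA><BR>
    Expected frequency of use:
    <SELECT NAME="frequency">
    <OPTION>Daily <OPTION>Weekly <OPTION>Monthly <OPTION>Occasionally
    </SELECT><BR>
    <INPUT TYPE="SUBMIT" VALUE="Register">
    </FORM>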
In addition to the standardized registration form, many DSS developers will have further transaction processing needs. Many DSS available on the Web would likely be offered for a fee, so including some type of billing service in the transaction layer would allow payment capture, invoicing, and activity tracking. On the Web, users typically specify their payment preference at the time of purchase, either through monthly billing or by credit card; credit card processing, in particular, imposes additional requirements on the transaction layer.
Currently, there are several commercial products available that could provide the transaction services necessary for DSS providers. One such product, Netscape Publishing System, provides these features and also supports the creation and maintenance of the Web site where the DSS would be offered [11].
Once DSS are deployed in a specific format, end users require a mechanism for discovering them. It is proposed that DSS deployed on the Web be indexed utilizing autonomous intelligent search agents such as robots, spiders, and wanderers. In the Open DSS protocol, these Web search agents would act as federated facilitators. In a federated system, agents do not communicate directly with each other. Instead, the agents communicate only with system programs called facilitators or mediators [9]. This communication consists of the agents' needs, abilities, application level information, and various requests. Under the Open DSS protocol, the intelligent search agents will continuously explore the structure of the Web by examining the header information located in HTML Web pages. When the agent encounters a DSS, it stores the header information and the DSS location in the DSS index.
End users would be allowed to specify the desired DSS attributes to a second search agent and then initiate a search against the index. Depending on the sophistication of this search agent, either the agent or the end user will then evaluate the header information and retrieve the full specifications for the "best" DSS. Finally, end users will examine this specification information and select the DSS that best meet their needs.
The advantage of this approach is that it minimizes the impact the search agent has on the Web sites it visits. A single agent is responsible for examining and downloading header information only, minimizing the amount of information it requires Web sites to provide. End users need only search the DSS index and specific candidate DSS sites to find appropriate DSS tools. The consideration of the potential impact of autonomous agents is one of the important requirements for building ethical Web agents [8].
Future DSS search agents may have the intelligence necessary to select candidate DSS from the index based on the header information and then return to the appropriate Web site to retrieve the full specification information. These agents should be designed such that they could then examine the specification information and rank the DSS based on how well they meet the users' needs. In addition, future intelligent agents should be able to select a set of DSS that could be integrated to solve large problems. These intelligent search agents can be considered DSS themselves, because they will aid end users in the overall decision-making process.
Consider a program that implements a set of forecasting models. Such a program is capable of performing forecasting calculations on any type of time series data. To make this program accessible through the Open DSS protocol, the DSS builder would need to create an HTML document containing the appropriate metainformation and a user registration form. An example of such an HTML document for this application is shown in Figure 3.
Once this DSS Web page is placed on the Web, end users will be able to discover it using an autonomous DSS search agent. The data entry form used to browse the Web would allow the user to specify keywords, hardware types, and cost. Depending on the implementation, users may enter their own input (as with keywords) or choose from preselected entries (as with the representation variable). Such data entry would be converted by the browser into the standards defined within the protocol suite. For this example, the user might specify the keyword "forecasting," an acceptable hardware platform, and a maximum cost, and these parameters would then be matched against the header information stored in the DSS index, as sketched below.
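Assuming, for illustration only, that the DSS index is reachable through a CGI-style query interface, the browser might translate the user's entries into a request such as:

    GET /cgi-bin/dss-search?keywords=forecasting&hardware=PC&maxcost=100 HTTP/1.0

The index service would then return the locations and stored header information of the matching DSS, from which the user (or a more capable agent) retrieves full specifications for the most promising candidates.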
At this point, the actual DSS search is not unlike standard text-based searches currently available on the Internet. However, if DSS builders follow the Open DSS protocol, end users will be better able to find DSS (as opposed to text pages that contain the desired keywords).
This article has proposed a preliminary specification for an Open DSS protocol that could be used to facilitate the discovery, integration, and operation of DSS. One important advantage of the proposed Open DSS protocol suite is that it is mapped to existing standards for HTML, HTTP, and for robots, wanderers, and spiders. Thus the Open DSS protocol suite will be compatible with protocol suites already operating on the Web. In addition, the proposed specification is consistent with the prevailing approaches prescribed in the literature for deploying DSS on the Web. The two-layer model is also consistent with layered architectural designs for distributed systems. The protocol is designed to be efficient, and its use of HTTP headers for encoding DSS metainformation makes it easy to extend as the protocol evolves. A protocol-compliant DSS can be referenced by existing text-oriented search engines, and the protocol allows for the development of specialized DSS search engines. The use of specification variables enables the encoding of significant detail that can be accessed by those DSS-oriented search engines. Finally, the transaction processing model in the transaction layer is highly flexible, supporting many approaches to electronic commerce in the deployment of DSS; for example, both models of DSS utilization are feasible: downloading the DSS or running it on the DSS provider's server(s).
With the deployment of DSS on the Web, many fundamental tenets of the theory of DSS are in need of review. Those tenets will likely be extended. For example, the engine that will be used to search for DSS has been described as a DSS! This extension implies that the rich history of empirical approaches to the study of DSS is relevant to the nascent area of Web search. Ongoing work in this area will next require an examination of the sufficiency and completeness of the proposed protocol for representing commercial DSS, development of prototype search engines, and empirical examination of the efficacy of those systems.
For additional Open DSS Protocol information, see www.public.asu.edu/~dgregg/dssprotocol.
1. Banerjee, S., and Basu, A. Model type selection in an integrated DSS environment. Decision Support Systems 9, 1 (1993), 75–89.
2. Berners-Lee, T., Cailliau, R., Luotonen, A., Frystyk Nielsen, H., and Secret, A. The World Wide Web. Commun. ACM 37, 8 (Aug. 1994), 76–82.
3. Berners-Lee, T., Fielding, R., and Frystyk, H. Hypertext Transfer Protocol--HTTP/1.0: working paper. HTTP Working Group (Sept. 4, 1995); www.w3.org/pub/www/protocols/draft-ietf-http-v10-spec03.html
4. Bhargava, H., Krishnan, R., and Mueller, R. Decision support on demand: On emerging electronic markets for decision technologies. Decision Support Systems 19, 3 (Mar. 1997), 193–214.
5. Bhargava, H., Krishnan, R., and Mueller, R. Electronic commerce in decision technologies: A business cycle analysis. International Journal of Electronic Commerce. To be published.
6. Bowman, C.M., Danzig, P.B., Manber, U., and Schwartz, M.F. Scalable Internet resource discovery. Commun. ACM 37, 8 (Aug. 1994), 98–107.
7. Blanning, R.W., Holsapple, C.W., and Whinston, A.B., Eds. Decision Support Systems, Special Issue on Model Management Systems 9, 1 (Jan. 1993).
8. Eichmann, D. Ethical Web agents. In Proceedings of the Second International World-Wide Web Conference: Mosaic and the Web (Oct. 18–20, Chicago, IL), 1994, 3–13.
9. Genesereth, M.R., and Ketchpel, S.P. Software agents. Commun. ACM 37, 7 (July 1994), 48–53.
10. Goul, M., Philippakis, A., Kiang, M., Fernandes, D., and Otondo, R. Requirements for the design of a protocol suite to automate DSS deployment on the World Wide Web: A client/server approach. Decision Support Systems 19, 3 (Mar. 1997), 151–170.
11. Netscape publishing system white paper; www.netscape.com/comprod/products/iapps/capps/pubsys_white_paper.html
12. Raggett, D. HTML 3.2 reference specification: working paper. (Nov. 5, 1996); www.w3.org/pub/WWW/TR/PR-html32-961105
©1999 ACM 0002-0782/99/1100 $5.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.