First Monday


Using an Intelligent Agent to Enhance Search Engine Performance

by JAMES JANSEN

Abstract
The amount of information available via networks and databases has rapidly increased and continues to increase. Existing search and retrieval engines provide limited assistance to users in locating the relevant information that they need. Autonomous, intelligent agents may prove to be the key to transforming passive search and retrieval engines into active, personal assistants. This proposal explores the growing quantity of available information that is driving the need for improved search and retrieval engines. It then reviews the current information retrieval literature and the agent literature. Following these reviews, it proposes that the combination of effective information retrieval techniques and autonomous, intelligent agents can improve the performance of short-term information retrieval in an existing search or retrieval engine. A review of the current status of agents in various areas, including information retrieval, is also presented. The proposal then presents the objectives of this research and the methodology to achieve these objectives, and concludes with the contributions of this research and a short summary.

Content
Introduction
Information Abundance
Information Retrieval
Agents
Objectives
Current Status
Agent Systems
Commercial Agents
Performance Evaluations
Procedure
Contributions of Research
Conclusion
Notes

Introduction
The amount of information available via networks and databases has increased and is still rapidly increasing. Existing search and retrieval engines provide limited assistance to users in locating the relevant information they need. Autonomous, intelligent agents may prove to be the key to transforming passive search and retrieval engines into active, personal assistants. I propose that the combination of effective information retrieval techniques and autonomous, intelligent agents can improve the performance of an existing search or retrieval engine. This paper explores the increasing quantity of available information and the need for improved search and retrieval engines. It then reviews the current information retrieval literature and the agent literature. The paper then presents the objectives of this research along with the methodology to achieve these objectives. The paper concludes with the contributions of this research and a short summary.

Information Abundance
The World Wide Web (Web) is one of the largest publicly available databases of documents, and it is a good testing ground for most retrieval techniques. The Web organizes information by employing a hypertext paradigm. Users can explore information by selecting hypertext links to other information. As the Web continues its explosive growth, the need for searching tools to access the Web is increasing. Yahoo! is the big name in Web directories. A pair of Stanford graduate students founded Yahoo! in 1995. Recently, a host of new search and directory sites offer a wide range of Web-searching services [ 23 ]. Examples include Alta Vista, InfoSeek, Open Text and Excite. However, these search engines are not as sophisticated as one might expect.

For example, Alta Vista presents the documents that the search engine expects one would find most relevant at the top of the list. The search engine ranks documents from highest to lowest based on several criteria: the number of times the search terms appear in the document, the proximity of the terms to each other, and the proximity of the terms to the beginning of the document.

Because of the details of how the scoring occurs, one might experience some unexpected results. The Alta Vista algorithm gives a higher score to unique and unusual words. For instance, if one enters albatross boat fishing in the search box, documents with many instances of albatross near the beginning or in the title would receive a high score. These documents might push down documents that contain all three search terms together [ 2, 36 ]. Obviously, this is not the best search method. Clearly, one needs to add more personal services to optimize information retrieval for the Web [ 18 ].
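
The behavior described above can be made concrete with a toy scoring function. The following is a sketch under assumed weightings, not Alta Vista's actual algorithm: it rewards unusual terms (low document frequency), repeated occurrences, and occurrences near the beginning of a document.

```python
import math

def score(document_words, query_terms, doc_freq, n_docs):
    """Toy ranking score: unusual (rare) terms, repeated occurrences, and
    positions near the start of the document all raise the score.
    Illustrative only; not Alta Vista's actual formula."""
    total = 0.0
    for term in query_terms:
        positions = [i for i, w in enumerate(document_words) if w == term]
        if not positions:
            continue
        rarity = math.log(n_docs / (1 + doc_freq.get(term, 0)))  # rarer terms weigh more
        frequency = len(positions)                                # more occurrences weigh more
        earliness = 1.0 / (1.0 + min(positions))                  # early occurrences weigh more
        total += rarity * frequency * earliness
    return total

# A document stuffed with early mentions of the rare term "albatross" can outrank
# one that contains all three query terms, mirroring the behavior described above.
```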

The explosive growth of information is not only occurring on the Web but also with on-line databases. The number of on-line databases increased from 5000 in 1994 to 5800 in 1996 [ 37, 38 ]. This number is in addition to the 4600 batch databases that are available via computer networks [ 38 ]. Databases are also getting larger. NCR recently opened the world's largest data warehouse. NCR's data warehouse has a capacity of 11 terabytes, which is equivalent to 2.75 billion pages of text, or roughly enough to fill 220,000 four-drawer filing cabinets [ 4 ]. The PC revolution makes these digital libraries very accessible. There is so much access to information that it is turning into a commodity as the law of supply and demand takes hold. For example, Dialog was one of the first in the business of selling electronic data. It is still one of the biggest, with sales of about $439 million in 1993. However, profits are flat as the price for information decreases. For example, airline reservations that Dialog once sold for $48, America On-line now sells for $2 [ 61 ]. The problem has turned from one of having information available to one of rapidly getting to the information that one needs.

As with search engines on the Web, on-line databases have problems with their retrieval engines. Research shows that users have a number of problems interacting with on-line databases. Yee [ 60 ] reviewed over 150 studies in this area. She summarizes the obstacles facing the users of on-line databases. These obstacles include: finding appropriate subject terms, a large number of hits along with failure to reduce the retrieval sets, zero hits along with failure to increase the retrieval sets, and failure to understand the cataloging rules. In addition, lack of understanding of the indexes, the types of files, and the basic database structure leads to problems such as the use of articles (e.g., the, a), the use of stop words, placing the author's first name before the last name, and hyphenation errors.

Online retrieval systems are powerful and efficient at locating matching terms and phrases. They are also currently dumb, passive systems that require resourceful, active, intelligent human users to produce acceptable results. Some have suggested that the solution to information retrieval problems is to better index the Web documents and database records with items such as more key terms and conceptual indexing [ 5 ]. Enhancing millions of Web pages, documents, and records would be extremely costly; therefore, creating better search and retrieval engines provides a more realistic solution to the existing problems. For example, users currently employ various search techniques to fulfill their information retrieval needs. These techniques include obtaining information from footnotes and references in journals and books, identifying core journals in a discipline, searching for known authors and subjects, and browsing the materials that are physically collocated with materials located earlier in a search. These techniques play important roles in the information-seeking activities of users [ 5 ]. A solution is for the search and retrieval engines to take advantage of the information in these search techniques that aid the user in locating the needed information.

Information Retrieval
Given that there exists a set of documents and a person who has an interest in the information in some of them, one can define optimal information retrieval as: find all the relevant documents and none of the irrelevant documents [ 34 ]. The documents that contain information of interest are relevant. The other documents are not. A document can be a page of text, an article, a Web site, etc.

There are three major information retrieval paradigms [ 57 ]: statistical, semantic, and contextual. The first approach emphasizes statistical correlations of word counts in documents and document collections. Salton [ 45, 46, 48, 50 ] describes the use of statistical schemes such as vector space models for document representation and retrieval. The Smart system [ 10 ] is an example of a text-processing and retrieval system based on the vector processing model. Another example is Latent Semantic Indexing (LSI) [ 13 ], which captures the term associations in documents. The semantic approach to information retrieval views documents and queries as representing some underlying meaning [ 49, 53 ]. It emphasizes natural language processing or the use of artificial intelligence to interpret queries. The third approach takes advantage of the structural and contextual information typically available in retrieval systems. For example, this could involve the use of a thesaurus and encoded relationships among terms. One could also take advantage of the context and structure generally available from the document terms. Salton [ 45 ] has shown, however, that this approach does not necessarily improve retrieval performance.
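
A minimal sketch of the statistical (vector space) approach, using raw term-frequency vectors and cosine similarity rather than the full term-weighting schemes Salton describes:

```python
import math
from collections import Counter

def cosine_similarity(doc: str, query: str) -> float:
    """Represent the document and the query as term-frequency vectors and
    return the cosine of the angle between them (0 = no overlap, 1 = identical)."""
    d, q = Counter(doc.lower().split()), Counter(query.lower().split())
    dot = sum(d[t] * q[t] for t in q)
    norm = math.sqrt(sum(v * v for v in d.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Rank a small collection against a query, highest similarity first.
docs = ["fishing from a small boat",
        "the albatross is a large sea bird",
        "boat fishing near the albatross colony"]
print(sorted(docs, key=lambda d: cosine_similarity(d, "albatross boat fishing"), reverse=True))
```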

There are two accepted standards of performance for comparing and evaluating retrieval systems, recall and precision [ 34, 47, 51, 52 ]. The definitions of these performance standards are:

Recall = Relevant Documents Retrieved / Total Number of Relevant Documents
Precision = Relevant Documents Retrieved / Total Number of Retrieved Documents
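
These two measures translate directly into a short computation over sets of document identifiers (the document IDs below are hypothetical):

```python
def recall_precision(retrieved: set, relevant: set):
    """Recall: fraction of all relevant documents that were retrieved.
    Precision: fraction of retrieved documents that are relevant."""
    hits = len(retrieved & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical example: 10 relevant documents exist; the engine returns 8, 6 of them relevant.
retrieved = {f"d{i}" for i in range(1, 9)}    # d1 .. d8
relevant = {f"d{i}" for i in range(3, 13)}    # d3 .. d12
print(recall_precision(retrieved, relevant))  # (0.6, 0.75)
```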

There are other ways of evaluating performance [ 47, 51, 57 ]. Information retrieval is almost always part of some larger process of information use, so one can evaluate systems based on their support of these larger processes. Sense making is building an interpretation of the situation or queries to understand the information. Design is building an artifact from the information. Decision making is building a decision and its rationale based on the information. Response tasks are finding information to answer a query [ 42 ].

Agents
Definition
There are several definitions of agents [ 15, 19, 43, 44 ]. One can also describe rather than define agents in terms of their task, autonomy, and communication capabilities. Some of the major definitions and descriptions of agents are:

Agents are semi-autonomous computer programs that intelligently assist the user with computer applications. Agents employ artificial intelligence techniques to assist users with daily computer tasks, such as reading electronic mail, maintaining a calendar, and filing information. Agents learn through example-based reasoning and are able to improve their performance over time [ 44 ].

Agents are computational systems that inhabit some complex, dynamic environment. They sense and act autonomously in this environment. By doing so, they realize a set of goals or tasks [ 27, 28, 29 ].

Agents are software robots. They can think and will act on behalf of a user to carry out tasks. Agents will help meet the growing need for more functional, flexible, and personal computing and telecommunications systems. Uses for intelligent agents include self-contained tasks, operating semi-autonomously, and communication between the user and systems resources [ 3, 16 ].

The definition and description of an agent for this research are: Agents are software programs that implement user delegation. Agents manage complexity, support user mobility, and lower the entry level for new users. Agents are a design model similar to client-server computing, rather than strictly a technology, program, or product [ 20 ].

Issues
Two issues concerning agents are trust and competence [ 8, 14, 28, 32 ]. Concerning trust [ 8, 28 ], the user and other members of the user community must be able to trust that the agent does only what the user wants done. The user must feel comfortable delegating tasks to the agent. As for competence, the agent must first acquire the skills to accomplish the delegated tasks [ 6, 7, 9, 28, 39, 40 ]. The agent must also be able to decide when to help the user and how to help the user.

Architecture
There are three major paradigms for building agents [ 28, 31 ]. The first approach makes the agent an integrated part of the end-program. The advantage here is that the user trusts the agent because the rules are set. The problem is with competence. A combined agent and end-program requires too much insight from the user. The user must have the knowledge to effectively employ the agent. The second approach is a knowledge-based approach, where the agent has extensive domain-specific information about the application. Competence is a problem with this approach because it requires a huge amount of knowledge from the knowledge engineer. Trust is also a problem since the agent is usually autonomous from the start, which gives users a feeling of loss of control and lack of understanding [ 56 ]. The final approach is a learning approach, where the agent has some knowledge of the domain but learns what the user would like it to do based on user actions. The learning approach has the advantages of the other two approaches while minimizing their disadvantages. The learning approach is the architectural paradigm that this research will use.
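
The learning approach can be illustrated with a small skeleton (class and method names here are assumptions for illustration, not any cited architecture): the agent starts with little knowledge, accumulates observations of the user, and intervenes only once its confidence in a prediction crosses a threshold, which speaks to both the competence and trust issues above.

```python
from collections import defaultdict

class LearningAgent:
    """Skeleton of the learning approach: the agent builds competence by observing
    the user and earns trust by acting only when its confidence is high enough."""

    def __init__(self, confidence_threshold: float = 0.8):
        self.threshold = confidence_threshold
        self.counts = defaultdict(lambda: defaultdict(int))  # situation -> action -> frequency

    def observe(self, situation: str, user_action: str) -> None:
        """Record what the user did in a given situation."""
        self.counts[situation][user_action] += 1

    def suggest(self, situation: str):
        """Offer the most likely action, but only above the confidence threshold."""
        actions = self.counts[situation]
        total = sum(actions.values())
        if total == 0:
            return None                       # no competence yet: stay quiet
        action, freq = max(actions.items(), key=lambda kv: kv[1])
        return action if freq / total >= self.threshold else None
```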

Objectives
The objective of this research is to show that an autonomous, intelligent agent can "rapidly" customize the results of a search or retrieval engine query. The agent uses both user preferences and the information content of the documents and the query. The end product of this research will be an autonomous, intelligent agent that resides with an existing search or retrieval engine. This combination will result in improved information retrieval performance for the user. Specifically, the goals of this research are:

To improve the information retrieval performance of a search or retrieval engine based on specified, measurable attributes and relative to the increased cost of adding the agent.

To develop an autonomous, intelligent agent that will reside with an existing search or retrieval engine. The addition of the agent should be as transparent as possible to the existing user interface or front-end of the engine. The agent will monitor the user's actions to prioritize the remaining query results (see the sketch following this list). The agent will learn based on the user's preferences and the information content of the queries and documents.

To develop a method for rapid agent learning of user preferences during each user session on the engine.

To integrate an information retrieval algorithm, a user preference algorithm, an existing search or retrieval engine, and an agent.
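
The intended flow of this integration can be pictured with the following sketch; the field names, weights, and helper functions are assumptions, not a specification. The engine returns results, and the agent re-orders the results the user has not yet examined by combining query-term overlap with terms learned from the user's behavior during the session.

```python
def rerank(remaining_results, query_terms, preference_terms, alpha=0.5):
    """Re-order unexamined query results by mixing query relevance with the
    session's learned preferences (alpha balances the two; values are illustrative)."""
    def overlap(text, terms):
        words = set(text.lower().split())
        return len(words & terms) / len(terms) if terms else 0.0

    def combined(result):                      # each result is assumed to carry a text summary
        content = overlap(result["summary"], set(query_terms))
        preference = overlap(result["summary"], set(preference_terms))
        return alpha * content + (1 - alpha) * preference

    return sorted(remaining_results, key=combined, reverse=True)
```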

Current Status
Negroponte [ 35 ] and Kay [ 22 ] were among the first to recognize the potential value of agents. A number of researchers have explored the use of agents for information filtering, cataloging, and delegation [ 34 ]. Information filtering is similar to information retrieval. In information retrieval, one views the user as actively searching for relevant information in a mass of largely irrelevant information. With information filtering, one views the user as largely passive as mostly relevant information flows past the user [ 57 ]. The following three subsections provide some specific examples of current agent systems.

Agent Systems
Email Systems
Tapestry is an experimental mail system developed at the Xerox Palo Alto Research Center intended as a replacement for current e-mail systems. In addition to content-based filtering, the Tapestry system supports collaborative filtering. Collaborative filtering simply means that people collaborate to help each other perform filtering by recording their reactions, or annotations, to documents they read. When a Tapestry user installs a filter that uses annotations, the Tapestry system returns documents matching that filter. One can think of Tapestry filters as agents running continuously. The primary technical innovation in Tapestry is an efficient algorithm for implementing filter queries that have predictable semantics [ 17 ].
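
A toy illustration of the collaborative idea (not Tapestry's actual query language): a filter matches a document only if its text contains the required terms and at least one trusted colleague has recorded a favorable annotation.

```python
def matches_filter(document, required_terms, trusted_annotators):
    """Toy continuously-running filter: a content match plus a favorable
    annotation from someone the user trusts (illustrative only)."""
    text_ok = all(term in document["text"].lower() for term in required_terms)
    endorsed = any(a["user"] in trusted_annotators and a["reaction"] == "liked"
                   for a in document["annotations"])
    return text_ok and endorsed

doc = {"text": "notes on agent architectures for mail filtering",
       "annotations": [{"user": "alice", "reaction": "liked"}]}
print(matches_filter(doc, ["agent", "filtering"], {"alice", "bob"}))  # True
```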

Decision Support Systems
Most Group Decision Support Systems do not include either basic or automated information retrieval capabilities to aid users in making better decisions. Participants often rely on the meeting facilitator for their information requirements. The facilitator may have difficulty comprehending the complex information requirements of the group members. Researchers at the University of Mississippi have developed a prototype knowledge-based information filtering agent that supports a group decision support system. The prototype allows group members to query an on-line knowledge base of facts using normal English syntax. This relieves the user of the need to know the location of relevant information or how to retrieve it. A case study of student groups using this information retrieval agent shows the feasibility of the technique for the development of group decision support systems [ 1 ].

User Interfaces
The area of user interfaces is an especially fruitful area for the employment of agents [ 22, 24, 26, 54, 55 ]. Researchers at the IBM Intelligent Agent Group [ 20 ] state that user interfaces without agents will soon no longer be viable in the marketplace. Agents can implement a style of interaction referred to as indirect management. Instead of issuing commands and directly manipulating objects, the user and the system engage in a cooperative process in which both the user and computer agents initiate communication, monitor events, and perform tasks [ 26, 28 ]. The Information Visualizer is an experimental system to develop a new user interface paradigm for information retrieval. The Information Visualizer attempts to use advanced graphics technology to lower the cost of finding and accessing information. The Information Visualizer uses four broad strategies: making the user's immediate workspace larger, enabling user interaction with multiple agents, increasing the real-time interaction rate between user and system, and using visual abstraction to speed information assimilation [ 42 ].

Teleconferencing
M is a software assistant that uses a society of agents working together. M attempts to recognize, classify, index, store, retrieve, explain, and present information in a desktop multimedia conferencing environment. M is a software system that integrates multiple reasoning agents, and the agents' collaborative results serve to assist a user working together with other individuals in an electronic conference room [ 41 ].

Telecommunications
Guilfoyle [ 11 ] sees network management as the biggest application to be affected by agents in the short to medium term [ 9 ]. Some companies are already placing "embedded intelligence" into their network products. Guilfoyle states that most hardware vendors will embed the intelligent agents in servers, routers, and hubs. SynOptics Communications is strongly involved in agent development, especially in its global enterprise management architecture. This architecture is intended to eventually manage communications with a single software application. Besides network management, agents will also become an integral part of messaging systems. PersonaLink is a messaging service by AT&T that uses agents [ 11 ].

Calendar Systems
Calendar APprentice (CAP) is a learning assistant that performs calendar management. CAP learns its users' scheduling preferences from experience. Mitchell [ 34 ] has studied the benefits of CAP based on approximately five user-years of experience, during which CAP learned an evolving set of several thousand rules that characterize the scheduling preferences of each of its users. This experience suggests that machine-learning methods may play an important role in future personal software assistants [ 34 ].

Entertainment
Many forms of entertainment could benefit from the casting of intelligent agents as entertaining characters [ 30, 31, 39, 55 ]. ALIVE is an example of such a system. ALIVE allows users to enter a virtual world and use full-body images to interact with animated agents. In ALIVE, a user sees his or her own image surrounded by three-dimensional agents and objects on a screen of approximately 16 by 16 feet. ALIVE implements different virtual worlds, which the user can switch between by pressing a virtual button. Different agents inhabit each world. Current agents include a puppet, a dog, a hamster, and a predator. The behavior of an agent depends on its traits, its location, and the gestures of the user and other agents. For example, the hamster will avoid objects, follow the user around, and beg for food.

Web Wanderers
Web robots, wanderers, and spiders are all names for programs that automatically traverse the Web. Next to macros, they are the most successful class of agent systems. Two examples are Harvest and Searchbots. Harvest is a resource discovery robot that is part of the Harvest Project. Harvest runs from the University of Colorado and also from Texas A&M. Harvest's motivation is to index topic-specific collections rather than to locate and index all the HTML objects that it can find. Also, Harvest allows users to control the enumeration in several ways, including stop lists, depth limits, and count limits. Therefore, Harvest provides a much more controlled way of indexing the Web than is typical of other robots [ 19 ]. CIG Searchbots are a very simple example of cooperative information gathering, which is a multi-agent approach to information retrieval. CIG Searchbots is not a database search engine tool. To satisfy one's query, multiple agents actually perform searches at heterogeneous remote sites via the Web. Some of the search methods may include using existing database search engines. Domain experts determine what sites to search and the path to the best solution. The best solution is the one with the lowest search cost [ 12 ].
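
The enumeration controls mentioned for Harvest (stop lists, depth limits, count limits) can be sketched as a breadth-first traversal. The link-extraction step is passed in as a function rather than tied to a real network fetch, so the parameter names below are assumptions for illustration.

```python
from collections import deque

def crawl(start_url, get_links, stop_list=(), depth_limit=2, count_limit=100):
    """Breadth-first traversal with the controls a Harvest-style robot offers:
    a stop list of URL substrings, a maximum link depth, and a maximum page count.
    `get_links(url)` is a caller-supplied function returning outbound links."""
    seen, visited = {start_url}, []
    queue = deque([(start_url, 0)])
    while queue and len(visited) < count_limit:
        url, depth = queue.popleft()
        if any(pattern in url for pattern in stop_list):
            continue                              # stop list: skip excluded URLs
        visited.append(url)
        if depth < depth_limit:
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return visited
```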

FAQ Systems
CYLINA (CYberspace Leveraged INtelligent Agent) is an agent system that gains information through interactions with a large population of network users. Instead of depending on the efforts of a few knowledge engineers, CYLINA relies on small, incremental contributions from a large population of experts. CYLINA assumes that the sheer volume of interaction will allow the system to acquire a significant amount of knowledge in a short amount of time. Auto-FAQ is an experimental system currently under development at GTE Laboratories that uses the CYLINA paradigm. Auto-FAQ attempts to make the information typically found in USENET News FAQs much more accessible. Auto-FAQ is a question-answer system. Users ask questions in natural-language form, and these questions index directly into the system's information base [ 59 ].
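
A shallow approximation of that question-to-FAQ indexing (a sketch, not Auto-FAQ's actual method) is to score word overlap between the user's question and each stored FAQ question:

```python
def best_faq_match(question, faq_entries):
    """Return the FAQ entry whose stored question shares the most words
    with the user's natural-language question (toy shallow matching)."""
    q_words = set(question.lower().split())
    return max(faq_entries,
               key=lambda entry: len(q_words & set(entry["question"].lower().split())))

faq = [{"question": "how do I unsubscribe from the mailing list",
        "answer": "Send 'unsubscribe' to the list server."},
       {"question": "where is the archive of past postings",
        "answer": "See the group's FTP site."}]
print(best_faq_match("how can I unsubscribe from this list", faq)["answer"])
```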

USENET Archives
Newt [ 57 ] is an example of an information filtering system utilizing a society of agents that inhabit the user's computer. Each agent is a user profile. Each profile searches for documents that match itself and recommends these documents to the user. The user can provide feedback to the agent on the documents recommended. User feedback causes two effects. First, it changes the fitness of the profiles: if the user provides positive or negative feedback for a document, the fitness of the profile that retrieved that document is either increased or decreased. Second, user feedback modifies the profile itself. Therefore, each agent learns during its lifetime, and the population continually adapts to the changing needs of the user.
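
The two effects of feedback described here can be sketched as follows (the profile representation and learning rate are assumptions for illustration): feedback adjusts a profile's fitness, and the terms of the judged document nudge the profile's own term weights.

```python
class Profile:
    """Sketch of a profile agent: a weighted term vector plus a fitness score."""

    def __init__(self, terms):
        self.weights = {t: 1.0 for t in terms}
        self.fitness = 1.0

    def feedback(self, document_text, positive, rate=0.1):
        # Effect 1: feedback raises or lowers this profile's fitness.
        self.fitness += rate if positive else -rate
        # Effect 2: feedback modifies the profile itself, toward or away from the document.
        sign = 1 if positive else -1
        for term in set(document_text.lower().split()):
            self.weights[term] = self.weights.get(term, 0.0) + sign * rate
```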

Graphical Editor
Mondrian is a graphical editor that can learn new graphical procedures through programming by demonstration. An agent records the steps of a procedure while the user demonstrates a sequence of commands. The agent then generalizes the demonstration into a macro that the user can apply to "analogous" examples. The generalization heuristics make this agent different from conventional macros, which can only repeat an exact sequence of steps [ 25 ].

Commercial Agents
AppleSearch
AppleSearch is an agent system that searches and retrieves text from computers linked together by AppleShare, Apple's file-sharing application, or by System 7's personal file sharing capabilities. Up to 50 users can operate on a network as AppleSearch clients. AppleSearch uses agents, relying on Apple's XTND technology, to examine text and to read and index documents that exist in a variety of formats [ 58 ].

NewWave
Hewlett-Packard's NewWave agent feature is arguably the oldest commercial agent system. It provides simple intelligent macro capabilities. NewWave uses the agent feature to automate simple tasks and to provide department-level or company-wide customization of interfaces. It also uses agents to link files with their required applications and to link files together [ 33 ].

OpenSesame!
The OpenSesame! learning interface agent uses hybrid neural network and knowledge-based systems technology to observe its user's actions in the Macintosh System 7 environment. OpenSesame! will customize the interface, automate regular tasks, and make suggestions for easier ways to carry out operations [ 19 ].

Performance Evaluations
Although there are various agent systems that accomplish many tasks, there is little performance evaluation of agent systems compared to non-agent systems. Since many agent systems are built from the ground up, it is extremely difficult to perform a comparative performance evaluation. Many of the reported performance enhancements of agent systems are not statistical evaluations. Instead, human factors or human information processing characteristics are the bases for these expected performance improvements [ 1, 34, 55 ]. Clearly, other factors could prevent the expected performance increase from occurring. Performance evaluations would be useful and are necessary for validating the benefit of agents.

Procedure

  1. Obtain access to a search or retrieval engine for an information database.
  2. Build an autonomous, intelligent agent that learns from both user actions and from the information content of queries and documents. Examples of user actions from which the agent could learn are: saving the location of a site or query result, printing a document or query result, the time spent on a document, and query results that the user passes up (see the sketch following this list).
  3. Integrate the agent, the search or retrieval engine, the user preference algorithm, and an information retrieval algorithm.
  4. Compare the performance of a group of users using the original search or retrieval engine versus their performance using the agent-enhanced engine.
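
One simple treatment of the user actions listed in step 2 is to convert each into an implicit relevance signal; the weights below are assumptions chosen only for illustration.

```python
# Assumed signal strengths for the observable actions listed in step 2.
ACTION_WEIGHTS = {
    "saved": 1.0,     # user saved the location of a site or query result
    "printed": 0.9,   # user printed the document or result
    "dwelled": 0.5,   # user spent substantial time on the document
    "skipped": -0.3,  # user passed the result up
}

def implicit_feedback(events):
    """Aggregate per-document relevance evidence from observed user actions."""
    evidence = {}
    for doc_id, action in events:
        evidence[doc_id] = evidence.get(doc_id, 0.0) + ACTION_WEIGHTS.get(action, 0.0)
    return evidence

print(implicit_feedback([("d1", "saved"), ("d2", "skipped"), ("d1", "dwelled")]))
# {'d1': 1.5, 'd2': -0.3}
```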

Contributions of Research
Development of an autonomous, intelligent agent that uses a user preference algorithm based on short-term user preferences. Existing information filtering agents develop profiles of user preferences over an extended time-frame. For this project, the agent would immediately begin to make decisions based on the information available, regardless of the quantity.

Linking of such an agent and algorithm to an existing application. Most current agent systems are replacements for existing applications; therefore, they do not lend themselves to performance evaluations against traditional non-agent systems.

The use of an existing front-end may address the trust and competence issues. The original query defines the scope and temporal existence of the agent. When the user-session ends, the agent "releases" all knowledge of the user's preferences. The user can also make a determination on the competence of the agent, since the agent's performance (i.e., prioritizing of the remaining query results) is immediately apparent to the user.

Provide performance comparison testing to validate or invalidate the benefit of this approach in information retrieval.

Conclusion
There is an increased amount of information available on the Web and an increase in the number of on-line databases. This information abundance increases the complexity of locating relevant information, and that complexity drives the need for improved search and retrieval engines. Current search and retrieval engines are primarily passive instruments. Intelligent agents may be the way to improve search and retrieval engines, making them active personal assistants. The combination of the search and retrieval engine, the agent, the user preference algorithm, and the information retrieval algorithm addresses the trust and competence issues of agents. Because the user controls the parameters and the temporal existence of the agent via the query to the search and retrieval engine, an element of trust is ensured. The user gets continual feedback from the agent via the agent's prioritizing of the remaining query results, which addresses the competence issue. Although several agent systems currently exist, there is no performance data comparing an agent system with a traditional, non-agent system. The use of an existing search and retrieval engine with the addition of an agent will allow for performance measurements. This technique also permits the continued use of a known user interface for the engine.

The Author
Major Jim Jansen is currently assigned to the Department of Electrical Engineering and Computer Science at the United States Military Academy. He is also a Ph.D. candidate at Texas A&M University. Major Jansen has a B.S. in Computer Science from the United States Military Academy. Additionally, he has a Master of Computer Science from Texas A&M University and an M.S. in International Relations from Troy State University. He has served in numerous military communication assignments in the US and Europe. His research interests and expertise include networks, information retrieval, software agents, and computer-human interaction. He is currently conducting research in the combined use of software agents and information search engines.

Email: jansen@exmail.usma.edu

Web Site: http://www.eecs.usma.edu/usma/academic/eecs/instruct/jansen/
Room 1123, Thayer Hall, Department of Electrical Engineering and Computer Science, United States Military Academy, West Point, New York, 10996, Office: (914) 938-5559.

Notes
1. Milam Aiken and Chittibabu Govindarajulu, 1994. "Knowledge-based information retrieval for group decision support systems," Journal of Database Management, vol. 5, no. 1, pp. 1-35.

2. Alta Vista Support, e-mail message to Bernard J. Jansen, Subject: "Re: Two Questions."

3. Anonymous, 1994. "The Age of the intelligent agent," Insurance Systems Bulletin, vol. 9, no. 10, pp. 4-5.

4. AT&T, 1996. AT&T Quarterly Shareowners Report for the Quarter Ended March 31, 1996. p. 5.

5. Jamshid Beheshti, 1992. "Browsing Through Public Access Catalogs," Information Technology & Libraries, vol. 11, no. 3, pp. 220-228.

6. Bruce Blumberg, 1994. "Action Selection in Hamsterdam: Lessons from Ethology," In: Proceedings of the Third International Conference on the Simulation of Adaptive Behavior, Brighton, England, http://agents.www.media.mit.edu/groups/agents/papers.html

7. Bruce Blumberg and Tinsley Galyean, 1995. "Multi-Level Direction of Autonomous Creatures For Real-Time Virtual Environments," Computer Graphics Proceedings, SIGGRAPH-95, Los Angeles, California (August), http://agents.www.media.mit.edu/groups/agents/papers.html

8. Bruce Blumberg and Tinsley Galyean, 1995. "Do the Right Things...Oh Not That!" Workshop Notes of the AAAI '95 Spring Symposium on Interactive Story Systems, Stanford University, California (March), http://agents.www.media.mit.edu/groups/agents/papers.html

9. Bruce Blumberg and Tinsley Galyean, 1995. "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments," Computer Graphics Proceedings, SIGGRAPH-95, Los Angeles, California (August), http://agents.www.media.mit.edu/groups/agents/papers.html

10. Chris Buckley, James Allan and Gerard Salton, 1995. "Automatic routing and retrieval using Smart: TREC-2," Information Processing & Management, vol. 31, no. 3, pp. 315-326.

11. Martin Cheek, 1994. "Agents come in from cold," Communications International (London), vol. 21, no. 8, pp. 23-26.

12. CIG http://dis.cs.umass.edu/research/searchbots.html

13. Scott Deerwester, Susan T. Dumais, George W Furnas, Thomas K. Landauer and Richard Harshman, 1990. "Indexing by Latent Semantic Analysis," Journal of the American Society for Information Science, vol. 41, no. 6, pp. 391-407. http://dx.doi.org/10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9

14. Lenny Foner. Paying Attention to What's Important: Using Focus of Attention to Improve Unsupervised Learning. MIT Media Laboratory Master's Thesis, http://agents.www.media.mit.edu/groups/agents/papers.html

15. Lenny Foner. What's an Agent, Anyway? A Sociological Case Study, http://agents.www.media.mit.edu/groups/agents/papers.html

16. Lenny Foner. Paying Attention to What's Important: Using Focus of Attention to Improve Unsupervised Learning, http://agents.www.media.mit.edu/groups/agents/papers.html

17. David Goldberg, David Nichols, Brian M. Oki and Douglas Terry, 1992. "Using Collaborative Filtering to Weave an Information Tapestry," Communications of the ACM, vol. 35, no. 12, pp. 61-70, http://agents.www.media.mit.edu/groups/agents/papers.html http://dx.doi.org/10.1145/138859.138867

18. Jeffrey Henning, 1994. "I-way needs service," Computerworld, vol. 28, no. 51, p. 41, (December 19).

19. Information Interchange Report. Intelligent agents and information retrieval, http://www.techapps.co.uk/iiartagt.html

20. IBM Corporation. Intelligent Agent Strategy, http://activist.gpl.ibm.com:81/WhitePaper/ptc2.htm

21. Michael P. Johnson, Pattie Maes, and Trevor Darrell, 1994. "Evolving Visual Routines," In: Proceedings of Artificial Life IV Conference, Cambridge, Massachusetts, http://agents.www.media.mit.edu/groups/agents/papers.html

22. A. Kay, 1984. "Computer software," Scientific American, vol. 251, no. 3, pp. 53-59. http://dx.doi.org/10.1038/scientificamerican0984-52

23. Michael Krantz, 1996. "Chiming in on Yahoo's roar," Mediaweek, vol. 6, no. 3, pp. 9-12 (January 15).

24. Yezdi Lashkari, Max Metral, and Pattie Maes, 1994. "Collaborative Interface Agents," In: Proceedings of AAAI '94 Conference, Seattle, Washington, (August), http://agents.www.media.mit.edu/groups/agents/papers.html

25. Henry Lieberman, 1993. "Mondrian, a Teachable Graphical Editor," In: Watch What I Do. Allen Cypher, editor, Cambridge, Mass: MIT Press, http://agents.www.media.mit.edu/groups/agents/papers.html

26. Henry Lieberman. Attaching Interface Agents to Applications. Unpublished draft, http://agents.www.media.mit.edu/groups/agents/papers.html

27. Pattie Maes, 1995. "Artificial life meets entertainment: Lifelike autonomous agents," Communications of the ACM, vol. 38, no. 11, pp. 108-114. http://dx.doi.org/10.1145/219717.219808

28. Pattie Maes, 1994. "Agents that reduce work and information overload," Communications of the ACM, vol. 37, no. 7, pp. 30-40. http://dx.doi.org/10.1145/176789.176792

29. Pattie Maes, 1995. "Intelligent Software," Scientific American, vol. 273, no. 3, pp. 84-86.

30. Pattie Maes, T. Darrell, B. Blumberg, and A. Pentland, 1996. "The ALIVE System: Wireless, Full-Body Interaction with Autonomous Agents," To be published in a Special Issue on Multimedia and Multisensory Virtual Worlds, ACM Multimedia Systems, ACM Press (Spring), http://agents.www.media.mit.edu/groups/agents/papers.html

31. Pattie Maes, 1994. "Modeling Adaptive Autonomous Agents," Artificial Life Journal, edited by C. Langton, vol. 1, nos. 1 & 2, http://agents.www.media.mit.edu/groups/agents/papers.html

32. Pattie Maes. "How to Do the Right Thing," Connection Science Journal, vol. 1, no. 3, http://agents.www.media.mit.edu/groups/agents/papers.html

33. Tony Martin and Lisa Towell, 1993. The New Wave Agent Handbook. Reading, Mass.: Addison-Wesley.

34. Tom Mitchell, Rich Caruana, Dayne Freitag, John McDermott and David Zabowski, 1994. "Experience with a learning personal assistant," Communications of the ACM. vol. 37, no. 7, pp. 80-91. http://dx.doi.org/10.1145/176789.176798

35. Nicholas Negroponte, 1970. The Architecture Machine: Towards a More Human Environment. Cambridge, Mass.: MIT Press.

36. Note: This situation only occurs in cases where one did not use the plus sign. The plus sign insists that each word be present. One of the words must also be more unusual than the rest.

37. Online Databases, 1994. Gale Directory of Databases, vol. 1, Detroit, Mich.: Gale Research, Inc.

38. Online Databases, 1996. Gale Directory of Databases, vol. 1, Detroit, Mich.: Gale Research, Inc.

39. Brad Rhodes and Pattie Maes, 1995. "The Stage as a Character: Automatic Creation of Acts of God for Dramatic Effect," Workshop Notes of the AAAI' 95 Spring Symposium on Interactive Story Systems: Plot and Character, Stanford University, (March), http://agents.www.media.mit.edu/groups/agents/papers.html

40. Brad Rhodes, 1995. Pronomes in Behavior Nets. Learning and Common Sense Section Technical Report # 95-01, MIT Media Laboratory, (November) http://agents.www.media.mit.edu/groups/agents/papers.html

41. Doug M. Riecken, 1994. "An architecture of integrated agents," Communications of the ACM, vol. 37, no. 7, pp. 106-116+.

42. George G. Robertson, Stuart K. Card and Jock D Mackinlay, 1993. "Information visualization using 3D interactive animation," Communications of the ACM, vol. 36, no. 4, pp. 56-71. http://dx.doi.org/10.1145/255950.153577

43. Marina Roesler and Donald T. Hawkins, 1994. "Intelligent agents," Online, vol. 18, no. 4, pp. 18-32.

44. Linda Rosen, 1993. "MIT Media Lab presents the interface agents symposium: Intelligent agents in your computer?" Information Today, vol. 10, no. 3, p. 10.

45. Gerard Salton, James Allan and Amit Singhal, 1996. "Automatic text decomposition and structuring," Information Processing & Management, vol. 32, no. 2, pp. 127-138. http://dx.doi.org/10.1016/S0306-4573(96)85001-1

46. Gerard Salton, James Allan and Chris Buckley, 1994. "Automatic structuring and retrieval of large text files," Communications of the ACM, vol. 37, no. 2, pp. 97-108. http://dx.doi.org/10.1145/175235.175243

47. Gerard Salton, 1992. "The State of Retrieval System Evaluation," Information Processing & Management, vol. 28, no. 4, pp. 441-449. http://dx.doi.org/10.1016/0306-4573(92)90002-H

48. Gerard Salton and Chris Buckley, 1990. "Improving Retrieval Performance by Relevance Feedback," Journal of the American Society for Information Science, vol. 41, no. 4, pp. 288-297. http://dx.doi.org/10.1002/(SICI)1097-4571(199006)41:4<288::AID-ASI8>3.0.CO;2-H

49. Gerard Salton, Chris Buckley and Maria Smith, 1990. "On the Application of Syntactic Methodologies in Automatic Text Analysis," Information Processing & Management, vol. 26, no. 1, pp. 73-92. http://dx.doi.org/10.1016/0306-4573(90)90010-Y

50. Gerard Salton and Chris Buckley, 1988. "Term-Weighting Approaches in Automatic Text Retrieval," Information Processing & Management, vol. 24, no. 5, pp. 513-523. http://dx.doi.org/10.1016/0306-4573(88)90021-0

51. Gerard Salton, 1987. "Historical Note: The Past Thirty Years in Information Retrieval," Journal of the American Society for Information Science, vol. 38, no. 5, pp. 375-380. http://dx.doi.org/10.1002/(SICI)1097-4571(198709)38:5<375::AID-ASI5>3.0.CO;2-3

52. Gerard Salton, 1985. "A Note About Information Science Research," Journal of the American Society for Information Science, vol. 36, no. 4, pp. 268-271. http://dx.doi.org/10.1002/asi.4630360407

53. Gerard Salton, Amit Singhal, Chris Buckley, and Mandar Mitra, 1996. "Automatic Text Decomposition Using Text Segments and Text Themes," Hypertext 96, pp. 53-65.

54. J. Alfredo Sanchez, Flavio S. Azevedo and John J. Leggett, 1995. "PARAgente: Exploring the Issues in Agent-Based User Interfaces," In: Proceedings of the First International Conference on Multiagent Systems (ICMAS '95), pp. 320-327.

55. J. Alfredo Sanchez, 1996. Agent Services. Ph.D. Dissertation. Department of Computer Science, Texas A&M University, College Station, Texas.

56. B. Shneiderman, 1983. "Direct manipulation: A step beyond programming languages," IEEE Computer, vol. 16, no. 8, pp. 57-69. http://dx.doi.org/10.1109/MC.1983.1654471

57. Beerud Sheth, 1994. "A Learning Approach to Personalized Information Filtering," Learning and Common Sense Section T. R. 94-01, MIT Media Laboratory, http://agents.www.media.mit.edu/groups/agents/papers.html

58. Edward J. Valauskas, 1994. "AppleSearch: How smart is Apple's intelligent agent?" Online, vol. 18, no. 4, pp. 52-64.

59. Steven D. Whitehead, 1995. "Auto-FAQ: An experiment in cyberspace leveraging," Computer Networks & ISDN Systems, vol. 28, nos. 1 & 2, pp. 137-146. http://dx.doi.org/10.1016/0169-7552(95)00101-2

60. Martha M. Yee, 1991. "System Design and Cataloging Meet the User: User Interfaces to On-line Public Access Catalogs," Journal of the American Society for Information Science, vol. 42, no. 2, pp. 78-98. http://dx.doi.org/10.1002/(SICI)1097-4571(199103)42:2<78::AID-ASI2>3.0.CO;2-2

61. Jeffrey Young, 1994. "Data is cheap," Forbes, vol. 153, no. 8, p. 126.


Copyright © 1997, First Monday

Using an Intelligent Agent to Enhance Search Engine Performance by James Jansen
First Monday, volume 2, number 3 (March 1997),
URL: http://www.firstmonday.org/?journal=fm&page=article&op=view&path[]=517