Web–based software agents have been the primary source of e–commerce innovation and development over the past decade. Yet the fact that the design and evolution of these agents are determined not only by technology but also by social forces is probably underappreciated. No agent lives in a historical vacuum. Instead, Web–based software agents operate in a social environment: through it they interact with their users, compete with peers, bring profit to their maintainers, and undergo regulation by public policy. In other words, Web–based software agents are also social agents.
Web–based software agents are characterized by a Web interface for user interaction and self–contained code that collects, synthesizes, and creates value–added information for human use. Among the most widely used are Google, eBay, and Shopping.com, which assist us in searching, participating in online auctions, and comparison shopping, respectively.
Software agents were developed to assist us in performing a wide range of structured or semi–structured chores so we can save effort and focus on more challenging tasks (Maes, 1994). Later, more sophisticated agents were conceptualized for complex activities like searching, comparison, bidding, and negotiating with others (Kephart and Greenwald, 2000). Since 1994, most agents have migrated to, or emerged directly on, the Web.
Previous literature indicated that the Web resembles an ecosystem, and many agents’ activities resemble those of biological organisms in the natural environment (Etzioni, 1997). Here we want to demonstrate that Web–based software agents are also social agents and their social aspects are reflected in at least three areas.
Web–based software agents interact and compete with their peers. To reduce the learning curve for existing users and best exploit the advantage of being a latecomer, new agents usually emulate the interface and interaction patterns of existing agents. Thus, successful agents soon acquire emulators. Popular agent features and data–manipulation methods spread fairly quickly, are improved, and are then learned back, while less desirable features are gradually revised or abandoned. Agents in the same category tend to co–evolve through mutual learning and emulation, much as biological organisms do through the exchange of genes. This is the first social aspect of agent design.
Most agents are created for a dual purpose, to serve the users and to bring in profits for service providers. A good agent’s design should not only minimize the efforts of users but also utilize existing Web structures with built–in user efforts. For service providers, the bottom line for any agent’s design is the provider’s profitability. Like the invisible hand described by Adam Smith 200 years ago, the behavior of software agents and their evolution path are influenced by the invisible hand of the Web market. This becomes the second social aspect.
Finally, legal and public policy regulations influence all agent activities, directly and indirectly. Legal risk arising from local and federal legislative bodies can suffocate the growth of one class of software agents while bolstering the growth of another, and this happens from time to time. This is the third social aspect of agents, and it cannot be ignored.
As many computer and information science researchers have pointed out, technological innovation alone does not guarantee successful adoption of software agents by the public. The software agents now thriving on the Web are invariably capable creators of, utilizers of, or adapters to their environment. If we consider the social aspects of agents and blend these social elements into their design, their chances of success will increase.
In the sections that follow, we explain each aspect in turn, provide examples to illustrate its influence, and offer suggestions to agent designers.
Emulation has been one of the dominant themes of the computer industry since its earliest days. Whether in hardware or software, popular features have been quickly learned by peers and replicated in their own systems, even at the risk of lawsuits. Indeed, the brief history of personal computing has been riddled with lawsuits over copyrighted design or “look and feel.” Sometimes an early emulator becomes the “victim” of a later one: the now dominant Windows GUI was claimed to be an emulation of Apple’s Macintosh operating system, while the Macintosh’s interface was itself based on the graphical user interface of the Xerox Alto developed at Xerox PARC.
On the Web, both the emulation and assimilation of popular designs among software agents occur rapidly. For example, before 2000, comparison shopping agents, or shopbots, did not automatically calculate the total cost of a product, including shipping and taxes. After 2000, when one shopbot began providing this feature, other shopbots quickly adopted it in order to remain competitive.
Since no agent is deployed in a vacuum, any real–world implementation has to consider the expectations of users. To illustrate, put yourself in the shoes of a techno–entrepreneur exploring the commercial opportunities a software agent could provide. Your first priority is a shopbot that not only works but also turns a profit by attracting large numbers of users while keeping their learning curve as low as possible. How can you achieve this? Emulating existing successful shopbots would be the preferred choice: first their interface, and then the way they combine technology with a given business model.
For most Web–based agents, a given business model is intertwined with their technical design, which explains why shopbots look so similar in many aspects even though many of them focus on completely different product categories.
Technical innovations occur but cannot be planned. Shopbot A’s innovation will be assimilated quickly by shopbots B through Z, and soon A will find it must learn and emulate others’ innovations with equal speed. It is simply impossible for an agent to follow its own evolutionary path unchallenged by its peers’ innovations, which may nudge its course slightly or force it to take a completely new direction.
When certain combinations of features become established and are adopted by most agents, they become a de facto standard, and more radical innovations are less likely to succeed. One recent case is the interface change of Become.com, the second shopbot created by the co–founders of mySimon.com. When it launched in 2005 (Figure 1(a)), it adopted a rather innovative shopbot interface: a Spartan single input box plus a list of recently emerging keywords. At the time, however, listing major product categories on the front page was a feature of most shopbots. The innovative design risked rejection by users accustomed to the “look and feel” of existing shopbots, and conforming promised quicker familiarity and use. As a result, over the next two years the interface of Become.com (Figure 1(b)) changed gradually until it was almost identical in layout to that of a popular shopbot, Shopping.com (Figure 1(c)).
Figure 1: (a) Become.com in 2005–06; (b) Become.com in 2007; (c) Shopping.com later in 2007.
Depending on the circumstances, such conformity to the dominant style could be either a shrewd strategic move or a painful adaptation to retain online shoppers already accustomed to traditional shopbot interfaces. Either way, a Google–like innovation that could have shifted shopping behavior from category–based to search–based was disrupted.
Almost all successful Web–based agents are good at utilizing human effort and motivation in an adaptive way.
The fact that when we type the word “Apple” into Google the first result is not the fruit we eat but Apple the company has profound implications for every agent designer. By using hyperlinks as popularity votes, Google delivers the most popular result rather than the most rational one. And, not accidentally, that is exactly what we expect from a search engine agent.
Hyperlinks represent human effort accumulated since the birth of the World Wide Web. They can be rational or irrational, but they reflect our most up–to–date preferences. Google’s PageRank algorithm is a clever leverage of this vast yet largely untapped gold mine of human effort. By calculating the overall popularity of each and every Web page, Google avoids the folly of earlier search engines that tried to match human intelligence instead of leveraging existing effort.
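The core idea behind PageRank can be sketched as a power iteration over the link graph, with each page repeatedly redistributing its rank along its outgoing links. The following is a minimal illustration, not Google’s production algorithm; the tiny link graph and page names are invented for the example.

```python
# Minimal PageRank sketch: power iteration over a small, invented link graph.
# Each page's rank is an endorsement distilled from incoming hyperlinks.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

graph = {
    "apple.example": ["news.example"],
    "news.example": ["apple.example", "orchard.example"],
    "orchard.example": ["apple.example"],
}
ranks = pagerank(graph)
# The page with the most incoming endorsements accumulates the highest rank.
```

The point is that no page’s content is interpreted at all: the ranking falls out of the accumulated linking effort of others.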
Another clever leverage of human efforts is the collaborative filtering technology being implemented in recommendation agents on popular Web portals like Amazon and projects such as MovieLens (http://www.movielens.org/login) created by GroupLens Research at the University of Minnesota.
These sorts of agents use existing human preference information to recommend new products. Like the Google search engine, they do not try to make sense out of such preferences but merely use them.
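A minimal user–based collaborative filtering sketch makes the point concrete: the agent scores unseen items by the ratings of similar users, never attempting to interpret why those users liked them. The users, items, and ratings below are invented for illustration.

```python
# Minimal user-based collaborative filtering sketch with invented ratings.
# Items are recommended via users whose past ratings resemble yours,
# with no attempt to make sense of the preferences themselves.
from math import sqrt

ratings = {
    "ann":  {"matrix": 5, "amelie": 1, "alien": 4},
    "bob":  {"matrix": 4, "amelie": 2, "alien": 5, "heat": 4},
    "cara": {"matrix": 1, "amelie": 5, "heat": 2},
}

def similarity(a, b):
    """Cosine similarity over the items two users both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score items the user has not rated by similarity-weighted ratings."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for item, r in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
                weights[item] = weights.get(item, 0.0) + sim
    return {i: scores[i] / weights[i] for i in scores if weights[i] > 0}
```

Here `recommend("ann")` scores “heat” highly because Bob, whose tastes closely match Ann’s, rated it well, even though the agent knows nothing about what “heat” is.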
Agent design can also leverage human effort on demand. The Mechanical Turk program (https://www.mturk.com/mturk/welcome) by Amazon employed thousands of online users to perform picture tagging and other human intelligence tasks (HITs) that cannot be done by artificial intelligence. Amazon then integrates these HITs into Web services, making them transparent to service requesters.
Motivation can also be integrated into agent design to achieve ingenious solutions. When the first widely recognized shopbot, BargainFinder, was launched, it used data–wrapping technology: the agent crawled several vendors’ Web sites and extracted price information. The inaccuracy of this technique was obvious, and it became a major challenge for later optimization efforts. It did not take long, however, for comparison–shopping service providers to discover that small online vendors have an incentive to increase sales by having shopbots list their products. A change of design, from extracting information from online vendors to having vendors themselves feed data into shopbots, not only resolved the problem of data accuracy but also increased choice for online shoppers. Soon shopbot–enabled comparison shopping became the third most popular B2C e–commerce model, right after online retailing and online auctions.
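The design shift can be sketched as follows: instead of scraping and parsing vendor pages, the shopbot exposes an ingestion point that motivated vendors feed structured records into. The field names, validation rules, and vendor names here are invented for illustration.

```python
# Sketch of the feed-based shopbot design described above: vendors push
# structured product records instead of being scraped. Schema is invented.

catalog = {}  # product name -> list of (vendor, price) offers

def ingest_feed(vendor, feed):
    """Accept a vendor-supplied feed of {'name', 'price'} records."""
    for item in feed:
        if not item.get("name") or item.get("price", 0) <= 0:
            continue  # reject malformed records instead of guessing at them
        catalog.setdefault(item["name"], []).append((vendor, item["price"]))

def compare(product):
    """Return offers for a product sorted by price, cheapest first."""
    return sorted(catalog.get(product, []), key=lambda offer: offer[1])

ingest_feed("shopA", [{"name": "usb cable", "price": 4.99}])
ingest_feed("shopB", [{"name": "usb cable", "price": 3.49}])
```

Accuracy now rests with the party motivated to keep it accurate: a vendor whose feed is stale simply loses the sale.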
The fundamental principle in utilizing human efforts might be best described as the least effort principle.
The least effort principle was first observed and named by the late Harvard linguist George Zipf (1949) in his study of human language. He found that, when we speak, we minimize the number of words we utter to the point where the listener can understand our meaning only by considering the context. This, according to him, reflects our evolutionary tendency to conserve energy, that is, to use the least effort in communication.
This principle has been observed in almost all human activities. In particular, library science researchers found that when we search for information, we tend to use the most convenient search method in the least exacting mode available, and we stop searching the moment we obtain a minimally acceptable result. Information systems research further confirmed this behavior: in a series of experiments, subjects provided with different decision support functionalities always chose the function that required the least effort to get a result, declining the more effortful function even when it could yield better results (Todd and Benbasat, 1994). Perhaps the most insightful explanation of the least effort principle in decision–making research comes from the Nobel laureate Herbert Simon (1955). In his ground–breaking work on the bounded rationality of human beings, he called this behavior satisficing decision–making, in contrast to the optimizing decision–making idealized in classical decision theory.
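The contrast between optimizing and satisficing can be made concrete with a toy search over a list of price offers. The prices and the aspiration threshold below are invented for the example.

```python
# Toy contrast between optimizing and satisficing search over a list of
# offers (invented prices). The satisficer stops at the first acceptable
# result; the optimizer must inspect every option.

offers = [19.99, 14.50, 17.25, 12.00, 16.80]

def optimize(options):
    """Examine every option and return the best (lowest price)."""
    return min(options)

def satisfice(options, acceptable=15.00):
    """Return the first option that is 'good enough', in arrival order."""
    for price in options:
        if price <= acceptable:
            return price
    return None  # nothing met the aspiration level

# The satisficer accepts 14.50 after two comparisons; the optimizer
# scans all five offers to find 12.00.
```

The gap between the two answers (14.50 versus 12.00) is the price of saved effort, which Simon argued people willingly pay.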
The least effort principle gives us a relatively universal criterion for selecting among design options, and it is in line with the simplicity principle in interface design championed by Jef Raskin (2000).
Court rulings have a significant impact on agent design, especially in an agent’s nascent stage. It is therefore important for agent designers to consider the potential legal implications of their designs.
Web–based agents are information brokers. Any new brokerage model could potentially change the equilibrium and re–adjust the interests of existing stakeholders. This potential threat to the interests of stakeholders could, in many cases, lead to lawsuits. Once a court ruling is issued, the favored agent design would likely become prosperous while those less favored designs would lose momentum. Because of the butterfly effect, even a slight legal impact in the early stage of agent development could significantly affect later evolution and future market structure.
The development path of P2P technology is a prominent case. In the late 1990s, online music sharing became much easier with the emergence of Napster, the first generation P2P agent that provided an indexing service for MP3 songs distributed on participants’ storage devices. Its popularity posed a direct threat to the music recording industry. Thus, the Recording Industry Association of America (RIAA) launched a successful lawsuit in 1999 against this service.
Partly because of this ruling, the next generation of P2P search systems abandoned the central index server design: Gnutella adopted fully decentralized query flooding, and later systems used distributed hash tables, which allowed them to avoid being directly targeted by the RIAA and others on piracy charges.
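The idea behind a distributed hash table can be sketched with consistent hashing: keys and peers map onto the same hash ring, and the responsible peer is computed locally by anyone, with no central index to shut down. This is a toy sketch, not the actual protocol of any deployed system (Chord and Kademlia are the canonical designs); the peer names and file key are invented.

```python
# Toy sketch of decentralized key lookup via consistent hashing, the core
# idea of distributed hash tables (DHTs) such as Chord or Kademlia.
# No central server holds the index: every participant can compute which
# peer owns a key from the key alone.
import hashlib

RING_SIZE = 2 ** 16

def ring_position(name):
    """Map a peer ID or file key to a position on the hash ring."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % RING_SIZE

# Invented peer IDs, ordered by their position on the ring.
peers = sorted(["peer-a", "peer-b", "peer-c", "peer-d"], key=ring_position)

def responsible_peer(key):
    """The first peer clockwise from the key's position owns the key."""
    pos = ring_position(key)
    for peer in peers:
        if ring_position(peer) >= pos:
            return peer
    return peers[0]  # wrap around the ring

# Any node can resolve "song.mp3" without consulting a central index,
# so there is no single indexing service to sue out of existence.
owner = responsible_peer("song.mp3")
```

The legal consequence of the architecture is exactly the point: liability concentrated in Napster’s central index has no analogue here.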
A more complex case is the different development paths of shopbots and metabots because of several court rulings, such as eBay, Inc. v. Bidder’s Edge, Inc. (2000) and mySimon v. Priceman (1999).
In both lawsuits, a metabot tried to extract data from a shopbot while the shopbot refused to share the data. In the eBay, Inc. v. Bidder’s Edge, Inc. case, Bidder’s Edge was a metabot that collected price information from multiple auction sites, including eBay. The popularity of Bidder’s Edge made eBay uncomfortable and caused eBay to consider Bidder’s Edge a threat to its own business model. Eventually, a prolonged lawsuit sent Bidder’s Edge out of business.
The case between mySimon and Priceman is more straightforward. According to mySimon, Priceman, a then Houston–based metabot that searched other shopbots, especially mySimon, retrieved a large amount of product information from the mySimon site without permission. Priceman suspended operations when mySimon filed the lawsuit and closed permanently in 2000.
In both cases, the metabots were severely affected by copyright regulations, while the shopbots avoided such charges by claiming the originality of their data, even though both kinds of agent extract data from other sites. As a result, metabots have remained far less popular in online retailing than shopbots.
Sociologists, such as DiMaggio and Powell (1983), found that organizations in similar social and economic environments share a high similarity in their processes, structures, and other aspects due to imitation or independent development under similar constraints. This phenomenon is called institutional isomorphism.
The social aspects of agent design could probably be studied more systematically using theories from sociology, because Web–based software agents all “live” in the same Web environment, share similar profit–making goals, serve the same customers, and adapt to similar pressures from human behavior. Moreover, like an organization, an agent, however rationally designed, by no means develops along a single, isolated track. Its development is always the outcome of collective decisions constrained by peers and environment.
Thus, agent isomorphism deserves further exploration as a fresh perspective for designing better agents. Table 1 summarizes these social influences on agents.
Table 1: Summary of social aspects in agent design.

| Social aspect | Influence | Frame of reference | Implications for agent design |
| --- | --- | --- | --- |
| Peer influences | Conformity in structure and interface | Institutionalization | Use an iterated approach in agent design; make the overall agent structure flexible for change |
| Human efforts and motivations: users | Effort minimization | Least effort principle | Tap into the existing efforts of the Web; integrate user collaboration mechanisms and side contributions in design |
| Human efforts and motivations: service providers | Profit maximization | Cost–benefit analysis | Profit–conscious design; use loosely coupled design to allow human computation integration |
| Legal impacts | Regulations | Laws and court rulings | Lawsuit–proof design |
Web–based software agents will become increasingly important in our daily lives. Currently, there is much speculation and experimentation around semantic Web technologies, such as the semantic shopbots discussed in Internet Computing (Fasli, 2006). Needless to say, a Web–based software agent on the semantic Web could perform wonderful tasks that we can currently hardly imagine (Berners–Lee, et al., 2001). However, concerns such as those Hendler (2007) raised in a recent issue of Intelligent Systems demonstrate the chasm between expectations and reality. It is path dependence, not meticulous planning, that determines the future of an evolving technology. Ad hoc designs like the QWERTY keyboard persist today simply because the cost of replacement is too large and the inconvenience to individual users comparatively bearable. Only by considering and integrating their social aspects will we be able to develop truly useful Web–based software agents.
About the author
Yun Wan is an assistant professor at the University of Houston — Victoria. His research mainly focuses on Web–based software agents in B2C electronic commerce. He has published in the Communications of the ACM, IEEE Internet Applications, etc. His edited book Comparison–shopping services and agent designs was published in 2009.
E–mail: wany [at] uhv [dot] edu
T. Berners–Lee, J. Hendler and O. Lassila, 2001. “The semantic Web,” Scientific American, volume 284, number 5, pp. 34–44.
P.J. DiMaggio and W.W. Powell, 1983. “The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields,” American Sociological Review, volume 48, number 2, pp. 147–160.
O. Etzioni, 1997. “Moving up the information food chain: Deploying softbots on the World Wide Web,” AI Magazine, volume 18, number 2, pp. 11–18.
M. Fasli, 2006. “Shopbots: A syntactic present, a semantic future,” IEEE Internet Computing, volume 10, number 6, pp. 69–75.
J. Hendler, 2007. “Where are all the intelligent agents?,” IEEE Intelligent Systems, volume 22, number 3, pp. 2–3.
J.O. Kephart and A.R. Greenwald, 2000. “When bots collide,” Harvard Business Review (1 July), pp. 17–18.
P. Maes, 1994. “Agents that reduce work and information overload,” Communications of the ACM, volume 37, number 7, pp. 30–40.
J. Raskin, 2000. The humane interface: New directions for designing interactive systems. Reading, Mass.: Addison–Wesley.
H.A. Simon, 1955. “A behavioral model of rational choice,” Quarterly Journal of Economics, volume 69, number 1, pp. 99–118.
P. Todd and I. Benbasat, 1994. “The influence of decision aids on choice strategies under conditions of high cognitive load,” IEEE Transactions on Systems, Man, and Cybernetics, volume 24, number 4, pp. 537–547.
G.K. Zipf, 1949. Human behavior and the principle of least effort: An introduction to human ecology. Cambridge, Mass.: Addison–Wesley Press.
Paper received 22 December 2008; accepted 9 June 2009.
This work is licensed under a Creative Commons Attribution–Noncommercial–No Derivative Works 3.0 Unported License.
Social aspects of agent design
by Yun Wan.
First Monday, Volume 14, Number 7 - 6 July 2009