Many forms of individual cognition are enhanced by communication and collaboration with other intelligent agents. We propose to call this collective cognition, by analogy with the well-known concept of collective action. People (and other intelligent agents) often ``think better'' in groups and sometimes think in ways that would be simply impossible for isolated individuals. Perhaps the most spectacular and important instance of collective cognition is modern science. An array of formal organizations and informal social institutions can also be considered means of collective cognition. For instance, Hayek famously argued that competitive markets effectively calculate an adaptive allocation of resources that could not be calculated by any individual market participant. Hitherto the study of collective cognition has been qualitative, philosophical, even at times anecdotal. Only recently, we believe, have the tools fallen into place to initiate a rigorous, quantitative science of collective cognition. Moreover, it appears that soon there will be a real practical need for such a science.
Collective cognition involves an interaction among three elements: the individual abilities of the agents, their shared knowledge, and their communication structure. Cognitive collectives therefore resemble many other complex systems that are collectives of goal-directed processes. Typically, the individual processes know little of the detailed dynamics and state of the overall system and therefore must use adaptive techniques to achieve their goals. There are many naturally occurring examples, including human economies, human organizations, ecosystems, and even spin glasses. In addition, it has recently become clear that many of the engineered systems of the future must be of this type, with massively distributed computational elements. There is optimism in the multi-agent system (MAS) field that widely applicable solutions to large, distributed problems are close at hand. Some experts now believe that, in the information and telecommunications networks of today, we have nascent examples of artificial cognitive collectives.
Much scientific work has been done on the ``forward problem'' of deducing the global behavior that ensues from various choices of design parameters for a collective. In the case of collective cognition, the forward problem concerns the evolution over time of the shared knowledge of the agents. This will be a major theme of the conference; we wish, for instance, to know just how analogous to organic evolution this knowledge evolution really is. The other theme of the conference, however, is the less well-understood inverse problem: starting with a desired global behavior for a collective, design the goals of the constituent agents so as to produce that behavior. Much of what is known about this design problem concerns collectives of human beings, e.g., mechanism design in economics. While a crucial starting point, such work is limited by the peculiarities of human psychology, peculiarities which need not be shared by other, perhaps simpler and less intelligent, agents, or simply ones with radically different degrees of freedom.
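One concrete version of this inverse problem, studied in the collective-intelligence and mechanism-design literatures, is to give each agent a ``difference utility'': its private payoff is the global utility minus the global utility computed as if the agent were absent, so selfish local improvement also improves the global objective. The sketch below is only illustrative; the load-balancing objective, the agent and resource counts, and all identifiers are assumptions of ours, not details from the text.

```python
import random

N_AGENTS, N_RESOURCES = 12, 3

def global_utility(assignment):
    # Global objective G: keep resource loads balanced
    # (negated sum of squared loads; maximal at equal loads).
    loads = [assignment.count(r) for r in range(N_RESOURCES)]
    return -sum(load * load for load in loads)

def difference_utility(assignment, agent):
    # Agent's private payoff: G minus G evaluated with this agent
    # removed -- its marginal contribution to the global objective.
    without = assignment[:agent] + assignment[agent + 1:]
    loads = [without.count(r) for r in range(N_RESOURCES)]
    return global_utility(assignment) + sum(load * load for load in loads)

random.seed(0)
assignment = [random.randrange(N_RESOURCES) for _ in range(N_AGENTS)]
g_start = global_utility(assignment)

# Inverse-problem payoff design at work: each agent greedily maximizes
# only its own difference utility, yet G cannot decrease.
for _ in range(10):
    for i in range(N_AGENTS):
        assignment[i] = max(
            range(N_RESOURCES),
            key=lambda r: difference_utility(
                assignment[:i] + [r] + assignment[i + 1:], i))

loads = sorted(assignment.count(r) for r in range(N_RESOURCES))
print(g_start, global_utility(assignment), loads)
```

Because each agent's payoff is its marginal contribution to the global utility, sequential selfish best responses behave like coordinate ascent on that utility and settle at the balanced allocation.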
While it is sometimes possible to hand-craft very small collectives, what is needed is a full-fledged science of collective design. We need to know how to make a large collective work towards a specific goal (e.g., maximize packet throughput in a router, win a soccer game) in a decentralized, adaptive way. Only with such methods can MAS, as the engineering discipline it claims to be, fulfill its promise and meet the very real technical needs of the near future.
Two key properties of any solution to that design problem will be specifically addressed at this workshop: that the private goals of the individual agents be aligned with the global goal, and that the agents be adaptive.
The first property is crucial to modular design for large or inherently distributed problems (e.g., Internet routing, planetary exploration rovers, and satellite constellations). Without it, either each agent will attempt to solve the full problem (which won't succeed in any interesting case) or agents will frustrate one another, preventing the system from finding solutions.
The importance of the second property lies in how the agents interact with one another and the environment. Both the environment and the response of other agents to changes in that environment will modify the ``background'' state one agent perceives before choosing its actions. Thus, it is imperative that the agents adapt. Otherwise, the only way to get satisfactory results is by hand-crafting them in detail, which is simply not an option for large-scale, complex problems. The interaction structure among agents should also adapt, so agents can better exploit opportunities in a changing environment, by (say) forming and dissolving teams.
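As a minimal illustration of the kind of adaptivity meant here, consider a single agent tracking a non-stationary environment with a recency-weighted (constant step size) value estimate and epsilon-greedy action selection. The two-armed setup, the switch time, and all parameter values below are hypothetical choices of ours, not details from the text.

```python
import random

random.seed(1)

def reward(arm, t):
    # Non-stationary environment: which arm is better flips at t = 500.
    if t < 500:
        return 1.0 if arm == 0 else 0.2
    return 0.0 if arm == 0 else 0.8

q = [0.0, 0.0]              # action-value estimates
ALPHA, EPSILON = 0.2, 0.1   # constant step size + exploration rate

for t in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(2)                 # explore occasionally
    else:
        arm = max(range(2), key=lambda a: q[a])   # exploit current estimate
    # Recency-weighted update: old evidence decays geometrically,
    # so the estimate tracks the moving target.
    q[arm] += ALPHA * (reward(arm, t) - q[arm])

print(q)  # the agent's preference has shifted to arm 1 after the switch
```

The constant step size is the design choice that matters: a sample-average learner would average over the pre- and post-switch regimes and adapt far more slowly to the changed ``background'' state.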
Ad hoc, empirical approaches to the design of collective cognition will not get us very far and will leave us at the mercy of poorly understood systems and a raft of anecdotally interpreted simulations. Only a rigorous, quantitative science of collective cognition can provide us with broadly applicable methods of engineering the massive computational systems of the future. Just as important, such a science would provide new tools for investigating naturally occurring collectives, including the ones in which we ourselves participate.
There are many fields that have addressed aspects of collective cognition, from decentralized control theory to economics and game theory to social psychology. However, there are major differences in both the approach such fields take and the set of assumptions that form the basis of those fields. For example, the components of a multi-agent system may have many degrees of freedom that human beings lack, and lack many that human beings possess. Since MAS designers can, to a large extent, choose what degrees of freedom to give their agents, they have more flexibility in choosing policies for agent-agent interactions than (say) economists doing mechanism design. Furthermore, while game theory has established a strong theoretical basis, across several disciplines, for analyzing the equilibrium behavior of systems and how various equilibrium states relate to one another, there is little work on far-from-equilibrium behaviors and their robustness to perturbations.
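For concreteness, the equilibrium analysis that game theory provides can be stated in a few lines for the simplest case: enumerating the pure-strategy Nash equilibria of a two-player matrix game, i.e., the joint actions from which no player gains by deviating alone. The coordination-game payoffs below are hypothetical numbers chosen only for illustration.

```python
import itertools

# Payoff bimatrix (row action, column action) -> (row payoff, column payoff)
# for a two-player coordination game; the numbers are purely illustrative.
PAYOFFS = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def pure_nash(payoffs, n_actions=2):
    equilibria = []
    for r, c in itertools.product(range(n_actions), repeat=2):
        u_row, u_col = payoffs[(r, c)]
        # Nash condition: no player gains by deviating unilaterally.
        row_stable = all(payoffs[(r2, c)][0] <= u_row for r2 in range(n_actions))
        col_stable = all(payoffs[(r, c2)][1] <= u_col for c2 in range(n_actions))
        if row_stable and col_stable:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(PAYOFFS))  # → [(0, 0), (1, 1)]
```

Note what this computation does not say: nothing about which equilibrium adaptive agents actually reach, nor about behavior perturbed away from equilibrium, which is exactly the gap identified above.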
Therefore, neither the direct application nor the simple extension of principles borrowed from existing fields is likely to provide the theoretical tools needed to understand and design the emergence of collective cognition in multi-agent systems. In this workshop we will address the design of systems intended to solve large distributed computational problems, with little or no hand-tailoring, through the collective and adaptive behavior of the agents composing the system.
This workshop is envisioned as a first meeting on the science of cognitive-collective design. As such, the issues the workshop will raise are of interest to a surprisingly diverse array of specialties. In no particular order, the following come to mind:
At the broadest level, we hope to collect results on, and spur work on, the following theoretical issues.
To address these issues, and to stimulate discussion of related ones, we will encourage participation from researchers in the following areas: