A Positive Theory of Emergence for Reactive Multi-Agent Systems
Rob Axtell

Certain usage of the term "emergence" in the multi-agent systems (MAS) and
complex adaptive systems (CAS) literatures is formalized. Specifically, this
talk will focus on so-called "reactive" MAS, where agents do a limited amount of
planning (are myopic) while groping for utility improvements through "best
reply" type behavior. Such agents are commonly employed in evolutionary game
theory and adaptive agent economics. Descriptions of behaviors in such systems
are often given at multiple levels: at the level of the individual components of
the system (agent or micro-level) as well as at some higher level involving many
system components (social or macro-level). Emergent properties are ones that
obtain at one level while being absent at the other. A formal way to relate
multi-level descriptions of a phenomenon is given by the theory of model
aggregation, in which either the micro- or the macroscopic description is given
and one synthesizes the other representation with respect to some metric. A typical result for
the "forward problem"--i.e., creating an aggregate model from an exact
microscopic one--is that the aggregate model either fails to exist or is not
unique. The concept of an "emergent" property of a system falls out naturally
from considerations of model aggregation, and is distinguished from simple
"resultant" properties. Two distinct types of emergent properties are
identified. Examples of emergent properties in human social systems are given.
In particular, it is argued that certain social structures (e.g., social norms,
markets, firms, institutions) are best understood as emergent. When alternative
social structures can be welfare (Pareto) ranked, then we are in a position to
quantify the collective intelligence of social systems.
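
One minimal way to state the aggregation condition invoked above (a sketch of the
standard commutation criterion; the talk's own formalism may differ):

```latex
% Micro-level dynamics on state space X; the macro description is obtained via
% an aggregation (coarse-graining) map \pi : X \to Y.
x_{t+1} = f(x_t), \qquad y_t = \pi(x_t)

% An exact aggregate model is a macro map g : Y \to Y that commutes with the
% micro dynamics:
\pi \circ f = g \circ \pi

% The "forward problem" asks for g given f and \pi. When no exact g exists, one
% can only look for a best approximation under some metric d,
g^{*} \in \arg\min_{g} \, d\left(\pi \circ f,\; g \circ \pi\right),

% which may fail to exist or fail to be unique -- precisely the situations in
% which macro-level descriptions carry genuinely emergent content.
```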

Learning is difficult, mostly about things we don't know much about.
In this talk I will delve, from my biology viewpoint, into the main
reasons to suspect we don't quite know the fundamentals about how
collectives of neurons learn, remember and forget.

Collective Cognition Workshop Welcome: Focus and Goals
Jim Crutchfield

An overview of open questions and goals for the workshop week.

A Simple Model for the Formation of a Complex Organism
Barbara Drossel

Complex organisms, such as ecological systems or human
civilization, have a hierarchy of organizational levels, with units on
each level interacting to form larger units. Such a structure results
naturally if one assumes that individuals can communicate and
specialize, thereby obtaining an increase in productivity. The cost of
communication limits the size of a unit, and favors communication
between units, and so on, resulting in the hierarchical structure just
described. My talk presents a simple model for the formation of such
an organism. Using simple mathematical expressions for the
productivity and communication cost as functions of group size, and
optimizing the net productivity, one obtains a hierarchical structure
with high productivity. This result holds even when the optimization is
not done globally, but when individuals or other units follow only a
local optimization rule.
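
One illustrative choice of functional forms that produces the trade-off described
above (an assumption for concreteness; the talk's model may use different
expressions):

```latex
% Net productivity of a unit of n individuals: specialization raises output
% superlinearly, while pairwise communication costs grow roughly as n^2.
P_{\mathrm{net}}(n) = a\,n^{\alpha} - c\,\frac{n(n-1)}{2}, \qquad 1 < \alpha < 2

% The optimal unit size n^* maximizes P_net. Beyond n^* it pays to stop growing
% the unit and instead let units of size about n^* communicate with one another,
% forming a second-level unit subject to the same trade-off, and so on --
% yielding the hierarchical structure.
```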

The Behavior of Computational Ecologies
Tad Hogg

A computational ecosystem is a form of distributed computation whose agents
have incomplete and imperfect information on the state of the system.
Examples include interacting programs distributed on a network and teams of
robots interacting with the physical world. I'll describe a model for such
systems and an instantiation based on market mechanisms. When agents choose
among several resources, the system dynamics can be oscillatory or chaotic.
As an application of this model, I'll describe a simple, local control
mechanism for achieving global stability in an otherwise chaotic system.
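
A minimal sketch of the kind of dynamics described above, with illustrative
(assumed) payoff functions, delays and update rates rather than the model's
actual ones: agents repeatedly choose between two resources using payoff
information that is delayed and noisy, and the allocation can overshoot and
oscillate instead of settling down.

```python
import random

N = 100          # number of agents
TAU = 5          # information delay (steps)
NOISE = 0.5      # imperfect information: noise added to observed payoffs
STEPS = 200

def payoff(n_users, capacity=60.0):
    """Payoff of a resource declines as more agents use it."""
    return capacity - n_users

choices = [random.randint(0, 1) for _ in range(N)]   # current resource of each agent
history = []                                          # per-step count of agents on resource 0

for t in range(STEPS):
    n0 = choices.count(0)
    history.append(n0)
    # Agents see the state of the system TAU steps ago, not the current one.
    n0_seen = history[max(0, t - TAU)]
    n1_seen = N - n0_seen
    for i in range(N):
        if random.random() < 0.2:                     # asynchronous re-evaluation
            p0 = payoff(n0_seen) + random.gauss(0, NOISE)
            p1 = payoff(n1_seen) + random.gauss(0, NOISE)
            choices[i] = 0 if p0 > p1 else 1

# With a long delay, the fraction of agents on each resource tends to
# overshoot and oscillate rather than settle at the balanced allocation.
print(history[-20:])
```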

Anatomy of Extreme Events in a Complex Adaptive System Comprising Competing Agents
Neil Johnson

The proposition that data-rich real-world systems such as financial markets
provide a novel laboratory for studying collective phenomena in complex
systems has generated much excitement recently. Of particular interest in
such systems is the occurrence of large, endogenous changes or 'extreme
events' -- the so-called drawups, drawdowns, corrections and crashes -- which
arise more frequently than would be expected from a random-walk model of the
underlying price dynamics.
Here we attempt to examine the dynamics associated with such large changes.
In particular we examine the microscopic and macroscopic features associated
with extreme events arising in artificial multi-agent markets [1]. The
extreme events comprise large changes in the price (and/or volume) and mimic
the drawups, drawdowns and crashes observed in the finance world. The
artificial markets considered are based on variants of multi-agent games
considered before in the literature [2][3] in which the number of traders,
and the strategies which they use, evolve in time.
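
For readers unfamiliar with the underlying game, the following is a minimal
minority-game sketch in the spirit of [2][3] (a fixed strategy pool and a fixed
number of traders here, so it omits the evolution of strategies and of the trader
population described above):

```python
import random

N, M, S = 101, 3, 2              # agents, memory length, strategies per agent
STEPS = 500
HISTORY_SPACE = 2 ** M

# A strategy maps each possible recent history (an integer in [0, 2^M)) to a
# bid of +1 (buy) or -1 (sell).
def random_strategy():
    return [random.choice((-1, 1)) for _ in range(HISTORY_SPACE)]

strategies = [[random_strategy() for _ in range(S)] for _ in range(N)]
scores = [[0] * S for _ in range(N)]
history = random.randrange(HISTORY_SPACE)
attendance = []                  # aggregate bid A(t); large |A| ~ large price move

for t in range(STEPS):
    # Each agent plays its currently best-scoring strategy.
    bids = []
    for i in range(N):
        best = max(range(S), key=lambda k: scores[i][k])
        bids.append(strategies[i][best][history])
    A = sum(bids)
    attendance.append(A)
    minority_action = -1 if A > 0 else 1
    # Reward every strategy that would have placed the agent in the minority.
    for i in range(N):
        for k in range(S):
            if strategies[i][k][history] == minority_action:
                scores[i][k] += 1
            else:
                scores[i][k] -= 1
    # Update the public history with the winning (minority) outcome.
    history = ((history << 1) | (1 if minority_action == 1 else 0)) % HISTORY_SPACE

print(max(abs(a) for a in attendance), "= largest swing in aggregate action")
```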
First we focus on extreme events which are endogenous (i.e.
internally-produced by the market participants themselves). Particular
attention is paid to the extent to which the observed crashes are universal,
or whether there is in fact a taxonomy of such crashes. In addition, we
examine the microscopic effects related to crowd/anticrowd dynamics, wealth
distributions, and 'market confidence' leading up to the crashes. The
possible existence of 'crash precursors' is discussed.
We also explore the effect of revisiting a given crash under a variety of
different market conditions. In particular we are interested in the extent
to which the crash could have been avoided at the time by using some form of
macroscopic or microscopic controls, or whether "history always repeats
itself." The effects of various regulatory actions are explored and their
implications discussed.
Finally the effectiveness of regulatory measures in controlling the
after-shocks of an exogenous extreme event (i.e. an externally-produced
global shock to the markets) is analyzed within the same framework.
[1] D. Lamper, S. Howison and N.F. Johnson, Phys. Rev. Lett. 88, 017902
(2002).
[2] P. Jefferies, M.L. Hart, P.M. Hui, N.F. Johnson, Eur. Phys. J. B 20,
493 (2001); N.F. Johnson, M. Hart, P.M. Hui and D. Zheng, Int. J. of Theor.
and Appl. Fin. 3, 443 (2000).
[3] N.F. Johnson, P.M. Hui, R. Jonson and T.S. Lo, Phys. Rev. Lett. 82,
3360 (1999).

How Smart Does an Agent Need to Be?
Scott Kirkpatrick

The classic distributed computation is done by atoms,
molecules, or spins in vast numbers, each equipped
with nothing more than knowledge of their immediate
neighborhood and the rules of statistical mechanics.
Such agents, 10^23 or more of them, are able to form
liquids and solids from gases, realize extremely complex
ordered states, such as liquid crystals, and even
decode encrypted messages. I'll describe a study done
for a sensor-array "challenge problem" in which we
have based our approach on old-fashioned simulated
annealing to accomplish target acquisition and tracking.
We believe the many additional constraints that occur
in the real problem can be folded, step by step, into
this stochastic approach. The results have obvious
applicability to cellular phone handoff as well.
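
A minimal sketch of the plain simulated-annealing loop underlying this approach,
illustrated on a generic toy assignment cost rather than the actual sensor-array
challenge problem:

```python
import math
import random

def anneal(cost, initial_state, neighbor, t_start=1.0, t_end=1e-3, steps=20000):
    """Plain simulated annealing: accept uphill moves with Boltzmann probability."""
    state, energy = initial_state, cost(initial_state)
    best_state, best_energy = state, energy
    for step in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = neighbor(state)
        delta = cost(candidate) - energy
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state, energy = candidate, energy + delta
            if energy < best_energy:
                best_state, best_energy = state, energy
    return best_state, best_energy

# Toy example: assign 20 "sensors" to 20 "targets" minimizing total distance.
random.seed(0)
targets = [(random.random(), random.random()) for _ in range(20)]
sensors = [(random.random(), random.random()) for _ in range(20)]

def cost(assignment):
    return sum(math.dist(sensors[i], targets[j]) for i, j in enumerate(assignment))

def neighbor(assignment):
    a = assignment[:]
    i, j = random.sample(range(len(a)), 2)   # swap two sensor-target assignments
    a[i], a[j] = a[j], a[i]
    return a

print(anneal(cost, list(range(20)), neighbor)[1])
```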

Aeronautical Applications of Collaborative Optimization and Collective Design
Ilan Kroo

This paper describes some examples of collective behavior in the field of aeronautics.
Applications include large scale, multidisciplinary team design of aircraft systems using
a multi-level optimization strategy termed collaborative optimization, and the coordination
of multiple interacting flight vehicles with a common objective. The engineering design
of complex systems involving many individuals or organizations and the formation flight of geese
share many similar features. In each case, individuals must decide on a course of action
that benefits the system as a whole, even though they act locally and cannot
immediately ascertain the effect of their actions on the entire system. We demonstrate how
multi-level distributed optimization can be used to achieve optimal system performance while
focusing on local degrees of freedom and how a similar approach leads to the optimal V-shaped
flock of geese, even when individual birds seek only to maximize their own local objective.
The "Collective Intelligence" (COIN) framework concerns the design
of collectives of agents so that as those agents strive to maximize
their individual utility functions, their interaction causes a
provided "world" utility function concerning the entire collective
to be also maximized. Here we show how to extend that framework to
scenarios having Markovian dynamics when no re-evolution of the system
from counter-factual initial conditions (often an expensive
calculation) is permitted. Our approach transforms the
(time-extended) argument of each agent's utility function before
evaluating that function. This transformation has benefits in
scenarios not involving Markovian dynamics, in particular scenarios
where not all of the arguments of an agent's utility function are
observable. We investigate this transformation in simulations
involving both linear and quadratic (nonlinear) dynamics. In addition,
we find that a certain subset of these transformations, which result
in utilities that have low "opacity" (analogous to having high signal
to noise) but are not "factored" (analogous to not being incentive
compatible), reliably improves performance over that arising with
factored utilities. We also present a Taylor Series method for the
fully general nonlinear case.
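
For reference, the two properties mentioned above can be sketched as follows
(standard informal definitions from the COIN literature; the talk's precise
utilities and transformations may differ):

```latex
% World utility G(z) on the joint (time-extended) state z; agent \eta has
% private utility g_\eta. The utility is "factored" (incentive-compatible in
% spirit) if a change in \eta's state helps g_\eta exactly when it helps G:
\mathrm{sgn}\!\left[ g_\eta(z) - g_\eta(z') \right]
  = \mathrm{sgn}\!\left[ G(z) - G(z') \right]
  \quad \text{whenever } z, z' \text{ differ only in } \eta\text{'s state.}

% A common factored choice is the difference utility, which clamps \eta's
% state to a fixed reference value in the second term:
g_\eta(z) = G(z) - G(z_{-\eta})

% "Opacity" measures how much the variation in g_\eta is driven by other agents
% rather than by \eta itself (low opacity = high signal-to-noise for \eta's learner).
```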

Three Agents are Better than One: Distributed Control in Multi-Agent Systems
Kristina Lerman

Multi-agent systems using distributed control have several advantages
over systems using alternative designs: they are robust, flexible,
scalable, adaptable and amenable to mathematical analysis. We examine
two distinct distributed control mechanisms for multi-agent systems:
In the first part of the talk we analyze a multi-agent system using
biologically-inspired control in which local interactions among many
simple agents lead to a desirable collective behavior. Our motivating
example is collaboration in a group of simple reactive robots studied
experimentally by Ijspeert et al.* In these experiments robots could
complete their tasks only by collaborating with other robots, and this
collaboration was achieved without explicit communication or
coordination between robots. We use rate equations to describe
collaboration. The rate equation approach is a general tool to study the
dynamics of collective behavior in a system of simple Markov-type
agents, in which an agent's future state depends only on its present
state and not on its past states.
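
In its most generic form (a sketch only; the collaboration model in the talk uses
specific states and transition rates for searching and gripping robots), a rate
equation tracks the average number N_k of agents in each state k of the
underlying Markov process:

```latex
% Average occupation N_k(t) of agent state k; W_{jk} is the transition rate
% from state j to state k (the rates may themselves depend on the occupations,
% making the equations nonlinear).
\frac{dN_k}{dt} = \sum_{j \neq k} \left[ W_{jk}\, N_j - W_{kj}\, N_k \right]
```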
In the second part of the talk, we consider a network of boolean agents
competing for a limited resource whose capacity varies with time. Each
agent has a choice of two strategies, each resulting in a decision to
use the resource or not. The winning strategies, i.e. those that place the
agent in the minority group, are rewarded, and the losing ones are
punished. We study numerically the properties of such a system and
show that for some values of network connectivity the system becomes
extremely efficient and is able to adapt quickly to relatively large
variations in the capacity level.
* A.J. Ijspeert, A. Martinoli, A. Billard, and L. M. Gambardella.
Autonomous Robots, 11(2):149--171, 2001.

Distributed Intelligence in Large Scale Traffic Simulations on Parallel Computers
Kai Nagel

Transportation systems can be seen as displaying meta-intelligence, in
the sense that intelligent actors (travelers) conspire to make the
system function as a whole. In simulations one can model this by
resolving each traveler individually, and giving each traveler rules
according to which she/he generates goals and then attempts to achieve
them. The system as a whole has no goal of its own except to
"function," i.e. to be a metropolitan region where people survive
and stay.
This talk will approach this problem from an extremely pragmatic point
of view -- how can a large scale simulation of such a system be
implemented on a parallel computer? In particular, we concentrate on
Beowulf clusters, which are clusters of regular PCs connected by
a regular, relatively slow local area network. Our working hypothesis is
that the imbalance of the computational system --fast CPUs, slow
communication-- resembles the real system, and that a good simulation
of the real system will take advantage of these parallels. The
presentation will focus both on what is actually implemented and
working, and on future plans.
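
A schematic of that design point, as a pure-Python stand-in for illustration (a
real implementation would partition the road network with a graph partitioner and
exchange boundary traffic via MPI or sockets): each CPU advances only its own
piece of the network, and communication is confined to the vehicles that cross
partition boundaries once per time step.

```python
# Toy domain decomposition: the road network is split into per-CPU partitions;
# each time step is local computation plus a small boundary exchange.

network = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["a"]}   # link -> downstream links
partition = {"a": 0, "b": 0, "c": 1, "d": 1}                 # link -> owning "CPU"
vehicles = {"a": ["v1", "v2"], "b": [], "c": ["v3"], "d": []}

def step(cpu, inbox):
    """Advance all links owned by `cpu`; return vehicles leaving the partition."""
    outbox = []                                      # (destination link, vehicle)
    for link, owner in partition.items():
        if owner != cpu:
            continue
        vehicles[link].extend(v for dest, v in inbox if dest == link)
        moved, staying = vehicles[link][:1], vehicles[link][1:]  # crude capacity of 1
        vehicles[link] = staying
        for v in moved:
            dest = network[link][0]
            if partition[dest] == cpu:
                vehicles[dest].append(v)             # purely local move, no messages
            else:
                outbox.append((dest, v))             # must be shipped to another CPU
    return outbox

# One synchronous time step on a 2-"CPU" cluster: compute locally, then swap
# only the boundary-crossing vehicles (the slow-network part).
messages = {0: [], 1: []}
for t in range(5):
    out0 = step(0, messages[0])
    out1 = step(1, messages[1])
    messages[0], messages[1] = out1, out0            # stand-in for the MPI exchange
print(vehicles)
```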

Dispersion Games: General Definitions and Some Specific Learning Results
Rob Powers

Dispersion games are the generalization of the anti-coordination game to an
arbitrary set of agents and actions. The agents receive their maximal
reward when they choose as dispersed a set of actions as possible. This
class of games models a large number of natural problems including such
examples as division of roles within a team, load balancing in networks, and
niche selection in economic domains. Our work consists of two main
contributions. First, we formally define and characterize several
interesting classes of these games. Second, we consider what strategies a
set of autonomous agents could employ to optimize their performance in a
dispersion game without a central authority or explicit communication. We
analyze the performance and information requirements for strategies drawn
from game theory, machine learning, and special-purpose algorithms.
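
As a flavor of such strategies, here is one naive no-communication rule, a sketch
only and not necessarily one of the specific strategies analyzed in the talk:
agents whose current action is over-subscribed re-randomize, while everyone else
stays put.

```python
import random
from collections import Counter

def play_dispersion_game(n_agents=12, n_actions=12, max_rounds=100, seed=1):
    """Repeated dispersion game: maximal reward when all chosen actions are distinct."""
    rng = random.Random(seed)
    actions = [rng.randrange(n_actions) for _ in range(n_agents)]
    for round_no in range(1, max_rounds + 1):
        counts = Counter(actions)
        if all(c == 1 for c in counts.values()):     # fully dispersed outcome
            return round_no, actions
        # Each agent only needs to know how many others chose its own action.
        actions = [
            a if counts[a] == 1 else rng.randrange(n_actions)
            for a in actions
        ]
    return max_rounds, actions

rounds, final = play_dispersion_game()
print(f"dispersed after {rounds} rounds: {sorted(final)}")
```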

Causal Architecture of Collectives
Cosma Shalizi

Joint work with Jim Crutchfield, Kristina Shalizi, and Marcelo Camperi.

Autonomous Learning Agents in Dynamic, Multiagent Environments: Bidding in Auctions and Playing Soccer
Peter Stone

Autonomous agents are artificial intelligence programs that repeatedly
sense their environment, decide what actions to take so as to satisfy
their goals, and execute those actions. When situated in dynamic,
multiagent environments, machine learning techniques can be useful for
adjusting agents' policies so that they interact favorably with those
of the other agents. In this talk, I will present two examples of
agents learning in such challenging environments. First, "ATTac" is
an autonomous bidding agent that uses boosting to learn to predict
closing prices of goods in ongoing auctions among several adversarial
bidders. ATTac was the top-scoring agent in the recent Trading Agent
Competition. Second, robotic soccer agents use reinforcement learning
to successfully learn a collaborative "keepaway" task in the RoboCup
soccer simulator. They achieve significantly better performance than
a reasonable hand-coded policy on the same task.

Spreading of Information with Global Currency Value in Highly Networked Societies
Zoltan Toroczkai

Groups and individuals in modern societies are characterized in their
actions by a drive to extremize certain utility functions, such as to
maximize wealth, to minimize risk, maximize safety, etc. This multi-player
"game" naturally leads to a highly constrained, highly dynamic and highly
frustrated system, since the players' actions become strongly correlated
over time. In such a dynamic setting, information can have a global currency
value if it constitutes knowledge that reduces the uncertainty of the next
action for the majority of the players, were they to possess this
information. For example, a major decision or a major change in the
structure of a large corporation is such information. Those privileged to
get this information first will gain the most from it. However, once
this information has spread among sufficiently many players it relaxes to a
valueless state. Using the minority game as a simplified market model, I
study how information of global currency value spreads and diminishes in a
group of N players who form a social network with various random topologies.
In particular I study the effects of random graph, small world and
scale-free graph topologies on information spreading.

Collective Cognition in Market Protocols
Michael Wellman

It is a commonplace observation that markets can (in some
circumstances) compute effective resource allocations in a
decentralized manner, despite highly fragmented information and
sparse communication. We have attempted to harness this power by
application of computational markets to various domains, finding that
this method can be quite successful in convex atomistic environments
for which markets are known to work well. Designing markets for more
general environments presents challenging issues, in the process
raising the interesting question of just what markets can compute.
In developing market protocols for decentralized formation of
supply chains, we identify a sense in which markets can decentralize
arbitrary NP problems, and introduce MarketSAT, a highly distributed
protocol for propositional satisfiability.