geosimulation :: innovative geospatial simulation and analysis by innovative people

Dr. Paul M. Torrens, Center for Urban Science + Progress, New York University

Background material on the following topics is available on this site:

Cities are excellent examples of complex adaptive systems, and they display all of the signature characteristics of complexity.

Complexity and emergence
The idea of complexity hinges on the notion of emergence. In emergent systems, a small number of rules or laws, applied at a local level among many objects or agents, is capable of generating surprising complexity in aggregate form. These patterns manifest in such a way that the actions of the parts do not simply sum to the activity of the whole; there is more going on in the dynamics of the system than the aggregation of little pieces into larger units. The complex systems that are generated are not simply random or chaotic: recognizable and ordered features can emerge. These systems are also dynamic, changing over time, and their dynamics often operate without the direction of a centralized executive. Examples of emergent systems abound. The liquidity of water, for instance, is more than a simple extrapolation of the characteristics of individual water molecules, which have no liquid quality of their own. Similarly, in economics, the activity of individual market participants, trading without centralized control, often leads to aggregate outcomes that are relatively efficient, as if they had been coordinated from above.
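
Cellular automata (CA), which come up again later on this page, are among the simplest systems in which this kind of emergence can be watched directly. The Python sketch below is illustrative only (Wolfram's Rule 30 is assumed as the example rule; nothing in it is specific to the models on this site): every cell obeys the same three-neighbor rule, yet an intricate aggregate pattern unfolds from a single seeded cell.

```python
# A one-dimensional cellular automaton: one local rule, many cells.
# Wolfram's Rule 30 is assumed here purely as an example.
RULE = 30  # the rule number encodes the output bit for each 3-cell neighborhood

def step(cells):
    """Apply the rule to every cell using its left and right neighbors (wrapping)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # a single "on" cell in the middle

for _ in range(16):  # an intricate triangular pattern emerges in aggregate
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```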

Artificial Life
The field of artificial life (or A-life) is concerned with human-made systems that exhibit life-like characteristics and behaviors. As well as exploring the possibility of creating artificial life, the field seeks to understand natural life by abstracting the fundamental dynamical principles that underpin biological phenomena; in short, it is a search for the rules that make life possible. A-life has close parallels with complexity studies. Life and living organisms represent some of the best examples of complex adaptive systems: very simple genetic rules, applied across many cells, spawn advanced biological structures and organisms. The quest for artificial life is pursued by replicating the complex dynamics of living systems in other physical media, making them accessible to new kinds of experimentation and testing, in the hope that some of the principles that govern real life can be uncovered in the process. An obvious candidate for such a laboratory is the computer.

Reductionism versus synthetics
This detailed, bottom-up approach to complexity is, in some senses, a relatively new way of approaching scientific inquiry. Much research in the social sciences, and particularly in geography, is challenged by a dichotomy between the individual (a person, a household, an independent object) and the aggregate (populations, collectives, and regions). In a spatial sense, researchers have been confronted with the dilemma of reconciling patterns and processes that operate and manifest at local scales with those at larger scales. This can be considered a problem of ecological fallacy.


An ecological fallacy occurs when it is inferred that results based on aggregate data can be applied to the individuals who form the aggregated group. A related problem in geography is the Modifiable Areal Unit Problem. Of course, there are many examples in which aggregate forms may be extrapolated from the individual. However, reconciling the two often poses a challenge, particularly when processes that operate at the local level are interdependent, i.e., the actions of one individual depend on the actions of another individual. In these cases, an understanding of the processes that generate macro-scale patterns may not be easily gleaned by simply aggregating up from the individual; what is needed instead is an understanding of the interactive dynamics that link local-scale and larger-scale phenomena. This is an argument of reductionism versus synthetics.
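
A toy dataset makes the ecological fallacy concrete. The Python sketch below uses entirely synthetic numbers, invented for illustration: within each group the individual-level relationship between x and y is negative, yet a comparison of group averages would suggest a positive one.

```python
# Entirely synthetic illustration of the ecological fallacy.
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation, mean

group_a = [(1, 5), (2, 4), (3, 3)]     # within the group, y falls as x rises
group_b = [(7, 12), (8, 11), (9, 10)]  # same negative relationship

for name, group in (("A", group_a), ("B", group_b)):
    xs, ys = zip(*group)
    print(f"group {name}: individual-level r = {correlation(xs, ys):.2f}")  # -1.00

# The aggregate view reduces each group to a single (mean x, mean y) point.
means = [(mean(x for x, _ in g), mean(y for _, y in g)) for g in (group_a, group_b)]
print("group means:", means)  # [(2, 4), (8, 11)]
# Group B has both the higher mean x and the higher mean y, so an analyst
# working only with aggregates might wrongly infer that y rises with x,
# the opposite of the individual-level relationship.
```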


The reductionist approach analyzes problems by breaking them down into their constituent components, reducing them to manageable pieces and gaining an understanding of each in the process. In some cases this works quite well, and for many phenomena the technique is wholly appropriate, particularly where the whole really is the sum of many small parts. However, reductionism is flawed in that it may miss the emergent properties of a system: those that arise as a by-product of the interactive dynamics of individual elements. In many instances, a synthetic approach may be more appropriate.


In the context of this discussion, the synthetic or generative approach involves studying phenomena by experimenting with simple rules for behavior and allowing constituent components to interact, dynamically, until macro-scale phenomena emerge: a piecing together rather than a dissection. This is what happens in our own bodies. The rules encoded in our DNA specify a set of behaviors for the development of our biology over time. The products of that interactive development at a genetic level are macro-scale structures (organs, systems, and traits) that bear little resemblance to the original components of our DNA. The central nervous system, for example, is significantly more complicated than the arrangement of bits of guanine, adenine, thymine, and cytosine along a genome. Researchers are increasingly adopting synthetic approaches to the study of phenomena, particularly in studying life, where it has been noted that, "Reductionism does not work with complex systems, and it is now clear that a purely reductionist approach cannot be applied when studying life: in living systems the whole is more than the sum of its parts." (Steven Levy, Artificial Life) These methodologies are also extending into other fields, including the social sciences and urban studies.

Why treat cities as complex systems?
There are many reasons why we might transfer these ideas to our understanding and conceptualization of cities. From the local-scale interactive behavior (commuting, moving) of many individual objects (vehicles, people), structured and ordered patterns emerge in the aggregate, such as peak-hour traffic congestion and the large-scale spatial clustering of socioeconomic groups by residence. In urban economics, large-scale economies of agglomeration and disagglomeration have long been understood to operate through local-scale interactive dynamics. Cities also exhibit several of the signature characteristics of complexity, including fractal dimensionality and self-similarity across scales, self-organization, and emergence.
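
The residential clustering mentioned above is the classic case in point. The sketch below is a minimal Schelling-style segregation model, offered as an assumed illustration rather than as any model described on this page (the grid size, vacancy rate, and tolerance threshold are arbitrary): agents with only a mild preference for similar neighbors, relocating one at a time, generate strongly clustered neighborhoods in aggregate.

```python
import random

SIZE, VACANCY, SIMILAR_WANTED = 20, 0.1, 0.3  # all three values are arbitrary

# Two groups (1 and 2) scattered over a grid; 0 marks a vacant cell.
grid = [[0 if random.random() < VACANCY else random.choice([1, 2])
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """True if too few of the agent's neighbors belong to its own group."""
    me = grid[r][c]
    if me == 0:
        return False
    same = occupied = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            neighbor = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if neighbor:
                occupied += 1
                same += neighbor == me
    return occupied > 0 and same / occupied < SIMILAR_WANTED

# Repeatedly pick a random agent; if it is unhappy, move it to a random vacancy.
for _ in range(50_000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(r, c):
        vacancies = [(i, j) for i in range(SIZE)
                     for j in range(SIZE) if grid[i][j] == 0]
        i, j = random.choice(vacancies)
        grid[i][j], grid[r][c] = grid[r][c], 0

for row in grid:  # clustered blocks of X and O emerge from mild preferences
    print("".join(".XO"[cell] for cell in row))
```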

Criticisms of complexity
Complexity studies are in their infancy as an academic discipline, but they have recently drawn relatively heavy criticism, perhaps as a by-product of the attention afforded the field and its pioneers in popular science journalism and publishing. In particular, there have been accusations that a gap exists between the 'rhetoric' of complexity studies and reality. The reaction against complexity studies is really multifaceted. The field has been criticized for harboring a 'reminiscence syndrome'. There has also been a backlash against the claim of some complexity researchers, particularly those at the flagship Santa Fe Institute, that complexity can offer a unified theory of everything. Moreover, there are growing concerns that the techniques the field offers up for the study of complexity are even more complicated than the phenomena they purport to represent; that researchers are moving from complexity to perplexity.


What was once thought to be the great strength of complexity has turned into one of its chief criticisms. The intuitive sense that the idea of complexity conjures owes a great deal to the idea of reminiscence: "Look, isn't this reminiscent of a biological or physical phenomenon?" (Jack D. Cowan, co-founder of the Santa Fe Institute, quoted in Scientific American, 1995). Reminiscence criticisms accuse researchers of yielding to the "seductive syllogism" of complexity, particularly in the use of computer-based models. Just because the dynamic activity displayed in a computer model resembles a real-life process does not necessarily mean that it is a good model of that phenomenon. Researchers may assume that reminiscence alone justifies a modeling paradigm, when really that reminiscence may be accidental, coincidental, or a construct of the researcher's own ideas. Others would counter that while complexity may be guilty of reminiscence, the mechanisms of processing in naturally occurring complex systems are very like those in computers, and particularly in cellular automata (CA).


One of the goals of complexity studies is to abstract simple features of complex behavior that are common across a wide range of systems, and perhaps to devise universal laws of complex systems from those common principles. As Stephen Wolfram puts it, "To discover and analyze the mathematical basis for the generation of complexity, one must identify simple mathematical systems that capture the essence of the process." Wolfram goes on to speculate that universal laws analogous to the laws of thermodynamics might be discovered for complex systems. However, there has been a backlash against the claims for a unifying theory of complex systems. Contrast Wolfram's sentiments with those of John Casti, expressed in the introduction to his book Would-Be Worlds: "it's really a pity that this book is not crammed full of mathematical arcana, since if it were it could only mean that we had something that looked like a decent theory of complex systems. In fact, we are not even close." There are two justifications for doubting our ability to arrive at universal laws of complexity, both of which center on the use of computers to explore complex phenomena. The first relates to the fact that some problems are not computable. The second centers on a belief that complexity models may be more complex in themselves than the phenomena they are trying to simulate.


By their very nature, computing machines are rule-following devices; yet there is no reason to believe that all processes in the natural world are rule-based. Some processes in the natural and physical worlds, and many complex systems, may not be computable. In geography and urban planning, the introduction of simulation techniques from complexity studies was greeted with suspicion. In particular, researchers feared that tinkering with the simple formalisms of techniques such as CA, in order to better tailor them to simulating geographic phenomena, might yield model structures as complicated as the realities they were designed to represent. In a true simulation model, the inputs and states of the real-world object must be encoded in the states of the simulated phenomena; consequently, the simulation must have more states than the real-world object, and so must by necessity be more complicated than the thing(s) being simulated. The danger is that in designing accurate models of complex systems, we may end up with simulations that can be no better understood than the systems they simulate.


The criticisms of complexity are appropriate in many instances. Yet, to reject complexity outright at this stage would be unwise; the field has a lot to offer. The important message is that complexity has relevance to many systems, but not to all. This is also true in the context of the city. Many urban systems lend themselves to the complexity approach, but others, especially those that operate from the top down, really do not. Nevertheless, the approach provides a rich environment for understanding how systems work dynamically and interactively, as well as offering some innovative techniques for simulating such phenomena.

 

Projects >>

Dynamic physics for built infrastructure

Moving agents through space and time

Modeling riots

Validating agent-based models

Machine-learning behavioral geography

Accelerating agent-based models

Megacity futures

Immersive modeling

Space-time GIS and analysis

A toolkit for measuring sprawl

Modeling time, space, and behavior

Simulating crowd behavior