Agents, Crowds, Architectures

“For seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life?”
-Thomas Hobbes, Leviathan, 1651.

For sheer uncanny sci-fi weirdness, nothing tops reading the abstracts of the funded projects on the Defense Science Board’s website.  The DSB is the public face of funding for academic research relevant to DARPA – the Defense Advanced Research Projects Agency.  You may know DARPA from such projects as directed heat-ray weaponry, sub-vocalization detection, passive radar, and the Internet.  DARPA is not as cloak-and-dagger as its ominous name suggests; in the defense world, it is the Pentagon’s way of funding those wacky little “off-the-wall” projects that might not otherwise receive support.  You know, those cute little defenseless defense projects?  DARPA likes to call these “strategic technology vectors.”  This year, one of the main strategic vectors being pushed forward by the Pentagon is a field called “agent modeling” or “crowd dynamics.”  DARPA has various terms for this line of research, from “crowd theory” to “human terrain mapping” to “social simulation.”  You can think of it broadly as the science of individual and collective behavior situated in an environment.

Shortly after the US-led invasion of Iraq, members of the US military entrusted with coordinating crowd control and counter-insurgency measures confronted the problem of navigating and intervening in unknown social and political territory.  Models of collective human behavior were thought critical to effective planning and “logistics.”  This is not the first time the Pentagon has decided to focus on a quantitative social science of crowds and collective behavior.  During the Vietnam War, the Pentagon launched an ambitious endeavor called “Project Camelot.”  DARPA’s director, R.L. Sproul, testified before Congress at the time that “it is [our] primary thesis that remote area warfare is controlled in a major way by the environment in which the warfare occurs; by the sociological and anthropological characteristics of the people involved in the war” (McFate, 2005).  Project Camelot was to be piloted in Chile, but it met with such local resistance, and such negative press domestically, that Secretary of Defense Robert McNamara cancelled the program.  Much has happened since Sproul’s time and, as of 2003, this line of research is back, with new tools and new funding.

What is in question here is simulation, more specifically the simulation of people and crowds.  Simulation of physical systems is now making inroads into architectural practice, greatly aided by the low cost and ubiquity of computation.  Simulating lighting conditions, thermal properties, acoustic effects, and structural stability is becoming part of sustainable design practice.  Thermal simulation (employing software such as Ecotect) can give us an accurate picture of how a space will behave under different heating/cooling and seasonal conditions, with varying numbers of bodies occupying the space.  Physically-based rendering (using Radiance) can produce images that actually contain data about light.  But the behavior of light in a space is fundamentally different from the behavior of people…right?

Simulating human behavior is nothing terribly novel, and the notion of computational simulated agents has recently made its way into popular culture.  The Sims by Electronic Arts – the best-selling computer game of all time – casts the player as the omniscient controller of a family of simulated agents.  Massive – a software tool developed for the film trilogy The Lord of the Rings – allows the user to simulate a “massive” crowd of autonomous agents who interact and exhibit complex emergent behavior.  Now these games, tools, and visualizations are making their way into design practice.  What does this mean for architecture?  Imagine your SketchUp model populated with hundreds of animated characters.  Instead of little outlined silhouettes frozen in mid-stride, exploratory agents walk through, inhabit, use, abuse, and dwell in your design.

How does this work?  Just how predictable are you?  So many theories, so little time.  Biologists have long been fascinated with collective behavior in the animal kingdom.  Flocks, swarms, schools, and herds all display the hallmarks of emergent collective organization springing from the application of simple rules to large systems.  Consider two models of your behavior: “top-down” and “bottom-up.”  The top-down view sees your behavior as a direct result of the layout of an environment.  The bottom-up approach casts a person’s behavior as the result of a cognitively calculated, variable-juggling reaction to input about an environment.  Both represent you as a convenient computational abstraction.  You are not you.  You are an agent within a system.  How you behave is entirely up to the system.

This brings up a number of theoretical and technical questions: What behavior should the agent simulate?  Does the agent actually exhibit this behavior?  Do humans behave in the same way?  How do groups of humans behave?  Do models exhibit these group behaviors?  Can models capture something beyond mere behavior?  Can they capture emotion?  Mood?  Cognitive process?  What does this have to do with architecture?  Just how predictable are people?  Should we model agents and crowds at all?  Putting aside the final normative questions for the moment, let’s first consider the top-down approach.

“Great bodies of people are never responsible for what they do.”
-Virginia Woolf, A Room of One’s Own, 1929.

Reason.  Rationality.  “Higher thought.”  For the moment, let’s pretend they are of little interest.  Even if reason tends to reassert itself – as it has a tendency to do – let’s treat it obliquely.  What we are interested in is what people tend to do in a place.  Specifically, what is of interest is what groups of people tend to do.  This approach, often called “crowd-based” simulation, works from the top down.

Though now rather dated and a bit cheesy, the introduction to William H. Whyte’s film The Social Life of Small Urban Spaces (1980) is well worth viewing.  There is something fascinating about watching the time-lapse footage of individuals, groups, and crowds moving with the sun across the plaza of Mies van der Rohe’s Seagram Building in New York.  What is of particular interest in Whyte’s observations – as functionalist and analytic as they may seem – is the interplay of collective behavior, the built environment, and physical conditions.  Even if his conclusions were bottom-up, Whyte began his observations literally from the top down, looking down on the plaza below.

A recent example of the figurative top-down method comes from the UW’s Computer Science and Engineering department.  Treuille et al.’s work on “Continuum Crowds” can generate and simulate crowds of many thousands of individuals.  Many crowd simulators work from the bottom up, simulating the decisions and movements of each individual, which is computationally costly.  The UW team’s alternative, top-down method can produce a realistic simulation of, say, the movement of a large retreating army.  Their system casts a crowd as a collection of particles with a specific goal – to get to a certain place – and propels the “individuals” toward that goal while taking into account an ambient “discomfort field.”  Think of this discomfort field as your own personal somatic comfort zone; it keeps the agents from stampeding into or crashing through one another.  In this model of behavior, however, there is no explicit collision avoidance built into the agents themselves; rather, their behavior is the direct result of the structure of the environment and the behavior of other people.  (Think of the path of rainwater runoff: it is determined by gravity and the shape of the terrain.)  The environment is a dynamic potential field – with attractor and repeller states – pushing and pulling the “agents” through the space.  This method is computationally efficient, as it scales well to large crowds and gives visually compelling results in real-time.
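
To make the mechanics concrete, here is a minimal sketch of the top-down idea in Python.  It is not Treuille et al.’s actual formulation – they solve a continuous dynamic potential field, while this is a crude grid-based flood fill – and every name in it is illustrative: a cost grid stands in for the “discomfort field,” a potential is propagated backwards from the goal, and the agents simply slide downhill.

```python
import heapq
import numpy as np

def potential_field(cost, goal):
    """Propagate travel cost outward from the goal cell over a 2D grid.

    cost[y, x] bundles distance with a "discomfort" penalty, so the
    resulting field steers agents around crowded or unpleasant cells.
    """
    h, w = cost.shape
    phi = np.full((h, w), np.inf)
    phi[goal] = 0.0
    frontier = [(0.0, goal)]
    while frontier:
        d, (y, x) = heapq.heappop(frontier)
        if d > phi[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < phi[ny, nx]:
                phi[ny, nx] = d + cost[ny, nx]
                heapq.heappush(frontier, (phi[ny, nx], (ny, nx)))
    return phi

def step_agents(agents, phi):
    """Move each agent one cell "downhill" in the potential field."""
    h, w = phi.shape
    for i, (y, x) in enumerate(agents):
        best = (y, x)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and phi[ny, nx] < phi[best]:
                best = (ny, nx)
        agents[i] = best
    return agents

# Usage: a 20x20 plaza with a band of "discomfort" to route around.
cost = np.ones((20, 20))
cost[8:12, 5:15] += 5.0
phi = potential_field(cost, goal=(19, 19))
agents = [(0, 0), (0, 10), (5, 3)]
for _ in range(40):
    agents = step_agents(agents, phi)
```

Note that the agents carry no avoidance logic of their own; everything lives in the field.  A fuller system would fold the agents’ own density back into the cost grid each step – which is also why, as discussed below, changing the environment means recomputing the field.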

There is an unashamed casting aside of the individual in this approach.  To be fair, the aim of these top-down methods is not to figure out what is going on in an individual’s head while shopping, but to produce a realistic visual representation of crowd behavior in a short amount of time.  But one can easily imagine how such a simulation could be applied to fire egress (“need to know”), space-layout planning (“good to know”), and crowd-control simulations (“big brother is watching”).  The top-down approach casts “us” as a swirling dance of bubbles in boiling water.  Technically speaking, one of the shortcomings of this approach is dealing with dynamic environments.  In most top-down methods, the structure of the environment is “precomputed,” as the parameters of the environment largely determine the resultant emergent behavior.  Changing the environment on the fly is computationally expensive, as the numbers must be crunched yet again.  Tacit in this approach is the notion that behavior is a direct result of the environment and of the interaction of those within it.  This approach – as elegant as it may seem – differs in theory and practice from the “bottom-up” method…

“L’homme est une machine.”
(Man is a machine.)
-Julien Offray de La Mettrie, L’homme machine, 1748.

As previously alluded to, many crowd simulators work from the bottom up.  This is also known as the agent-based approach.  Such systems simulate the decisions and movements of each individual, or agent.  This method gives the researcher (or designer) fine-grained control over the inner machinations and cognitive variables of the agent system, but can result in highly unpredictable emergent behavior; tweaking the variables within an agent can lead to very different results.  In the early days of the agent-based approach – when computers were slow and expensive – scale was a huge issue.  It was very important to choose the variables wisely, as the simulations could take days or weeks to run.  With today’s cheap, fast, and ubiquitous computing resources, however, these same simulations can be run in real-time, allowing the researcher or designer to employ a guess-and-check, trial-and-error approach.

Paul Torrens, a geography professor at Arizona State University, has created a software toolkit for this exact application.  One can import 3D geometry, define the rules that drive each agent, and see how the autonomous “individuals” interact in a given context.  In the bottom-up method, each individual agent is built up from many stacked levels of rules: a subsumption architecture of sorts.  Starting with the simple kinematics of bodily joints, then the physics of individual movement, then basic navigational heuristics, followed by rules governing social behaviors, and so on, the agent modeler defines the rules at a given level, then releases the agents into a 2D or 3D environment and watches how they behave.
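
As a minimal sketch of that stacking, consider the toy agent below (Python, with an invented API – this is not Torrens’ toolkit).  Each layer may propose a heading; a higher layer, such as collision avoidance, overrides – “subsumes” – the goal-seeking layer beneath it.

```python
import math

class Agent:
    """Toy subsumption-style agent: higher layers override lower ones."""

    def __init__(self, pos, goal, comfort_radius=1.5):
        self.pos = list(pos)
        self.goal = goal
        self.comfort_radius = comfort_radius  # personal "somatic" zone

    def layer_seek_goal(self, neighbors):
        # Lowest layer: head straight for the goal.
        return math.atan2(self.goal[1] - self.pos[1],
                          self.goal[0] - self.pos[0])

    def layer_avoid_collision(self, neighbors):
        # Higher layer: if anyone intrudes on the comfort zone, veer away.
        for other in neighbors:
            dx = other.pos[0] - self.pos[0]
            dy = other.pos[1] - self.pos[1]
            if math.hypot(dx, dy) < self.comfort_radius:
                return math.atan2(-dy, -dx)  # head directly away
        return None  # no opinion; defer to the layer below

    def step(self, neighbors, speed=0.1):
        # Evaluate layers top-down; the first layer with an opinion wins.
        heading = self.layer_avoid_collision(neighbors)
        if heading is None:
            heading = self.layer_seek_goal(neighbors)
        self.pos[0] += speed * math.cos(heading)
        self.pos[1] += speed * math.sin(heading)

# Usage: two agents on a collision course; avoidance subsumes goal seeking.
a = Agent((0.0, 0.0), goal=(10.0, 0.0))
b = Agent((10.0, 0.5), goal=(0.0, 0.5))
for _ in range(100):
    a.step([b])
    b.step([a])
```

Tweaking a single variable like comfort_radius changes the emergent choreography of the whole crowd – exactly the fine-grained, trial-and-error control (and unpredictability) described above.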

Saunders and Gero (2001) – of the University of Sydney – have proposed a framework for modeling curious agents.  These agents are programmed to prefer certain patterns and to get bored with the same stimuli encountered over and over.  Developing this model further (2004), Saunders and Gero tweaked their curious agents to perform a “situated design evaluation.”  The test case was a virtual art gallery.  The plan of the gallery has four rooms and only one entrance and one exit.  On each of the virtual walls hang different “artworks.”  These works are actually just simple R, G, B color values… think of it as a 32-bit Mark Rothko exhibition.  Each curious agent is programmed to randomly prefer one or another of these colors, but is curious about related colors.  Casting glances into each room and moving toward works it is attracted to, the agents move through the gallery.  Simply arranging the order and hanging location of the paintings at random causes an uneven dispersal of agents around the entry and the exit: the agents crowd together but don’t really move into the subsequent rooms (this would come as no surprise to a professional curator).  These agents then gave feedback to the space-layout algorithm, so that in the next iteration of the gallery the artwork was better distributed.  The agents were then re-released into the gallery and gave their “feedback” again – a plotted graph of their boredom over time: the less bored, the better; the less crowded, the better.  The gallery was then tweaked again and again.  The curious agents illustrate that extremely banal parameters – a measure of curiosity and a metric of boredom – can provide strong feedback for space-layout planning.  While these agents do not appear human in their basic navigation behaviors or social norms, they display a startling tendency to behave appropriately within a given context.  The mechanics of their grossly simplified cognitive process (“I like blue,” “I’m bored with green”) are completely transparent to the researcher and the designer.  Furthermore, the resulting floorplans and layouts could be construed as generative in some sense.  This is the beginning of a parametric cognitive design logic.
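
A toy rendering of that loop is sketched below; all names are invented, and Saunders and Gero’s actual agents use learned novelty detectors rather than the crude habituation discount here.  Interest in an artwork decays each time it is re-viewed, and the boredom the agent logs is exactly the kind of signal a layout algorithm could be asked to minimize.

```python
def similarity(c1, c2):
    """Similarity of two RGB artworks; 1.0 means identical colors."""
    return 1.0 - sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)

class CuriousAgent:
    def __init__(self, preferred_color):
        self.preferred = preferred_color
        self.seen = {}          # artwork id -> number of viewings
        self.boredom_log = []   # the "feedback" plotted over time

    def interest_in(self, artwork_id, color):
        # Liking for related colors, discounted with every re-viewing.
        return (similarity(self.preferred, color)
                * 0.5 ** self.seen.get(artwork_id, 0))

    def visit(self, room):
        # Look at every work in the room and dwell on the most interesting.
        best_id, best_color = max(room.items(),
                                  key=lambda kv: self.interest_in(*kv))
        self.seen[best_id] = self.seen.get(best_id, 0) + 1
        # Boredom rises as nothing in the room holds interest any longer.
        boredom = 1.0 - max(self.interest_in(i, c) for i, c in room.items())
        self.boredom_log.append(boredom)

# Usage: a room hung with three "works"; repeated visits breed boredom.
room = {"w1": (0, 0, 255), "w2": (30, 30, 220), "w3": (255, 0, 0)}
agent = CuriousAgent(preferred_color=(0, 0, 255))
for _ in range(5):
    agent.visit(room)
print(agent.boredom_log)  # rising curve: time to rehang the gallery
```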

Recent advances in cognitive science have led to models of simple activities (such as walking past obstacles toward a goal) that strongly correlate with observed patterns (Fajen and Warren, for example).  These deceptively simple models are not deterministic but stochastic.  They paint a picture of human behavior that is (partially) constrained by non-linear dynamics, not governed by clockwork mechanics.  These approaches can accurately capture basic everyday activities and can even account for some individual and cultural traits.  For example, several visually-guided steering models have been created that can capture behavioral differences due to shoulder width – a football-player model that makes its way through a crowd differently – or even cultural habit – a UK, Japanese, or New Zealand steering model that “prefers” the left side of the road.  In other words, higher-order variables can be captured.  For designers, this ought to be both interesting and disquieting.
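
To give a flavor of such a model, here is a sketch in the spirit of Fajen and Warren’s steering dynamics – the functional form follows their published model only loosely, and the coefficients are illustrative stand-ins, not their fitted values.  Heading behaves like a damped angular spring, attracted to the goal and repelled by obstacles, with both influences decaying over distance.

```python
import math

def wrap(a):
    """Wrap an angle to [-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def steer(phi, dphi, pos, goal, obstacles, dt=0.05,
          b=3.25, kg=7.5, c1=0.4, c2=0.4, ko=198.0, c3=6.5, c4=0.8):
    # Goal attraction: stiffer when the goal is near, but never fully zero.
    psi_g = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    d_g = math.hypot(goal[0] - pos[0], goal[1] - pos[1])
    ddphi = -b * dphi - kg * wrap(phi - psi_g) * (math.exp(-c1 * d_g) + c2)
    # Obstacle repulsion: fades with angular offset and with distance.
    for ox, oy in obstacles:
        psi_o = math.atan2(oy - pos[1], ox - pos[0])
        d_o = math.hypot(ox - pos[0], oy - pos[1])
        ddphi += (ko * wrap(phi - psi_o)
                  * math.exp(-c3 * abs(wrap(phi - psi_o)))
                  * math.exp(-c4 * d_o))
    dphi += ddphi * dt
    return phi + dphi * dt, dphi

# Usage: walk at constant speed toward a goal, detouring around an
# obstacle placed almost directly on the straight-line path. In this toy
# form, widening the obstacle decay (c4) or biasing the obstacle term to
# one side is where a "shoulder-width" or "keep-left" trait would enter.
pos, phi, dphi, speed, dt = [0.0, 0.0], 0.0, 0.0, 1.3, 0.05
for _ in range(200):
    phi, dphi = steer(phi, dphi, pos, (10.0, 0.0), [(5.0, 0.1)], dt)
    pos[0] += speed * math.cos(phi) * dt
    pos[1] += speed * math.sin(phi) * dt
```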

“Verum et factum convertuntur.”
The true and the made are convertible.
-Giambattista Vico, De nostri temporis studiorum ratione, 1709.

Human behavior as a cybernetic concept has deep roots in a mechanistic view of man.  Norbert Wiener – mathematician, engineer, and social philosopher – coined the word “cybernetics” from the Greek for “steersman,” and defined it as the science of communication and control in the animal and the machine.  The term has since fallen out of style in the mainstream cognitive sciences, perhaps because of its overt connotation of control.  Cybernetics arose out of dissatisfaction with the empirical psychology of the twentieth century, and the rise of a mechanized view of human thought coincided with the advent of mechanized, automated computation.  Cybernetics can be thought of as an attempt to understand machines through analogies to organisms: making machines more adaptive, more flexible, and more in tune with their environments and operators in order to deal with the increasing complexity of the world.  There is a notable inversion of mechanism and organism here.

It is perhaps because of this inversion that agent and crowd modeling have generated a certain defensive subculture at the fringes of architectural research.  The justification behind this research thread might be captured in a question: “If we can simulate human behavior – walking through a shopping mall or fleeing a fire, for example – could it be used to aid in design and planning?”  Legion works with the AEC industry, using proprietary agent-simulation software, to do just this.  Only a handful of large design firms are currently working on and with agent-simulation software.  Admittedly, the current application of such software is limited to mundane tasks like space-layout planning for shopping malls and airports, or wayfinding in sports arenas.  However, even with such pedestrian tasks, these firms are finding it important to describe and model the higher-order processes of pedestrian wayfinding, such as route choice, congestion avoidance, direction following, and dwelling.  At the current state of domesticated computing (read: you don’t have access to a super-computer), these systems are able to compute the decisions and trajectories of over 10,000 people (or around 700-800 in real-time).  What is interesting is that agent modeling and crowd simulation are moving out of the realm of laboratory curiosity and into the realm of design practice.  This ought to raise some red flags within the discipline.  There seems to be a danger in assuming that human behavior is entirely predictable and mechanical.  The radical notion that human behavior might conform to such patterns and forms, and be quantified in such a way, was (and still is) one of the most troubling and powerful theoretical tenets of cybernetics.  From this view, there is a tendency to lapse into a morbid determinism.  Even the crudest of “man-as-black-box” theories – behaviorism – did little to dissuade anyone (including some Modernists) from this mechanistic view of humanity.

In science, as in architecture, models are tools for communication, for testing theories, for gaining insight.  The mapping between model and construction, however, is slightly different in modern technoscience.  In architectural practice, a model of a building tells you something about a possible building, but one does not (hopefully) mistake it for the finished building.  Reductionist science is built on simplified models of complex systems.  Modern medicine has a simplified model of human anatomy that it uses to generalize the specificity of the human body during surgery.  Epidemiology relies on a simplified model of human behavior to model the spread of disease and to craft interventions to prevent catastrophe.  A model is the beginning of understanding – not the end, but a process.  But as teleological as modern science might seem (a large debate in itself), it is undergoing a fundamental shift, largely due to the speed and accuracy with which models can now be created and explored.  Architecture, as a practice, is increasingly employing simulation in the iterative process of design.  With increased physical simulation, major gains will be made in terms of sustainability.  But what role ought simulation play in the process, specifically with regard to behavioral modeling?

One danger is that these models can become self-fulfilling.  While this may sound strange, when the model is posited as a yardstick for measuring “normal” behavior, there is a danger that we (fallible humans that we are) will begin making the world more like the model rather than tweaking the model to fit reality.  How pernicious is a simplified model of individual behavior?  Is simulation of a physical system fundamentally different from simulation of a behavioral system?  What models ought we employ?  A generic model?  A cultural model?  A consumer model?  Which is more frightening, government or corporate misuse?  Again, we return to the same questions.

Much of our world is designed with group dynamics in mind, and architects and urban planners are implicated in this design.  Some systems are overtly about control (traffic lights are there for a good reason) and some designs are more subtle (public spaces designed with CCTV in mind).  It ought to be clear by now that the control and circumvention of behavior presumes a wholly deterministic system.  But we ought not be too interested in this issue of determinism.  Rather than lapse into atavistic arguments about the role of free will, we should seek to understand this emerging undercurrent of simulation in its entirety.  What is at stake is our agency as designers, not our free will as individuals.  Rather than flinch at the notion of a predictive model of crowd or human behavior, I believe architects and planners should actively engage these emerging technologies: use them, tweak them, question and critique them, shape and subvert them.

One thought

  1. Undoubtedly you have come across it: one of the more interesting agent-based modelling environments is BOTworld, used with ENVI-met.

    “BOTworld is a Multi-Agent simulation system, predicting the behaviour and movement of pedestrians in urban areas under the influence of different environmental factors (urban layout, sources of traffic, air quality and micro climate)”.

    The very sophisticated software is still free. See:
    http://www.botworld.info/
    and
    http://www.envi-met.com/
