Social Simulation and the Scientific Method

Rama Hoetzlein

Can social simulations be created which match particular human behaviors? Is the scientific method of data validation applicable to agent-based simulations? I consider these questions here briefly.

On the one hand, agent simulations have been developed which do match particular examples of human behavior – models of human trade, cooperation, and so on. Yet the primary criticism of these is that the models were developed specifically to respond to the examples being studied. Thus the classical process of collecting data, building a model, and validating against the data seems problematic in the social sciences, since the range of both the data and the models is so vast. Collecting large amounts of data seems more reasonable, but then there are biases in what to include and what to leave out, since the whole of human history essentially represents the full scope of the data. Is a “scientific” approach to social simulations even possible? The bias of the modeler is the primary criticism of such an inquiry.

Yet consider: we evaluate others based on their beliefs, their context, and their constraints, rather than by comparison to some “ideal data”. More importantly, when in doubt about human behavior we do not compare other people to some perfect human, since no such example exists, but rather ask for a justification of their behavior, since we know that no two circumstances can be identical. In other words, we know that the behaviors of people match only statistically, never in specific details. Unlike natural systems (like fluids), collecting data on particular examples of people makes them increasingly unique the more data one collects.

In light of this, we should not expect semantic agent simulations to exactly match particular real world people. Why? First, because the data surrounding the circumstances of a particular individual or group behavior is hidden from us. Historians themselves have trouble recording all the subtle circumstances of a particular individual's outward decisions, while frequently the individuals most difficult to understand may have the most influential outcomes (e.g. presidents). Will we ever be able to simulate the exact actions of Woodrow Wilson during World War I? Not without telling a machine the precise circumstances of his decisions.

Secondly, because matching a particular example only tells us how to model that particular behavior; it does not tell us how to model human behavior generally, which is infinitely connected with all other aspects of behavior. Consider attempting to model “exchange of possession”, or trade. While seemingly simple, this issue of physical trade immediately leads to behavioral questions of trust, perception, value estimation, cultural background, and ultimately belief. Any simple human example is deeply connected with all levels of human behavior.

How then do we validate social simulations? First, we notice that human beings can exhibit any range of behavior depending on circumstances and belief. A seeming contradiction can appear in which individuals or groups support both a behavior and its opposite under the right conditions. For example, consider a society with few resources. It may (a) expend additional energy to hunt for food, or (b) conserve energy and wait for a better time. Which path is taken depends entirely on the belief system of the group. Thus, attempting to match a particular model to some particular observed data is meaningless in social simulation.
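The hunt-or-conserve example can be made concrete with a toy sketch. This is not a reference to any existing framework; the function and belief names are purely illustrative, showing how identical circumstances yield opposite behaviors once belief enters the model:

```python
# Toy sketch (hypothetical model): the same resource scarcity yields
# opposite actions depending on the group's belief system.

def choose_action(resources: int, beliefs: set) -> str:
    """Return 'hunt' or 'conserve' for a society with given resources."""
    if resources < 10:  # resources are scarce
        if "effort_brings_reward" in beliefs:
            return "hunt"       # expend additional energy to find food
        if "patience_is_wise" in beliefs:
            return "conserve"   # conserve energy and wait for a better time
    return "conserve"

# Identical circumstances, opposite behaviors:
print(choose_action(5, {"effort_brings_reward"}))  # -> hunt
print(choose_action(5, {"patience_is_wise"}))      # -> conserve
```

Observed data alone (hunting vs. waiting) cannot distinguish which model is “correct”; only the belief parameter explains the difference, which is the point of the paragraph above.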

We need another way to evaluate social simulations. Since a social simulation is an attempt to mimic the behavior of human beings, a more natural way would be to evaluate them similarly to the way we evaluate the behavior of other humans: that is, we can demand a reasonable explanation of behavior given a set of circumstances and internal constraints and beliefs.

In effect, the best measure of a social simulation is to provide a means for the system to justify the behavior of its agents. Do they behave reasonably under their given constraints and beliefs? For example, consider a social simulation in which an agent gives away all of its possessions. We should at least expect a (semantic) simulation model to give some explanation for this. Suppose the system responds: “The individual observed that possessions make one greedy. Greed leads to unhappiness. The individual believes he/she can survive on the graciousness of others. The individual wanted to be happy. So they gave away all possessions.” We can judge this to be a correct model based on the justification provided, since the explanation sufficiently describes the actions of the individual, and we can claim to have modeled co-dependence to a certain extent. This shows the importance of a simulation being able to justify and reveal hidden agent behavior. The bias of the model developer is revealed by allowing the simulation to justify individual and group actions, and its flexibility is shown in the subtlety of its responses.
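One minimal way to realize self-justification is to require that every agent action be grounded in recorded beliefs, so the chain of reasoning can be replayed on demand. The sketch below is a hypothetical design, not an existing system; class and method names are invented for illustration:

```python
# Hypothetical sketch of a self-justifying agent: actions are only
# permitted when grounded in the agent's beliefs, and every action is
# logged with its supporting beliefs so the system can explain itself.

class Agent:
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = list(beliefs)   # the agent's belief statements
        self.trace = []                # justification log: (action, reasons)

    def act(self, action, because):
        # Refuse actions the agent cannot ground in its own beliefs.
        unsupported = [b for b in because if b not in self.beliefs]
        if unsupported:
            raise ValueError(f"unjustified action, missing beliefs: {unsupported}")
        self.trace.append((action, because))
        return action

    def justify(self):
        # Replay the reasoning chain as a human-readable explanation.
        return "\n".join(
            f"{self.name} did '{action}' because: " + "; ".join(reasons)
            for action, reasons in self.trace
        )

agent = Agent("individual", [
    "possessions make one greedy",
    "greed leads to unhappiness",
    "others will be gracious",
])
agent.act("give away possessions", because=[
    "possessions make one greedy",
    "greed leads to unhappiness",
    "others will be gracious",
])
print(agent.justify())
```

The design choice here mirrors the argument: the log is not an afterthought but the validation interface itself, since a human reviewer judges the model by reading these traces.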

Note that there is no one right answer, not necessarily even a rational one. What if the response truly is irrational? An aberrant individual in reality could kill another, leading to his/her execution, which is clearly not of benefit to the individual. Yet a simulated system should, in the context of an irrational belief system, at least be able to show that this was the result of irrational thinking (regardless of whether the agent is “aware” of its irrationality or not). What matters is thus not so much the rationality or irrationality of the justification, but simply the fact that the system is capable of self-justification.

When the explanations are not reasonable, the result is still beneficial, since it points out fallacies in the model. Consider another system-justified response to the individual who gives away everything: “The individual had many possessions. The possessions were valuable to him/her, so the individual gave them away.” Notice the statement said the possessions were “valuable”, yet they were given away. This suggests either a fallacy in reporting – perhaps there is a missing step – or, if we assume the reporting is accurate, a clear error in the model: a blatant misrepresentation of common sense. In the first case, the error reporting must simply be made clearer: perhaps there is a missing step, that the individual valued the objects but they reminded him/her of a painful experience, so they were given away. In the second case, the model must simply be corrected: valuable things are not given away without reason. In effect, both outcomes are helpful to objective review, since the behavior justification identifies how the model should be improved irrespective of the biases introduced by the model builder.
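The kind of common-sense check described above can itself be sketched as code. The rule set below is entirely hypothetical and deliberately crude (simple keyword matching), meant only to show how a justification trace makes a model's fallacies mechanically detectable:

```python
# Hypothetical common-sense check on a model's justification trace:
# flag explanations that assert an object is valued yet given away
# with no mediating reason, as in the flawed example above.

def check_justification(statements):
    """Return a list of detected fallacies (empty if none found)."""
    problems = []
    values = any("valuable" in s for s in statements)
    gives = any("gave" in s or "give away" in s for s in statements)
    mediated = any("reminded" in s or "because" in s for s in statements)
    if values and gives and not mediated:
        problems.append("valued possessions given away without a stated reason")
    return problems

flawed = [
    "The possessions were valuable to the individual.",
    "The individual gave them away.",
]
print(check_justification(flawed))     # one fallacy flagged

# Adding the missing mediating step clears the flag:
repaired = flawed + ["They reminded the individual of a painful experience."]
print(check_justification(repaired))   # no fallacies
```

Whether the flag indicates a reporting gap or a genuine model error is, as the essay argues, a judgment left to the human reviewer; the check only surfaces the inconsistency.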

Consider the real world. A person gives away their possessions. Depending on the culture, the context, and the individual, and given the complexity of human behavior, we might accept any explanation so long as it is reasonable in context. Thus, unlike physical simulations, there is no one correct solution to which we could compare. Human behavior is uniquely individual and vastly complex. This makes the ability to provide a reasoned justification essential. Behavior justification validates social simulations in both positive and negative cases, since in each case it provides a context from which an independent human observer can determine the extent, or limitations, of a particular model.

The development of social simulations has largely been hindered by the inherent biases of the modelers, and by criticisms of its inability to adhere to the scientific method. Yet the “scientific method” of comparing a simulated model to real world data is an unreasonable expectation given the range of human behavior, for precisely the reason that – unlike physical processes – the same real world human behavior can have an infinite number of possible explanations. The world of human experience is our data, and selecting particular events, people, or circumstances cannot be separated, even in a simple way, from the deeper aspects of culture and belief. Instead, the measure of a scientifically valid social simulation is one which is capable of justifying the behavior of its agents – allowing independent human beings to judge how and if the model is reasonable. This effectively reformulates social simulation as a Turing test, which is in essence what social simulation is: a testable model of human behavior, directly testable by us (other humans).
