Our mission is to help computational modelers develop, document, and share their models in accordance with community standards and good open science and software engineering practices. Model authors can publish their model source code in the Computational Model Library with narrative documentation and metadata that support open science and emerging norms for software citation, computational reproducibility, frictionless reuse, and interoperability. Model authors can also request private peer review of their computational models. Models that pass peer review receive a DOI once published.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and feel free to contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, which we welcome you to explore.
Displaying 10 of 82 results for "Isaque Daniel Rocha Eberhardt"
This set of models tests receivers’ ability to accurately rank signalers under various ecological and behavioral contexts.
The AMMA simulates how news waves emerge in the mass media. Drawing on the ideas of public arena models and issue-attention cycles, it represents fundamental principles of public communication in a virtual media system.
SiFlo is an ABM dedicated to simulating flood events in urban areas. It considers water flow and the reactions of inhabitants. Inhabitants can perform different actions in response to the flood: protection (protecting their house, equipment, and furniture), evacuation (using a traffic model), and getting and giving information (based on imperfect knowledge), among others. Special care was taken in modeling inhabitant behavior: inhabitants can build complex reasoning, have emotions, follow or ignore instructions, hold incomplete knowledge about the flood, interact with other inhabitants, and find their way on the road network. The model integrates road closures and the danger a flooded road can represent. Furthermore, it considers the state of infrastructures, notably protective infrastructures such as dykes, which allows the simulation of a dyke breach.
The model is intended to be generic and flexible while providing a fine-grained geographic description of the case study. To this end, it can directly import GIS data to reproduce any territory. The following sections present the main elements of the model.
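As an illustration only (SiFlo itself is not written in Python, and the names and thresholds below are hypothetical, not taken from the model), an inhabitant’s per-step decision along the lines described above might look like this:

```python
import random

# Illustrative sketch of an inhabitant's decision step; all names and
# thresholds are hypothetical, not taken from SiFlo.
def inhabitant_step(perceived_water_level, fear, knows_instructions,
                    obeys_instructions, protect_threshold=0.3,
                    evacuate_threshold=0.7):
    """Return one of 'evacuate', 'protect', 'seek_information', 'wait'."""
    danger = min(1.0, perceived_water_level + 0.5 * fear)  # emotion biases perception
    if knows_instructions and obeys_instructions:
        return "evacuate"          # follow official instructions
    if danger >= evacuate_threshold:
        return "evacuate"          # flee via the (possibly flooded) road network
    if danger >= protect_threshold:
        return "protect"           # protect house, equipment, and furniture
    if random.random() < 0.2:
        return "seek_information"  # imperfect knowledge: ask neighbours or media
    return "wait"
```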
The model aims to estimate household energy consumption and the related greenhouse gas (GHG) emissions reduction based on the behavior of individual households under different operationalizations of the Theory of Planned Behaviour (TPB).
The original model was developed as a tool to explore households’ decisions regarding solar panel investments and the cumulative consequences of these individual choices (i.e., diffusion of PVs, regional emissions savings, monetary savings). We extend the model to explore a methodological question regarding the interpretation of qualitative concepts from social science theories, specifically the Theory of Planned Behaviour, in the formal code of quantitative agent-based models (ABMs). We develop three versions of the model: one TPB-based ABM designed by the authors and two alternatives inspired by the TPB-ABM of Schwarz and Ernst (2009) and the TPB-ABM of Rai and Robinson (2015). The model is implemented in NetLogo.
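As a rough sketch of what such an operationalization can look like (the model itself is in NetLogo; the weights, threshold, and variable names below are hypothetical, not the calibrated values of any of the three versions), a common linear TPB rule in Python might be:

```python
# Hedged sketch of a linear TPB operationalization; weights, threshold,
# and field names are hypothetical.
def tpb_intention(attitude, subjective_norm, perceived_control,
                  w_att=0.4, w_norm=0.3, w_pbc=0.3):
    """All inputs scaled to [0, 1]; returns behavioural intention in [0, 1]."""
    return w_att * attitude + w_norm * subjective_norm + w_pbc * perceived_control

def decides_to_install_pv(household, threshold=0.6):
    """A household invests in solar panels once its intention exceeds a threshold."""
    intention = tpb_intention(household["attitude"],
                              household["subjective_norm"],
                              household["perceived_control"])
    return intention > threshold
```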
Our aim is to show the effects of group living when only low-level cognition is assumed, such as the pattern recognition needed for normal functioning, without assuming that individuals have knowledge about others around them or actively warn them.
The model represents a group of vigilant foragers staying within a patch while under attack by a predator. The foragers use attentional scanning for predator detection and flee after detection. This fleeing action constitutes a visual cue to danger and can be received non-attentionally by others if it occurs within their limited visual field. The focus of this model is the effectiveness of this non-attentional reception of visual information.
A blind angle obstructing cue reception can exist in front, caused by behaviour, while morphology causes a blind angle in the back. These limitations are represented by two visual field shapes. The scan for predators is all-around, with distance-dependent detection; reception of flight cues is limited by the visual field shape.
Initial parameters include, for instance, group size, movement, and vision characteristics for predator detection and for cue reception. Recorded outputs include captures (failure), the number of times the information reached all individuals at the same time (All-fled, success), and several other effects of the visual settings.
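A minimal sketch of the cue-reception test, assuming agents on a 2D plane with headings in radians; the range and blind-angle values below are hypothetical placeholders for the model’s two visual field shapes:

```python
import math

# Hedged sketch: can this receiver pick up a neighbour's flight cue
# non-attentionally? The fleer must lie within range and outside both
# blind angles (behavioural in front, morphological in the back).
def receives_flight_cue(receiver_xy, receiver_heading, fleer_xy,
                        max_range=10.0, blind_front=0.3, blind_back=0.6):
    dx, dy = fleer_xy[0] - receiver_xy[0], fleer_xy[1] - receiver_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # Bearing of the fleer relative to the receiver's heading, in (-pi, pi].
    bearing = (math.atan2(dy, dx) - receiver_heading + math.pi) % (2 * math.pi) - math.pi
    in_front_blind = abs(bearing) < blind_front
    in_back_blind = abs(bearing) > math.pi - blind_back
    return not (in_front_blind or in_back_blind)
```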
We build a stylized model of a network of business angel investors and start-up entrepreneurs. Decisions are based on trust as a decision-making tool under true uncertainty.
While the world’s total urban population continues to grow, this growth is not equal. Some cities are declining, resulting in urban shrinkage, which is now a global phenomenon. Many problems emerge due to urban shrinkage, including population loss, economic depression, vacant properties, and the contraction of housing markets. To explore this issue, this paper presents an agent-based model stylized on spatially explicit data of the Detroit Tri-County area, an area witnessing urban shrinkage. Specifically, the model examines how micro-level housing trades impact urban shrinkage by capturing interactions between sellers and buyers within different sub-housing markets. The stylized model results highlight not only how we can simulate housing transactions but also the aggregate market conditions relating to urban shrinkage (i.e., the contraction of housing markets). To this end, the paper demonstrates the potential of simulation to explore urban shrinkage and potentially offers a means to test policies to alleviate this issue.
This is an extended replication of Abelson’s and Bernstein’s early computer simulation model of community referendum controversies, which was originally published in 1963 and is often cited but seldom analysed in detail. The replication is in NetLogo 6.3.0, accompanied by an ODD+D protocol and class and sequence diagrams.
This replication replaces the original scales for attitude position and interest in the referendum issue, which were distributed between 0 and 1, with values initialised according to a normal distribution with mean 0 and variance 1. This makes simulation results more easily comparable with scales derived from empirical data collected in surveys such as the European Values Study, which are often derived via factor analysis or principal component analysis from the answers to sets of questions.
Another difference is that this model is run not only for Abelson’s and Bernstein’s ten-week referendum campaign but for an arbitrary length of time, in order to find out whether the distributions of attitude position and interest in the (still one-dimensional) issue stabilise in the long run.
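A minimal sketch of the initialisation change (the replication itself is in NetLogo; the Python below, with illustrative variable names, only shows the distributional swap):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_agents = 500  # illustrative population size

# Original Abelson/Bernstein scales, distributed between 0 and 1:
attitude_uniform = rng.uniform(0.0, 1.0, n_agents)

# Replication: mean 0, variance 1, comparable with factor-analytic or
# PCA-based scores from surveys such as the European Values Study:
attitude_normal = rng.normal(loc=0.0, scale=1.0, size=n_agents)
interest_normal = rng.normal(loc=0.0, scale=1.0, size=n_agents)
```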
This ABM simulates opinions on a topic (originally contested infrastructures) through interactions between paired agents, based on the sociopsychological assumptions of social judgment theory (SJT; Sherif & Hovland, 1961).
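A typical SJT update rule distinguishes latitudes of acceptance, non-commitment, and rejection. A minimal sketch of such a rule, with hypothetical parameter values rather than the published model’s:

```python
# Hedged sketch of an SJT-style opinion update between paired agents:
# assimilate when the other's position falls within the latitude of
# acceptance, contrast when it falls within the latitude of rejection,
# and stay unchanged in the latitude of non-commitment.
def sjt_update(own_opinion, other_opinion,
               latitude_acceptance=0.2, latitude_rejection=0.6, rate=0.1):
    distance = abs(other_opinion - own_opinion)
    if distance <= latitude_acceptance:
        # Assimilation: move toward the other's opinion.
        return own_opinion + rate * (other_opinion - own_opinion)
    if distance >= latitude_rejection:
        # Contrast: move away from the other's opinion.
        return own_opinion - rate * (other_opinion - own_opinion)
    return own_opinion  # non-commitment: no change
```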
The wisdom of the crowd refers to the phenomenon in which a group of individuals, each making independent decisions, can collectively arrive at highly accurate solutions, often more accurate than any individual within the group. This principle relies heavily on independence: if individual opinions are unbiased and uncorrelated, their errors tend to cancel out when averaged, reducing overall bias. However, in real-world social networks, individuals are often influenced by their neighbors, introducing correlations between decisions. Such social influence can amplify biases, disrupting the benefits of independent voting.

This trade-off between independence and interdependence has striking parallels to ensemble learning methods in machine learning. Bagging (bootstrap aggregating) improves classification performance by combining independently trained weak learners, reducing bias. Boosting, on the other hand, explicitly introduces sequential dependence among learners, where each learner focuses on correcting the errors of its predecessors. This process can reinforce biases present in the data even as it reduces variance.

Here, we introduce a new meta-algorithm, casting, which captures this biological and computational trade-off. Casting forms partially connected groups (“castes”) of weak learners that are internally linked through boosting, while the castes themselves remain independent and are aggregated using bagging. This creates a continuum between full independence (i.e., bagging) and full dependence (i.e., boosting), and allows model capabilities to be tested across values of the hyperparameter that controls connectedness. We specifically investigate classification tasks, but the method can be used for regression tasks as well. Ultimately, casting can provide insight into how real systems contend with classification problems.
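A minimal sketch of the casting idea using scikit-learn, as an approximation under our reading of the description rather than the authors’ implementation: each “caste” is a boosted ensemble of decision stumps, and the castes are aggregated by bagging. Here `boosting_rounds=1` collapses to plain bagging and `n_castes=1` to plain boosting, giving a crude handle on the connectedness hyperparameter.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

n_castes, boosting_rounds = 10, 25  # hypothetical hyperparameter values

# One caste: weak learners linked through boosting (internal dependence).
caste = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # decision stump
    n_estimators=boosting_rounds,
)

# Castes trained on independent bootstrap samples, aggregated by vote.
casting = BaggingClassifier(estimator=caste, n_estimators=n_castes)

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
casting.fit(X, y)
print(casting.score(X, y))
```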