Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation, open science metadata, and other emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7500 publications of agent-based and individual-based models, with additional detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, that we welcome you to explore.
Displaying 10 of 332 results for "Tim Dorscheidt"
The purpose of the Credit and debt market of low-income families model is to help the user examine how the financial market of low-income families works.
The model is calibrated on data collected in a small disadvantaged village in Hungary; it contains social network and attribute data for 159 households.
The simulation models the households’ money liquidity, expenses and revenue structures as well as the formal and informal loan institutions based on their network connections. The model forms an intertwined system integrated in the families’ local socioeconomic context through which families handle financial crises and overcome their livelihood challenges from one month to another.
The simulation is based on the abstract model of low-income families’ financial survival system at the bottom of the pyramid, which was described in the following papers:
…
The intention of this model is to create a universal basis for modeling change in value prioritizations within social simulation. This model illustrates the design of heterogeneous populations within agent-based social simulations by equipping agents with Dynamic Value-based Cognitive Architectures (DVCA-model). The DVCA-model uses the psychological theories on values by Schwartz (2012) and character traits by McCrae and Costa (2008) to create a unique trait and value prioritization system for each individual. Furthermore, the DVCA-model simulates the impact of both social persuasion and life events (e.g. information, experience) on the value systems of individuals by introducing the innovative concept of perception thermometers. Perception thermometers, controlled by the character traits, operate as buffers between the internal value prioritizations of agents and their external interactions. By introducing the concept of perception thermometers, the DVCA-model makes it possible to study the dynamics of individual value prioritizations under a variety of internal and external perturbations over extensive time periods. Possible applications are the use of the DVCA-model within artificial sociality, opinion dynamics, social learning modelling, behavior selection algorithms and social-economic modelling.
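The buffering role of perception thermometers described above can be sketched in a few lines. This is a minimal illustrative reading, not the published DVCA-model's code: the class names, the openness-scaled perception rule, and the threshold/step parameters are all assumptions made for the sketch.

```python
class Agent:
    """Sketch of a DVCA-style agent: traits control thermometers that
    buffer external influence before values actually shift
    (illustrative names and parameters, not the model's API)."""

    def __init__(self, values, traits):
        self.values = values          # value -> prioritization in [0, 1]
        self.traits = traits          # trait -> level in [0, 1]
        # One "perception thermometer" per value buffers external signals.
        self.thermometers = {v: 0.0 for v in values}

    def perceive(self, value, signal):
        # Assumed rule: openness scales how strongly an external signal
        # heats the corresponding thermometer.
        self.thermometers[value] += self.traits["openness"] * signal

    def update_values(self, threshold=1.0, step=0.05):
        # Only when a thermometer exceeds its threshold does the internal
        # value prioritization shift; the thermometer then resets.
        for v, heat in self.thermometers.items():
            if abs(heat) > threshold:
                shift = step if heat > 0 else -step
                self.values[v] = min(1.0, max(0.0, self.values[v] + shift))
                self.thermometers[v] = 0.0


agent = Agent({"security": 0.6, "hedonism": 0.4}, {"openness": 0.8})
for _ in range(3):
    agent.perceive("security", 0.5)   # repeated persuasion toward security
agent.update_values()                 # security rises from 0.60 to 0.65
```

The point of the buffer is visible here: a single perception (0.4 of "heat") changes nothing; only repeated or strong signals push a thermometer past its threshold and move the underlying value.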
Knowledge Based Economy (KBE) is an artificial economy where firms placed in geographical space develop original knowledge, imitate one another and eventually recombine pieces of knowledge. In KBE, consumer value arises from the capability of certain pieces of knowledge to bridge between existing items (e.g., Steve Jobs illustrated the first smartphone explaining that you could make a call with it, but also listen to music and navigate the Internet). Since KBE includes a mechanism for the generation of value, it works without utility functions and does not need to model market exchanges.
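The bridging mechanism behind KBE's notion of value can be rendered as a toy score: a piece of knowledge is valuable insofar as it joins existing items that are not otherwise connected. This is a hedged sketch; the function name, link representation, and pair-counting score are illustrative assumptions, not KBE's actual implementation.

```python
import itertools


def bridging_value(knowledge, links):
    """Toy version of KBE's value mechanism: score each piece of
    knowledge by the number of pairs of items it bridges that are not
    already directly linked to each other (illustrative scoring)."""
    value = {}
    for k in knowledge:
        connected = links.get(k, set())
        value[k] = sum(
            1 for a, b in itertools.combinations(sorted(connected), 2)
            if b not in links.get(a, set())
        )
    return value


# The smartphone example from the description: one piece of knowledge
# bridging calls, music and the web, which are unlinked to each other.
links = {
    "smartphone": {"call", "music", "web"},
    "call": set(), "music": set(), "web": set(),
}
value = bridging_value(links.keys(), links)   # smartphone bridges 3 pairs
```

Because value emerges from this structural score rather than from preferences, a mechanism like this needs no utility functions, which matches the description above.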
I model a forest and a community of loggers. Agents follow different kinds of rules in order to log. I compare the impact of endogenous and of exogenous institutions on the state of the forest and on the profit of the users, representing different scenarios of participatory conservation projects.
Signaling chains are a special case of Lewis’ signaling games on networks. In a signaling chain, a sender tries to send a single unit of information to a receiver through a chain of players that do not share a common signaling system.
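A signaling chain can be sketched as repeated re-encoding: each player in the chain maps its incoming signal to an outgoing one using a private code, since no common signaling system is shared. The permutation-based codes below are an illustrative assumption, not the model's actual mechanics.

```python
import random


def make_code(n_signals):
    """A private signaling system: a random permutation mapping
    incoming signals to outgoing signals (an illustrative choice)."""
    code = list(range(n_signals))
    random.shuffle(code)
    return code


def transmit(state, chain):
    """Pass one unit of information down a chain of players, each
    re-encoding the signal with its own private code."""
    signal = state
    for code in chain:
        signal = code[signal]
    return signal


random.seed(1)
n_signals = 3
chain = [make_code(n_signals) for _ in range(4)]
# Communication succeeds only if the composed chain encoding happens to
# be decodable by the receiver; here we just check the raw mapping.
mapping = {s: transmit(s, chain) for s in range(n_signals)}
```

Since each private code is a bijection, the chain as a whole still transmits one unit of information losslessly; the difficulty studied in signaling games is whether players can *learn* codes that make the receiver's action match the sender's state.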
MOOvPOPsurveillance was developed as a tool for wildlife agencies to guide collection and analysis of disease surveillance data that relies on non-probabilistic methods like harvest-based sampling.
The TERROIR agent-based model was built for the multi-level analysis of biomass and nutrient flows within agro-sylvo-pastoral villages in West Africa. It explicitly takes into account both human organization and spatial extension of such flows.
Next generation of the CHALMS model applied to a coastal setting to investigate the effects of subjective risk perception and salience decision-making on adaptive behavior by residents.
The simulation is a variant of the “ToRealSim OD variants - base v2.7” base model, which is based on the standard DW opinion dynamics model (with the difference that, rather than one agent per tick randomly influencing another, every agent randomly influences one other agent per tick; this seems to make no difference to the outcomes other than to scale simulation time). Influence can be made one-way by turning off the two-way? switch.
Various additional variations and sources of noise are possible to test robustness of outcomes to these (compared to DW model).
In this version agent opinions change following the empirical data collected in some experiments (Takács et al 2016).
Such an algorithm leaves no role for the uncertainties in other OD models. [Indeed, the data from (Takács et al 2016) indicate that there can be influence even when opinion differences are large, which violates a core assumption of these models.] However, to allow better comparison with other such models, there is a with-un? switch which allows uncertainties to come into play. If this is on, then influence (according to the above algorithm) is only calculated if the opinion difference is less than the uncertainty. If an agent is influenced, its uncertainty is modified in the same way as in standard DW models.
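The tick structure described above (every agent influences one other per tick, with an optional uncertainty bound gating influence) can be sketched as follows. This is a schematic Python reading of the mechanism, not the actual NetLogo code; the parameter names and the convergence rate mu are assumptions.

```python
import random


def step(opinions, uncertainty, mu=0.5, with_un=True):
    """One tick: every agent i randomly picks another agent j and
    influences it. With the with-un? switch on, influence occurs only
    if the opinion difference is within j's uncertainty bound
    (a sketch; mu and the update order are illustrative)."""
    n = len(opinions)
    new = opinions[:]
    for i in range(n):
        j = random.choice([k for k in range(n) if k != i])
        diff = opinions[i] - new[j]
        if not with_un or abs(diff) < uncertainty[j]:
            new[j] += mu * diff   # j moves part-way toward i
    return new


random.seed(0)
ops = [random.random() for _ in range(20)]
unc = [0.3] * 20
for _ in range(100):
    ops = step(ops, unc)   # bounded influence: clusters can persist
```

With `with_un=False` the bound is ignored and all pairs can influence each other, which corresponds to the variant's default behavior; turning the switch on recovers the bounded-confidence clustering of standard DW models.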
This paper investigates the impact of agents' trading decisions on market liquidity and transactional efficiency in markets for illiquid (hard-to-trade) assets. Drawing on a unique order book dataset from the fine wine exchange Liv-ex, we offer novel insights into liquidity dynamics in illiquid markets. Using an agent-based framework, we assess the adequacy of conventional liquidity measures in capturing market liquidity and transactional efficiency. Our main findings reveal that conventional liquidity measures, such as the number of bids, asks, new bids and new asks, may not accurately represent overall transactional efficiency. Instead, volume (measured by the number of trades) and relative spread measures may be more appropriate indicators of liquidity within the context of illiquid markets. Furthermore, our simulations demonstrate that a greater number of traders participating in the market correlates with an increased efficiency in trade execution, while wider trader-set margins may decrease the transactional efficiency. Interestingly, the trading period of the agents appears to have a significant impact on trade execution. This suggests that granting market participants additional time for trading (for example, through the support of automated trading systems) can enhance transactional efficiency within illiquid markets. These insights offer practical implications for market participants and policymakers aiming to optimise market functioning and liquidity.
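The relative spread mentioned above as a useful liquidity indicator has a standard textbook form: the bid-ask gap scaled by the mid-price. The sketch below uses that conventional definition with made-up quote values; it is not the paper's code and the numbers are purely illustrative.

```python
def relative_spread(best_bid, best_ask):
    """Relative (percentage) spread: bid-ask gap over the mid-price.
    Standard definition, used here to illustrate the liquidity
    measure the study highlights for illiquid markets."""
    mid = (best_bid + best_ask) / 2
    return (best_ask - best_bid) / mid


# Illustrative quotes for a hard-to-trade asset such as a wine lot:
spread = relative_spread(300.0, 330.0)   # 30 / 315, roughly 9.5%
```

A wide relative spread like this signals costly trade execution, which is why, in thin markets, it can track transactional efficiency better than raw counts of bids and asks.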